DYNA Edition 191 - June 2015


DYNA, Journal of the Facultad de Minas, Universidad Nacional de Colombia - Medellín Campus

DYNA 82 (191), June, 2015 - ISSN 0012-7353 Tarifa Postal Reducida No. 2014-287 4-72 La Red Postal de Colombia, Vence 31 de Dic. 2015. FACULTAD DE MINAS


DYNA

http://dyna.medellin.unal.edu.co/

DYNA is an international journal published by the Facultad de Minas, Universidad Nacional de Colombia, Medellín Campus, since 1933. DYNA publishes peer-reviewed scientific articles covering all aspects of engineering. Our objective is the dissemination of original, useful and relevant research presenting new knowledge about theoretical or practical aspects of methodologies and methods used in engineering or leading to improvements in professional practices. All conclusions presented in the articles must be based on the current state of the art and supported by a rigorous analysis and a balanced appraisal. The journal publishes scientific and technological research articles, review articles and case studies.

DYNA publishes articles in the following areas: Organizational Engineering; Civil Engineering; Materials and Mines Engineering; Geosciences and the Environment; Systems and Informatics; Chemistry and Petroleum; Mechatronics; Bio-engineering; and other areas related to engineering.

Publication Information

DYNA (ISSN 0012-7353, printed; ISSN 2346-2183, online) is published by the Facultad de Minas, Universidad Nacional de Colombia, with a bimonthly periodicity (February, April, June, August, October, and December). Circulation License Resolution 000584 of 1976 from the Ministry of the Government.

Contact information
Web page: http://dyna.unalmed.edu.co
E-mail: dyna@unal.edu.co
Mail address: Revista DYNA, Facultad de Minas, Universidad Nacional de Colombia, Medellín Campus, Carrera 80 No. 65-223, Bloque M9 - Of. 107, Medellín, Colombia
Telephone: (574) 4255068
Fax: (574) 4255343

© Copyright 2014, Universidad Nacional de Colombia. The complete or partial reproduction of texts for educational purposes is permitted, provided that the source is duly cited, unless indicated otherwise.

Notice: All statements, methods, instructions and ideas are the sole responsibility of the authors and do not necessarily represent the view of the Universidad Nacional de Colombia. The publisher does not accept responsibility for any injury and/or damage arising from the use of the content of this journal. The concepts and opinions expressed in the articles are the exclusive responsibility of the authors.

Indexing and Databases

DYNA is admitted in:
The National System of Indexation and Homologation of Specialized Journals CT+I - PUBLINDEX, Category A1
Science Citation Index Expanded
Journal Citation Reports - JCR
Science Direct
SCOPUS
Chemical Abstracts - CAS
Scientific Electronic Library Online - SciELO
GEOREF
PERIÓDICA Data Base
Latindex
Actualidad Iberoamericana
RedALyC - Scientific Information System
Directory of Open Access Journals - DOAJ
PASCAL
CAPES
UN Digital Library - SINAB

Publisher's Office
Juan David Velásquez Henao, Director
Mónica del Pilar Rada T., Editorial Coordinator
Catalina Cardona A., Editorial Assistant
Amilkar Álvarez C., Diagrammer
Byron Llano V., Editorial Assistant
Landsoft S.A., IT

Institutional Exchange Request
DYNA may be requested as an institutional exchange through the e-mail canjebib_med@unal.edu.co or at the postal address: Biblioteca Central "Efe Gómez", Universidad Nacional de Colombia, Sede Medellín, Calle 59A No 63-20, Teléfono: (57+4) 430 97 86, Medellín - Colombia.

Reduced Postal Fee
Tarifa Postal Reducida # 2014-287 4-72, La Red Postal de Colombia, expires Dec. 31st, 2015.




COUNCIL OF THE FACULTAD DE MINAS

Dean (e): Pedro Nel Benjumea Hernández, PhD
Vice-Dean: Pedro Nel Benjumea Hernández, PhD
Vice-Dean of Research and Extension: Verónica Botero Fernández, PhD
Director of University Services: Carlos Alberto Graciano, PhD
Academic Secretary: Carlos Alberto Zarate Yepes, PhD
Representative of the Curricular Area Directors: Néstor Ricardo Rojas Reyes, PhD
Representative of the Curricular Area Directors: Abel de Jesús Naranjo Agudelo
Representative of the Basic Units of Academic-Administrative Management: Germán L. García Monsalve, PhD
Representative of the Basic Units of Academic-Administrative Management: Gladys Rocío Bernal Franco, PhD
Professor Representative: Jaime Ignacio Vélez Upegui, PhD
Delegate of the University Council: León Restrepo Mejía, PhD

FACULTY EDITORIAL BOARD

Dean: Pedro Nel Benjumea Hernández, PhD
Vice-Dean of Research and Extension: Verónica Botero Fernández, PhD
Members:
Hernán Darío Álvarez Zapata, PhD
Oscar Jaime Restrepo Baena, PhD
Juan David Velásquez Henao, PhD
Jaime Aguirre Cardona, PhD
Mónica del Pilar Rada Tobón, MSc

JOURNAL EDITORIAL BOARD

Editor-in-Chief: Juan David Velásquez Henao, PhD, Universidad Nacional de Colombia, Colombia

Editors:
George Barbastathis, PhD, Massachusetts Institute of Technology, USA
Tim A. Osswald, PhD, University of Wisconsin, USA
Juan De Pablo, PhD, University of Wisconsin, USA
Hans Christian Öttinger, PhD, Swiss Federal Institute of Technology (ETH), Switzerland
Patrick D. Anderson, PhD, Eindhoven University of Technology, the Netherlands
Igor Emri, PhD, Associate Professor, University of Ljubljana, Slovenia
Dietmar Drummer, PhD, Institute of Polymer Technology, University Erlangen-Nürnberg, Germany
Ting-Chung Poon, PhD, Virginia Polytechnic Institute and State University, USA
Pierre Boulanger, PhD, University of Alberta, Canadá
Jordi Payá Bernabeu, PhD, Instituto de Ciencia y Tecnología del Hormigón (ICITECH), Universitat Politècnica de València, España
Javier Belzunce Varela, PhD, Universidad de Oviedo, España
Luis Gonzaga Santos Sobral, PhD, Centro de Tecnología Mineral - CETEM, Brasil
Agustín Bueno, PhD, Universidad de Alicante, España
Henrique Lorenzo Cimadevila, PhD, Universidad de Vigo, España
Mauricio Trujillo, PhD, Universidad Nacional Autónoma de México, México
Carlos Palacio, PhD, Universidad de Antioquia, Colombia
Jorge Garcia-Sucerquia, PhD, Universidad Nacional de Colombia, Colombia
Juan Pablo Hernández, PhD, Universidad Nacional de Colombia, Colombia
John Willian Branch Bedoya, PhD, Universidad Nacional de Colombia, Colombia
Enrique Posada, MSc, INDISA S.A., Colombia
Oscar Jaime Restrepo Baena, PhD, Universidad Nacional de Colombia, Colombia
Moisés Oswaldo Bustamante Rúa, PhD, Universidad Nacional de Colombia, Colombia
Hernán Darío Álvarez, PhD, Universidad Nacional de Colombia, Colombia
Jaime Aguirre Cardona, PhD, Universidad Nacional de Colombia, Colombia



DYNA 82 (191), June, 2015. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

CONTENTS

Editorial: New publishing rules for journals edited and published by the Facultad de Minas
Juan David Velásquez Henao   9

A new synthesis procedure for TOPSIS based on AHP
Juan Aguarón-Joven, María Teresa Escobar-Urmeneta, Jorge Luis García-Alcaraz, José María Moreno-Jiménez & Alberto Vega-Bonilla   11

Capacitated vehicle routing problem for PSS uses based on ubiquitous computing: An emerging markets approach
Alberto Ochoa-Ortíz, Francisco Ornelas-Zapata, Lourdes Margain-Fuentes, Miguel Gastón Cedillo-Campos, Jöns Sánchez-Aguilar, Rubén Jaramillo-Vacio & Isabel Ávila   20

Structural analysis for the identification of key variables in the Ruta del Oro, Nariño, Colombia
Aida Mercedes Delgado-Martínez & Freddy Pantoja-Timarán   27

Intuitionistic fuzzy MOORA for supplier selection
Luis Pérez-Domínguez, Alejandro Alvarado-Iniesta, Iván Rodríguez-Borbón & Osslan Vergara-Villegas   34

A relax and cut approach using the multi-commodity flow formulation for the traveling salesman problem
Makswell Seyiti Kawashima, Socorro Rangel, Igor Litvinchev & Luis Infante   42

A genetic algorithm to solve a three-echelon capacitated location problem for a distribution center within a solid waste management system in the northern region of Veracruz, Mexico
María del Rosario Pérez-Salazar, Nicolás Francisco Mateo-Díaz, Rogelio García-Rodríguez, Carlos Eusebio Mar-Orozco & Lidilia Cruz-Rivero   51

Short-term generation planning by primal and dual decomposition techniques
José Antonio Marmolejo-Saucedo & Román Rodríguez-Aguilar   58

Technical efficiency of thermal power units through a stochastic frontier
José Antonio Marmolejo-Saucedo, Román Rodríguez-Aguilar, Miguel Gastón Cedillo-Campos & María Soledad Salazar-Martínez   63

Optimization of the distribution of steel pipes using a mathematical model
Miguel Mata-Pérez & Jania Astrid Saucedo-Martínez   69

Effects of management commitment and organization of work teams on the benefits of Kaizen: Planning stage
Midiala Oropesa-Vento, Jorge Luis García-Alcaraz, Leonardo Rivera & Diego F. Manotas   76

A framework to evaluate over-costs in natural resources logistics chains
Gabriel Pérez-Salas, Rosa G. González-Ramírez & Miguel Gastón Cedillo-Campos   85

The role of sourcing service agents in the competitiveness of Mexico as an international sourcing region
María del Pilar Ester Arroyo-López & José Antonio Ramos-Rangel   93

Modeling of CO2 vapor-liquid equilibrium in Colombian heavy oil using SARA analysis
Oscar Ramírez, Carolina Betancur, Bibian Hoyos & Carlos Naranjo   103

R&D best practices, absorptive capacity and project success
Silvia Vicente-Oliva, Ángel Martínez-Sánchez & Luis Berges-Muro   109

Technologies for the removal of dyes and pigments present in wastewater. A review
Leonardo Fabio Barrios-Ziolo, Luisa Fernanda Gaviria-Restrepo, Edison Alexander Agudelo & Santiago Alonso Cardona-Gallo   118

Compromise solutions in mining method selection - case study in Colombian coal mining
Jorge Iván Romero-Gélvez, Félix Antonio Cortes-Aldana & Giovanni Franco-Sepúlveda   127

MGE2: A framework for cradle-to-cradle design
María-Estela Peralta-Álvarez, Francisco Aguayo-González, Juan-Ramón Lama-Ruiz & María Jesús Ávila-Gutiérrez   137

CrN coatings deposited by magnetron sputtering: Mechanical and tribological properties
Alexander Ruden Muñoz, Elisabeth Restrepo-Parra & Federico Sequeda   147

Weibull accelerated life testing analysis with several variables using multiple linear regression
Manuel R. Piña-Monarrez, Carlos A. Ávila-Chávez & Carlos D. Márquez-Luévano   156

State of the art of ergonomic costs as criterion for evaluating and improving organizational performance in industry
Silvana Duarte-dos Santos, Antonio Renato Pereira-Moro & Leonardo Ensslin   163

Study of the behavior of sugarcane bagasse submitted to cutting
Nelson Arzola & Joyner García   171

Contribution of the analysis of technical efficiency to the improvement of the management of services
David López-Berzosa, Carmen de Pablos-Heredero & Carlos Fernández-Rened   176

Evaluation of the kinetics of oxidation and removal of organic matter in the self-purification of a mountain river
Jorge Virgilio Rivera-Gutiérrez   183

Comfort perception assessment in persons with transfemoral amputation
Juan Fernando Ramírez-Patiño, Derly Faviana Gutiérrez-Rôa & Alexander Alberto Correa-Espinal   194

The water budget and modeling of the Montes Torozos' karst aquifer (Valladolid, Spain)
Germán Sanz-Lobón, Roberto Martínez-Alegría, Javier Taboada, Teresa Albuquerque, Margarida Antunes & Isabel Montequi   203

Determination of the tide constituents at Livingston and Deception Islands (South Shetland Islands, Antarctica), using annual time series
Bismarck Jigena-Antelo, Juan Vidal & Manuel Berrocoso   209

Effect of cellulose nanofibers concentration on mechanical, optical, and barrier properties of gelatin-based edible films
Ricardo David Andrade-Pizarro, Olivier Skurtys & Fernando Osorio-Lira   219

Tribological properties of BixTiyOz films grown via RF sputtering on 316L steel substrates
Johanna Parra, Oscar Piamba, Jhon Olaya & José Edgar Alfonso   227

Analysis of energy saving in industrial LED lighting: A case study
Ana Serrano-Tierz, Abelardo Martínez-Iturbe, Oscar Guarddon-Muñoz & José Luis Santolaya-Sáenz   231

Matrix multiplication with a hypercube algorithm on multi-core processor cluster
José Crispín Zavala-Díaz, Joaquín Pérez-Ortega, Efraín Salazar-Reséndiz & César Guadarrama-Rogel   240

Our cover
Image alluding to the article: Analysis of energy saving in industrial LED lighting: A case study
Authors: Ana Serrano-Tierz, Abelardo Martínez-Iturbe, Oscar Guarddon-Muñoz & José Luis Santolaya-Sáenz


DYNA 82 (191), June, 2015. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online

CONTENIDO

Editorial: Nuevas reglas editoriales para las revistas editadas y publicadas por la Facultad de Minas
Juan David Velásquez Henao   9

Un nuevo procedimiento de síntesis para TOPSIS basado en AHP
Juan Aguarón-Joven, María Teresa Escobar-Urmeneta, Jorge Luis García-Alcaraz, José María Moreno-Jiménez & Alberto Vega-Bonilla   11

Problema de enrutamiento de vehículos basado en su capacidad para SPS utilizando cómputo ubicuo: Un enfoque de mercados emergentes
Alberto Ochoa-Ortíz, Francisco Ornelas-Zapata, Lourdes Margain-Fuentes, Miguel Gastón Cedillo-Campos, Jöns Sánchez-Aguilar, Rubén Jaramillo-Vacio & Isabel Ávila   20

Análisis estructural para la identificación de variables claves en la Ruta del Oro, Nariño, Colombia
Aida Mercedes Delgado-Martínez & Freddy Pantoja-Timarán   27

MOORA versión difuso intuicionista para la selección de proveedores
Luis Pérez-Domínguez, Alejandro Alvarado-Iniesta, Iván Rodríguez-Borbón & Osslan Vergara-Villegas   34

Un enfoque relax and cut usando una formulación de flujo multiproductos para el problema del agente viajero
Makswell Seyiti Kawashima, Socorro Rangel, Igor Litvinchev & Luis Infante   42

Algoritmo genético para resolver el problema de localización de instalaciones capacitado en una cadena de tres eslabones para un centro de distribución dentro de un sistema de gestión de residuos sólidos en la región norte de Veracruz, México
María del Rosario Pérez-Salazar, Nicolás Francisco Mateo-Díaz, Rogelio García-Rodríguez, Carlos Eusebio Mar-Orozco & Lidilia Cruz-Rivero   51

Planeación de la generación a corto plazo mediante técnicas de descomposición primal y dual
José Antonio Marmolejo-Saucedo & Román Rodríguez-Aguilar   58

Eficiencia técnica de unidades de generación termoeléctrica mediante una frontera estocástica
José Antonio Marmolejo-Saucedo, Román Rodríguez-Aguilar, Miguel Gastón Cedillo-Campos & María Soledad Salazar-Martínez   63

Optimización de la distribución de tubería de acero mediante un modelo matemático
Miguel Mata-Pérez & Jania Astrid Saucedo-Martínez   69

Efectos del compromiso gerencial y organización de equipos de trabajo en los beneficios del Kaizen: Etapa de planeación
Midiala Oropesa-Vento, Jorge Luis García-Alcaraz, Leonardo Rivera & Diego F. Manotas   76

Un marco de referencia para evaluar los extra-costos en cadenas logísticas de recursos naturales
Gabriel Pérez-Salas, Rosa G. González-Ramírez & Miguel Gastón Cedillo-Campos   85

El papel de los proveedores de servicios de abasto para la competitividad de México como una región de abastecimiento internacional
María del Pilar Ester Arroyo-López & José Antonio Ramos-Rangel   93

Modelado del equilibrio líquido-vapor CO2-crudo pesado colombiano con análisis SARA
Oscar Ramírez, Carolina Betancur, Bibian Hoyos & Carlos Naranjo   103

Buenas prácticas en la gestión de proyectos de I+D+i, capacidad de absorción de conocimiento y éxito
Silvia Vicente-Oliva, Ángel Martínez-Sánchez & Luis Berges-Muro   109

Technologies for the removal of dyes and pigments present in wastewater. A review
Leonardo Fabio Barrios-Ziolo, Luisa Fernanda Gaviria-Restrepo, Edison Alexander Agudelo & Santiago Alonso Cardona-Gallo   118

Compromise solutions in mining method selection - case study in Colombian coal mining
Jorge Iván Romero-Gélvez, Félix Antonio Cortes-Aldana & Giovanni Franco-Sepúlveda   127

MGE2: Un marco de referencia para el diseño de la cuna a la cuna
María-Estela Peralta-Álvarez, Francisco Aguayo-González, Juan-Ramón Lama-Ruiz & María Jesús Ávila-Gutiérrez   137

Recubrimientos de CrN depositados por pulverización catódica con magnetrón: Propiedades mecánicas y tribológicas
Alexander Ruden Muñoz, Elisabeth Restrepo-Parra & Federico Sequeda   147

Análisis de pruebas de vida acelerada Weibull con varias variables utilizando regresión lineal múltiple
Manuel R. Piña-Monarrez, Carlos A. Ávila-Chávez & Carlos D. Márquez-Luévano   156

Estado del arte de los costos ergonómicos como un criterio para la evaluación y mejora del desempeño organizacional en la industria
Silvana Duarte-dos Santos, Antonio Renato Pereira-Moro & Leonardo Ensslin   163

Estudio del comportamiento del bagazo de caña de azúcar sometido a corte
Nelson Arzola & Joyner García   171

Contribución del análisis de la eficiencia técnica a la mejora en la gestión de servicios
David López-Berzosa, Carmen de Pablos-Heredero & Carlos Fernández-Rened   176

Evaluación de la cinética de oxidación y remoción de materia orgánica en la autopurificación de un río de montaña
Jorge Virgilio Rivera-Gutiérrez   183

Valoración de la percepción de confort en personas con amputación transfemoral
Juan Fernando Ramírez-Patiño, Derly Faviana Gutiérrez-Rôa & Alexander Alberto Correa-Espinal   194

Balance y modelización del acuífero kárstico de los Montes Torozos (Valladolid, España)
Germán Sanz-Lobón, Roberto Martínez-Alegría, Javier Taboada, Teresa Albuquerque, Margarida Antunes & Isabel Montequi   203

Determinación de las constituyentes de marea en las Islas Livingston y Decepción (Islas Shetland del Sur, Antártida), usando series temporales anuales
Bismarck Jigena-Antelo, Juan Vidal & Manuel Berrocoso   209

Efecto de la concentración de nanofibras de celulosa sobre las propiedades mecánicas, ópticas y de barrera en películas comestibles de gelatina
Ricardo David Andrade-Pizarro, Olivier Skurtys & Fernando Osorio-Lira   219

Propiedades tribológicas de películas de BixTiyOz producidas por RF sputtering sobre sustratos de acero 316L
Johanna Parra, Oscar Piamba, Jhon Olaya & José Edgar Alfonso   227

Análisis de ahorro energético en iluminación LED industrial: Un estudio de caso
Ana Serrano-Tierz, Abelardo Martínez-Iturbe, Oscar Guarddon-Muñoz & José Luis Santolaya-Sáenz   231

Multiplicación de matrices con un algoritmo hipercubo en un cluster con procesadores multi-core
José Crispín Zavala-Díaz, Joaquín Pérez-Ortega, Efraín Salazar-Reséndiz & César Guadarrama-Rogel   240

Nuestra carátula
Imagen alusiva al artículo: Análisis de ahorro energético en iluminación LED industrial: Un estudio de caso
Autores: Ana Serrano-Tierz, Abelardo Martínez-Iturbe, Oscar Guarddon-Muñoz & José Luis Santolaya-Sáenz


A new synthesis procedure for TOPSIS based on AHP

Juan Aguarón-Joven a, María Teresa Escobar-Urmeneta b, Jorge Luis García-Alcaraz c, José María Moreno-Jiménez d* & Alberto Vega-Bonilla e

a Zaragoza Multicriteria Decision Making Group, Faculty of Economics and Business, University of Zaragoza, Spain. aguaron@unizar.es
b Zaragoza Multicriteria Decision Making Group, Faculty of Economics and Business, University of Zaragoza, Spain. mescobar@unizar.es
c Department of Industrial Engineering and Manufacturing, Autonomous University of Ciudad Juárez, Ciudad Juárez, México. jorge.garcia@uacj.mx
d Zaragoza Multicriteria Decision Making Group, Faculty of Economics and Business, University of Zaragoza, Spain. moreno@unizar.es
e Department of Industrial Engineering and Manufacturing, Autonomous University of Ciudad Juárez, Ciudad Juárez, México. albertovega18@gmail.com

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract
Vega et al. [1] analyzed the influence of the attributes' dependence when ranking a set of alternatives in a multicriteria decision making problem with TOPSIS. They also proposed the use of the Mahalanobis distance to incorporate the correlations among the attributes in TOPSIS. Even in those situations for which dependence among attributes is very slight, the results obtained for the Mahalanobis distance are significantly different from those obtained with the Euclidean distance, traditionally used in TOPSIS, and also from results obtained using any other distance of the Minkowsky family. This raises serious doubts regarding the selection of the distance that should be employed in each case. To deal with the problem of the attributes' dependence and the question of the selection of the most appropriate distance measure, this paper proposes a new method for synthesizing the distances to the ideal and the anti-ideal in TOPSIS. The new procedure is based on the Analytic Hierarchy Process and is able to capture the relative importance of both distances in the context given by the measure that is considered; it also provides rankings for the different distances employed in TOPSIS that are closer to each other, regardless of the dependence among the attributes. The new technique has been applied to the illustrative example employed in Vega et al. [1].
Keywords: Multicriteria Decision Making, TOPSIS, AHP, Dependence, Synthesis.

Un nuevo procedimiento de síntesis para TOPSIS basado en AHP Resumen Vega et al. [1] analizan la influencia que tiene la dependencia entre atributos al ordenar con TOPSIS un conjunto de alternativas en un problema de decisión multicriterio. Asimismo, estos autores proponen utilizar la distancia de Mahalanobis en TOPSIS para incorporar las correlaciones entre los atributos. El problema es que, incluso en situaciones en las que la dependencia entre atributos es muy pequeña, los resultados obtenidos utilizando la distancia de Mahalanobis difieren significativamente de los obtenidos con la distancia euclídea tradicionalmente empleada en TOPSIS, así como de los obtenidos con cualquier otra distancia de la familia de Minkowsky. Este hecho provoca serias dudas a la hora de seleccionar la distancia que debe utilizarse en cada caso. Para abordar el problema de la dependencia entre atributos y el asociado con la selección de la distancia más apropiada, este trabajo propone utilizar una nueva forma de sintetizar las distancias al ideal y anti-ideal en TOPSIS. Este nuevo procedimiento, basado en el Proceso Analítico Jerárquico (AHP), permite capturar la importancia relativa de ambas distancias en el contexto delimitado por la medida considerada y proporciona ordenaciones más cercanas que las de la síntesis tradicional para las diferentes distancias empleadas con TOPSIS, independientemente de la existencia de dependencia entre atributos. El procedimiento propuesto ha sido aplicado al ejemplo seleccionado por Vega et al. [1]. Palabras clave: Decisión Multicriterio, TOPSIS, AHP, Dependencia, Síntesis.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 11-19. June, 2015. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online. DOI: http://dx.doi.org/10.15446/dyna.v82n191.51140

1. Introduction

TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is a multicriteria decision making technique used for ranking and selecting the best alternative from a discrete group (Ai, i=1,…,m) [2,3]. This technique is based on the minimization of geometric distances from the alternatives to the ideal (A+) and anti-ideal (A-) solutions. In order to calculate the relative proximity (Ri) from Ai to A+ and A-, traditional TOPSIS [2] uses the Euclidean distance, which implicitly assumes that the attributes contemplated for ordering alternatives are independent, and an indicator, or proximity index (Ri), which synthesizes (see Section 2.1.3) the information of both distances as a ratio ($R_i = d_i^+/(d_i^+ + d_i^-)$). Unfortunately, independence of attributes rarely occurs in the real-life cases to which the technique is applied [4]. Moreover, its study is especially complex and difficult.

After analyzing the relevance of this topic, Vega et al. [1] adapted TOPSIS to the consideration of dependent attributes by means of the reformulation of Hwang and Yoon's initial proposal [3]. The modification comprised a new measurement of ideal and anti-ideal distances, based on the Mahalanobis distance [5,6], which captures the correlation between the attributes [7] and eliminates the common problem of data normalization. This reformulation provides very different results to those obtained with the Euclidean distance even if the dependence among the attributes is very slight [1].

To deal with this conflicting point, this paper proposes a new method, based on the Analytic Hierarchy Process (AHP), for synthesizing the contribution of the two distances ($d_i^+$ and $d_i^-$) to the final ordering. The new synthesizing procedure allows the consideration of both aspects with different relative importance and without the difficulties associated with a quotient. The new relative importance index ($W_i$) for each alternative (see Section 4 for details), obtained as $W_i = w^+ w(d_i^+) + w^- w(d_i^-)$, reduces the divergence between the rankings that result from the Euclidean and Mahalanobis distances.

The structure of the remainder of this paper is as follows: Section 2 briefly describes the multicriteria decision making techniques that use the minimization of distances as methodological support; Section 3 includes the proposal of Vega et al. [1] for dealing with the dependence among the attributes; Section 4 presents the new synthesis process, based on the Analytic Hierarchy Process, that is proposed for TOPSIS, and further includes an illustrative example; finally, Section 5 briefly details the most important conclusions of the work.

2. Background

2.1. Multicriteria decision making techniques based on the minimization of distances

Multicriteria Decision Making can be understood as a series of models, methods and techniques that allow a more effective and realistic solution to complex problems that contemplate multiple scenarios, actors and (tangible and intangible) criteria [8,9]. A variety of multicriteria decision approaches are mentioned in the scientific literature [10]. Despite the diversity of the techniques and the many arguments and discussions that have taken place regarding the different schools and approaches, there is no general agreement that a particular technique is superior to the others [11]. Moreover, in the last decade, debates between the different schools have been replaced by attempts to take advantage of the best elements of each approach in order to develop the most effective technique.

With respect to the multicriteria techniques based on distance minimization, the original and most commonly used is Compromise Programming [10,12-13]. This technique, with a priori information about the decision maker's preferences (norms and weights), simultaneously works with all the criteria and seeks a solution that minimizes the distance to the ideal point. Let (1) be a multi-objective optimization problem where, without losing generality, it is supposed that all the q contemplated criteria are maximized:

$\max_{x \in X} \; \{ f_1(x), \ldots, f_q(x) \}$    (1)

The compromise solution is obtained by resolving the optimization problem that minimizes the distance to the ideal point or vector $f^* = (f_1^*, \ldots, f_q^*)$, where that distance is usually given by a Minkowski distance expression:

$\min_{x \in X} d_p(f(x), f^*) = \min_{x \in X} \left( \sum_{j=1}^{q} w_j^p \, |f_j^* - f_j(x)|^p \right)^{1/p}$    (2)

Given that $X$ is the feasible set, $p \in \{1, 2, \ldots, \infty\}$ is the norm considered for the distance, $w_j > 0$ is the weight of criterion $j$ and $f^*$ is the ideal vector where each component $f_j^*$ is the individualised optimum of criterion $j = 1, \ldots, q$, we have:

$f_j^* = \max_{x \in X} f_j(x), \quad j = 1, \ldots, q$    (3)

When $p \to \infty$, the expression of the Minkowski distance is known as the Tchebycheff distance; in this case (2) is:

$\min_{x \in X} d_\infty(f(x), f^*) = \min_{x \in X} \max_{j = 1, \ldots, q} w_j \, |f_j^* - f_j(x)|$    (4)

For reasons of operational functionality, the most commonly used Minkowski norms are: p=1 (Manhattan distance), p=2 (Euclidean distance) and p=∞ (Tchebycheff distance). In the first case, the optimization problem is linear, in the second it is quadratic and in the third case, the model can be easily transformed into a linear one. Other well-known multicriteria techniques based on the minimization of distances that have been widely used in discrete multicriteria decision-making are: Goal Programming [14], VIKOR [15] and TOPSIS.

2.1.1. Goal programming (GP)

GP is a multicriteria technique that uses the distance minimization concept, but is more focused on the concept of satisfaction, moving away from the traditional concept of optimization. GP integrates a set of constraints that represent some resource limitations or capacities, and can be represented according to (5):

$\min \sum_{i=1}^{q} (n_i + p_i) \quad \text{s.t.} \quad f_i(x) + n_i - p_i = t_i, \; i = 1, \ldots, q; \quad x \in X$    (5)

where $t_i$ is the target or aspiration level fixed for goal $i$, and $n_i \geq 0$ and $p_i \geq 0$ are, respectively, the under- and over-achievement deviation variables with respect to that target ($n_i \cdot p_i = 0$, $i = 1, \ldots, q$).
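The following sketch states a small goal program of the form (5) as a linear program using scipy; the two goals, their targets and the resource constraint are hypothetical, introduced only for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Decision vector z = [x1, x2, n1, p1, n2, p2]; n_i / p_i are the under-
# and over-achievement deviations of goal i, as in formulation (5).
# Hypothetical goals: x1 + 2*x2 -> 10 and 3*x1 + x2 -> 12,
# with the resource constraint x1 + x2 <= 8.
c = [0, 0, 1, 1, 1, 1]                  # minimize the total deviation
A_eq = [[1, 2, 1, -1, 0, 0],            # f1(x) + n1 - p1 = t1
        [3, 1, 0, 0, 1, -1]]            # f2(x) + n2 - p2 = t2
b_eq = [10, 12]
A_ub = [[1, 1, 0, 0, 0, 0]]             # x in X: x1 + x2 <= 8
b_ub = [8]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
x1, x2, n1, p1, n2, p2 = res.x
print(f"x = ({x1:.2f}, {x2:.2f}), total deviation = {res.fun:.2f}")
```

In this toy instance both targets can be met exactly, so the optimal total deviation is zero; tightening the resource constraint would force positive deviations, illustrating the satisfaction-oriented logic of GP.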

2.1.2. VIKOR technique

VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje in Serbian) was originally proposed by Serafim Opricovic in his Ph.D. dissertation (1979), aimed at resolving complex decision problems with conflicting and non-commensurable criteria [15]. For the ranking of the alternatives and the selection of the best one, it uses an index that measures the proximity to the ideal solution. The distance employed for measuring the proximity belongs to the Lp-Minkowsky metric that is traditionally used in Compromise Programming [12-13].

When seeking the integration of all the attributes in the ranking process, it is necessary to express them in dimensionless scales. The normalization mode used by VIKOR is utility normalization:

$d_{ij} = \dfrac{f_j^* - f_{ij}}{f_j^* - f_j^-}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n$    (6)

$f_j^* = \max_i f_{ij}, \quad f_j^- = \min_i f_{ij}, \quad j = 1, \ldots, n$    (7)

With dimensionless measures, the next step is to estimate the measurement of satisfaction (Si) and regret (Ri):

$S_i = \sum_{j=1}^{n} w_j \, d_{ij}, \quad i = 1, \ldots, m$    (8)

$R_i = \max_{j} \, w_j \, d_{ij}, \quad i = 1, \ldots, m$    (9)

Using these values, VIKOR estimates a proximity index for every alternative:

$Q_i = v \, \dfrac{S_i - S^*}{S^- - S^*} + (1 - v) \, \dfrac{R_i - R^*}{R^- - R^*}, \quad i = 1, \ldots, m$    (10)

where $S^* = \min_i S_i$, $S^- = \max_i S_i$, $R^* = \min_i R_i$, $R^- = \max_i R_i$; $v$ is the weight associated with the normalised difference from the best collective strategy (best solution with p=1) and $1 - v$ is the weight associated with the normalised difference to the best individual rejection strategy (best solution with p=∞).

These weights take values in a range between 0 and 1. For example, $v = 0.5$ indicates consensus among decision makers. If $v > 0.5$, the majority have more weight in the decision process; but if $v < 0.5$, the minority have more weight, producing a veto effect. The results for $S_i$, $R_i$ and $Q_i$ are three lists that allow the alternatives to be ranked in descending order.
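A compact sketch of eqs. (6)-(10), under the assumption that all criteria are maximized; the decision matrix and weights below are illustrative:

```python
import numpy as np

def vikor(X, w, v=0.5):
    """Sketch of the VIKOR measures: S (satisfaction), R (regret) and the
    proximity index Q of eqs. (6)-(10); lower values rank first."""
    X = np.asarray(X, dtype=float)
    f_star, f_minus = X.max(axis=0), X.min(axis=0)
    D = w * (f_star - X) / (f_star - f_minus)   # utility normalization (6)
    S = D.sum(axis=1)                           # eq. (8), p = 1
    R = D.max(axis=1)                           # eq. (9), p = infinity
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))  # eq. (10)
    return S, R, Q

# Hypothetical decision matrix: 4 alternatives x 3 maximized criteria
X = [[7, 5, 9], [8, 6, 4], [6, 9, 6], [9, 4, 7]]
w = np.array([0.5, 0.3, 0.2])
S, R, Q = vikor(X, w)
print("ranking by Q:", Q.argsort() + 1)
```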

2.1.3. Traditional TOPSIS technique (TOPSIS-T)

Table 1. TOPSIS decision matrix

          C1      C2      …      Cn
Weights   w1      w2      …      wn
A1        x11     x12     …      x1n
A2        x21     x22     …      x2n
…         …       …       …      …
Am        xm1     xm2     …      xmn

Source: Authors

Given a discrete multicriteria decision problem with alternatives $A_i$, $i = 1, \ldots, m$, evaluated with respect to criteria $C_j$, $j = 1, \ldots, n$, each element $x_{ij}$ in Table 1 represents the value associated with alternative $A_i$ for the attribute or criterion $C_j$, of which the weight or importance is $w_j$. Traditional TOPSIS (TOPSIS-T) contemplates each alternative or object as a point or vector of an n-dimensional space (see the decision matrix in Table 1). TOPSIS-T calculates the Euclidean distance between the normalized values (with the distributive mode) of the initial alternatives and those of two special alternatives: the ideal ($A^+$) and the anti-ideal ($A^-$), on the understanding that the best alternatives are those which are closer to the ideal and further away from the anti-ideal. To apply this technique, the attributes' values should be numerical and have commensurable units. TOPSIS implicitly assumes that the contemplated attributes are independent [16,17]. Unfortunately, this rarely occurs in the real-life cases where the technique is applied.

The procedure is better described in the following steps, as suggested by Hwang and Yoon [3] in their original proposal (traditional TOPSIS):

Step 1. Calculate the normalized decision matrix. As TOPSIS allows the evaluated criteria to be expressed in different measurement units, it is necessary to convert them into normalized values. The normalization process, like the metric used to calculate the ideal and anti-ideal distances, is the Euclidean one. In this case, the normalization of element $x_{ij}$ of the decision matrix (Euclidean normalisation mode, $\sum_{i=1}^{m} n_{ij}^2 = 1$) is calculated as:

$n_{ij} = \dfrac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n$    (11)

Step 2. Calculate the weighted normalized decision matrix.


The weighted normalized value $v_{ij}$ is calculated as:

$v_{ij} = w_j \, n_{ij}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n$

with $\sum_{j=1}^{n} w_j = 1$. The weights can be obtained by means of different procedures [10]: direct assignation, AHP, etc.

Step 3. Determine the "positive ideal" and "negative ideal" alternatives. Without losing generality and supposing that all the criteria are maximized, the ideal positive solution is given by $A^+ = (v_1^+, \ldots, v_n^+)$, where $v_j^+ = \max_i v_{ij}$, $j = 1, \ldots, n$, and the ideal negative or anti-ideal solution is given by $A^- = (v_1^-, \ldots, v_n^-)$, where $v_j^- = \min_i v_{ij}$, $j = 1, \ldots, n$.

Step 4. Calculate the distances from the ideal and anti-ideal solutions. The separation of each alternative $A_i$ from the ideal solution is calculated as $d_i^+ = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^+)^2}$, $i = 1, \ldots, m$. In a similar way, the separation of each alternative $A_i$ from the anti-ideal solution is calculated as $d_i^- = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^-)^2}$, $i = 1, \ldots, m$.

Step 5. Calculate the relative proximity to the ideal solution. The relative proximity from $A_i$ to $A^+$ and $A^-$ is given by:

$R_i = \dfrac{d_i^+}{d_i^+ + d_i^-}, \quad i = 1, \ldots, m$    (12)

where $R_i$ is named the proximity index and low values are better.

Step 6. Preference order. Finally, $R_i$ is used to rank the alternatives; the nearer the value of the proximity index $R_i$ is to 0, the greater its proximity to the ideal and the higher its priority. In short, $A_i \succ A_j \Leftrightarrow R_i < R_j$.
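The six steps above condense into a short sketch (illustrative data; all criteria assumed maximized):

```python
import numpy as np

def topsis_t(X, w):
    """Traditional TOPSIS (Hwang and Yoon [3]) for maximized criteria:
    Euclidean normalization (11), weighting, ideal/anti-ideal points,
    Euclidean separations and the proximity index (12), lower is better."""
    X = np.asarray(X, dtype=float)
    N = X / np.sqrt((X ** 2).sum(axis=0))            # Step 1: eq. (11)
    V = w * N                                        # Step 2
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)      # Step 3: A+ and A-
    d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))  # Step 4
    d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))
    return d_pos / (d_pos + d_neg)                   # Step 5: eq. (12)

# Hypothetical decision matrix: 4 alternatives x 3 maximized criteria
X = [[7, 5, 9], [8, 6, 4], [6, 9, 6], [9, 4, 7]]
w = np.array([0.5, 0.3, 0.2])
R = topsis_t(X, w)
print("Ri:", np.round(R, 3), "-> ranking:", R.argsort() + 1)  # Step 6
```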

2.2. The Analytic Hierarchy Process (AHP)

The Analytic Hierarchy Process (AHP) is a multicriteria decision making methodology created by the mathematician Thomas Saaty in the 1970s. It deals with the multicriteria ranking and selection of a discrete set of alternatives in a context with multiple scenarios, actors and criteria (tangible and intangible). The AHP methodology [18-19] comprises the following stages: (i) Modeling the problem: the construction of a hierarchy, the identification of the mission, the relevant criteria for its execution, the sub-criteria for each criterion, the actors and the alternatives. (ii) Valuation: the incorporation of the actors' preferences by means of pairwise comparisons between the elements of the hierarchy that hang from the same node; this process uses judgments from Saaty's fundamental scale (see Table 2) and the result is a square, reciprocal and positive pairwise comparison matrix that reflects the relative importance of two elements with respect to the common element in the higher level of the hierarchy. (iii) Prioritization and synthesis: the determination of the local and global priorities for the elements of the hierarchy and the total priorities for the alternatives of the problem. Saaty's Eigen Vector method (EGV) and the Row Geometric Mean method (RGM) are the two most common prioritization procedures [18].

Table 2. Saaty's Fundamental Scale [18]

Numerical scale   Verbal definition                                              Explanation
1                 Equal importance                                               Both elements meet criteria by contributing equally to the objective.
3                 Moderate importance: one element a little more important       Judgment and experience favor one of the elements.
                  than the other
5                 Strong importance                                              Judgment and experience strongly favor one element over the other.
7                 Very strong importance                                         One element is much more important and its dominance is demonstrated in practice.
9                 Extreme importance                                             The evidence favoring one element over another is absolute.
2, 4, 6, 8        Intermediate numeric values                                    Reflect intermediate categories.

Source: Saaty (1980) [18]

One of the characteristics which differentiates this methodology from other multicriteria approaches is that AHP measures the inconsistency of the actors when eliciting the judgments of the pairwise comparison matrices in a formal, elegant and intrinsic manner, linked to the mathematical prioritization procedure. Saaty's Consistency Ratio (CR) and Aguarón & Moreno-Jiménez's Geometric Consistency Index (GCI) are the two inconsistency measures usually employed with the EGV and the RGM prioritization procedures, respectively [20].
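A sketch of the RGM prioritization and the GCI inconsistency measure mentioned above; the 3x3 judgment matrix is hypothetical, and the GCI is coded in its usual form (mean squared log-error of the judgments against the RGM priority ratios):

```python
import numpy as np

def rgm_priorities(A):
    """Row Geometric Mean priorities of a positive reciprocal pairwise
    comparison matrix, normalized to sum to one."""
    A = np.asarray(A, dtype=float)
    g = np.exp(np.log(A).mean(axis=1))        # geometric mean of each row
    return g / g.sum()

def gci(A):
    """Aguarón & Moreno-Jiménez's Geometric Consistency Index [20]:
    GCI = 2/((n-1)(n-2)) * sum_{i<j} log^2(a_ij * w_j / w_i)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = rgm_priorities(A)
    E = np.log(A * np.outer(1.0 / w, w))      # log(a_ij * w_j / w_i)
    return 2.0 * (E[np.triu_indices(n, 1)] ** 2).sum() / ((n - 1) * (n - 2))

# Hypothetical 3x3 judgment matrix built from Saaty's scale (Table 2)
A = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
print("priorities:", np.round(rgm_priorities(A), 3), " GCI:", round(gci(A), 4))
```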

3. Dependent attributes in TOPSIS. TOPSIS-M

It is well known that all multi-attribute techniques may be improved, depending on the theory on which they are based [21,22]. TOPSIS is no exception. One of its main limitations is the correlation between attributes; in other words, this technique assumes that all the attributes are independent and this is not always the case.

There are several ways of measuring the dependence among the attributes [23]. Some of the most widely used are: the scatterplots matrix, the correlation matrix, variance inflation factors, condition numbers [24,25] and the Gleason-Staelin indicator. The scatterplots matrix is a visual tool that plots each attribute against the other in matrix form; this allows the observation of patterns or linear trends that helps to determine the dependence between attributes. The correlation matrix, $\Sigma = (\rho_{ij})$, is a square (n×n) symmetrical matrix in which each entry $\rho_{ij}$ is the correlation between attributes i and j ($\rho_{ij} \in [-1, 1]$). The Variance Inflation Factor (VIF) for attribute j is given by $VIF_j = 1/(1 - R_j^2)$, where $R_j^2$ is the coefficient of determination of attribute j over the others in a multiple linear regression. Values of VIF greater than 10 indicate strong dependence in the data [26]. The Condition Numbers are calculated [27] as $\eta_j = \lambda_{max}/\lambda_{min}$, where $\lambda_{max}$ and $\lambda_{min}$ are the maximum and minimum eigen-values of the $X^T X$ matrix. If any of the condition numbers calculated for the set of attributes is greater than 1000, it indicates that there is no independence between the attributes. The Gleason-Staelin redundancy measure (Phi) is given by [28]:

$\Phi = \sqrt{\dfrac{\sum_{i=1}^{n}\sum_{j=1}^{n} r_{ij}^2 - n}{n(n-1)}}$    (13)

where the $r_{ij}$ (i, j = 1, …, n) are the correlations between the attributes. If this indicator is higher than 0.5, it is an indication of redundancy and dependence in the data.

To deal with the problem of dependence among the attributes, Vega et al. [1] proposed an extension of TOPSIS-T, named TOPSIS-M, which uses the Mahalanobis distance instead of the Euclidean distance in Step 4 of the previous algorithm. TOPSIS-M solves two of the limitations of TOPSIS-T: (i) the redundancy provoked by the dependence when measuring the proximity with the Euclidean distance and (ii) the problem of selecting the appropriate mode for normalizing the data. When using the Mahalanobis distance it is not necessary to normalize the initial data: the value of the Mahalanobis distance is the same whatever the normalization mode used, and it is also the same without normalization.

The Mahalanobis distance [5,6] determines the similarity between two multi-dimensional random variables while considering the existent correlation between them ($\Sigma^{-1}$ is required to obtain it). The Mahalanobis distance between two random variables $x$ and $y$ with the same probability distribution and with variance-covariance matrix $\Sigma$ is formally defined as:

$d_M(x, y) = \left( (x - y)^T \, \Sigma^{-1} \, (x - y) \right)^{1/2}$    (14)

where

$\Sigma = \dfrac{1}{m} X_c^T X_c$    (15)

and $X$ is the data matrix with the m objects in rows and the n attributes in columns, $X_c = X - \bar{X}$ is the centered matrix, and $\bar{X}$ the arithmetic mean. This value coincides with the Euclidean distance if the covariance matrix is the identity matrix, i.e. if all bivariate correlations between variables are zero.

When there is some dependence among the attributes, even if it is very small (Gleason-Staelin's $\Phi$ < 0.025), the rankings obtained with the Euclidean distance and those obtained with the Mahalanobis distance can be significantly different [1]. This is also true for the other distances of the Minkowsky family, and it is especially notable with the Manhattan (p=1) and the Tchebycheff (p=∞) distances [1]. In order to deal with this contradiction, in the next section we advance a new synthesis procedure (Step 5 of the TOPSIS algorithm), based on the Analytic Hierarchy Process (AHP), that allows the consideration of the relative importance of the distances from the ideal and anti-ideal alternatives and provides results that are closer for the distances employed than those obtained with the traditional TOPSIS approach, regardless of the normalization model used.

4. A new synthesis procedure for TOPSIS

4.1. New proposal (TOPSIS-AHP)

The synthesis procedure followed in TOPSIS combines the distances from the ideal and the anti-ideal solutions using the ratio (12) ($R_i = d_i^+/(d_i^+ + d_i^-)$). When the distance employed to measure proximity is the Euclidean one (Step 4), data are previously normalised using the Euclidean mode (11). If the distance employed is the Mahalanobis distance (14), then it is not necessary to normalize the original data (the results obtained when normalizing with any norm and when not normalizing are the same). As already mentioned, the results obtained for Ri, and therefore for the associated rankings, using both distances (Euclidean and Mahalanobis) can be clearly different if there is any dependence, even if it is very small. The new, AHP-based, procedure deals with this drawback, as well as the problem of selecting the method for normalizing the data; it further allows the assignation of a different relative importance to both distances. The rest of this section presents the new, AHP-based, procedure.

The Relative Importance Index (Wi) for alternative Ai, i = 1, …, m, is given by:

$W_i = w^+ \, w(d_i^+) + w^- \, w(d_i^-)$    (16)

where $w(d_i^+)$ is the priority of $d_i^+$ derived from the pairwise comparison matrix of the distances ($d_i^+$) from alternative Ai to the ideal alternative A+, and $w^+$ is the relative importance of the priorities of the distances to the ideal alternative. Analogously, $w(d_i^-)$ is the priority of $d_i^-$ derived from the pairwise comparison matrix of the distances ($d_i^-$) from alternative Ai to the anti-ideal alternative A-, and $w^-$ is the relative importance of the priorities of the distances to the anti-ideal alternative.

In order to derive the priorities of the proximities to the ideal ($d_i^+$) and to the anti-ideal ($d_i^-$) solutions for a particular distance (Euclidean, Mahalanobis etc.), the following pairwise comparison matrices should be constructed:

a) In the first case ($d_i^+$), assuming that $d_{(i)}^+$ are the distances to the ideal solution ((i) ∈ {1,…,m}) in ascending order, the (r,s)-entry of the pairwise comparison matrix, from which the priorities $w(d_i^+)$ are derived, includes the judgment from Saaty's fundamental scale [18] that captures the intensity with which $d_{(r)}^+$ is preferred to $d_{(s)}^+$.

b) In the second case ($d_i^-$), assuming that $d_{(i)}^-$ are the distances to the anti-ideal solution ((i) ∈ {1,…,m}) in descending order, the (r,s)-entry of the pairwise comparison matrix from which the priorities $w(d_i^-)$ are derived includes the judgment from Saaty's fundamental scale [18] that captures the intensity with which $d_{(r)}^-$ is preferred to $d_{(s)}^-$.

By means of any of the existing prioritization procedures (the Row Geometric Mean in this case), we derive for each alternative the priorities of its distances to the ideal and anti-ideal solutions ($w(d_i^+)$ and $w(d_i^-)$, respectively). Using these values and the relative importance or weight associated with the distances to the ideal and the anti-ideal solutions ($w^+$ and $w^-$), the priority for each alternative is obtained (16) and used to rank the alternatives.
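A minimal sketch of eq. (16): given two already-elicited pairwise comparison matrices for three alternatives (the judgments are hypothetical, and the matrices are assumed to be indexed directly by alternative), the RGM priorities are combined with w+ = w- = 0.5, one of the scenarios used in the case study below:

```python
import numpy as np

def rgm(A):
    """Row Geometric Mean priorities, normalized to sum to one."""
    g = np.exp(np.log(np.asarray(A, dtype=float)).mean(axis=1))
    return g / g.sum()

# Hypothetical judgment matrices for three alternatives; rows/columns are
# in alternative order, so no reordering by distance is needed here.
A_pos = [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]]   # comparisons of the di+
A_neg = [[1, 1/3, 2], [3, 1, 5], [1/2, 1/5, 1]]   # comparisons of the di-
w_pos, w_neg = 0.5, 0.5                            # scenario (0.5; 0.5)
W = w_pos * rgm(A_pos) + w_neg * rgm(A_neg)        # eq. (16)
print("Wi:", np.round(W, 3), "-> best:", f"A{W.argmax() + 1}")
```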

4.2. Case Study

This procedure has been applied to the example (Profiles of Graduate Fellowship Applicants) used in Vega et al. [1]. The data corresponding to the distances to the ideal and the anti-ideal, as well as the relative proximity (12) and the resulting rankings for the two distances (Euclidean and Mahalanobis), can be seen in Table 3.

Table 3. Data for the Vega et al. (2014) example [1]

Alt.       dE(A+)   dE(A-)   RE(Ai)   RankE   dM(A+)   dM(A-)   RM(Ai)   RankM
A1         0.0617   0.0441   0.5832   6       331.53   371.96   0.4712   3
A2         0.0493   0.0607   0.448    2       332.49   371.00   0.4729   4
A3         0.0424   0.0497   0.460    4       330.58   372.88   0.4699   1
A4         0.0489   0.0574   0.460    3       333.63   369.83   0.4742   6
A5         0.0655   0.0492   0.570    5       332.50   370.95   0.4726   5
A6         0.0462   0.0609   0.431    1       331.23   372.24   0.4708   2
Max        0.0655   0.0609   0.583            333.63   372.88   0.4742
Min        0.0424   0.0441   0.4318           330.58   369.83   0.4699
Range (R)  0.0230   0.0168   0.1514           3.0544   3.05     0.0043
d*/d0      1.5437   1.3809   1.3506           1.009    1.008    1.009

Source: Vega et al. (2014) [1]

It can be observed that the rankings obtained using TOPSIS-T (Euclidean distance and Euclidean normalization) and TOPSIS-M (Mahalanobis distance and non-normalized data) are clearly different. This is also true for very small dependencies [1]. In our example, the value of the Gleason-Staelin measure of redundancy (Phi) is Φ = 0.6736, which is higher than the 0.5 threshold necessary for the existence of dependence.

In order to deal with this conflict, and after ranking the distances to the ideal and the anti-ideal from the minimum to the maximum, we asked the decision maker to evaluate the relative importance of the distances (Euclidean and Mahalanobis) to the ideal and the anti-ideal solutions. The pairwise comparison matrices provided by the decision maker are given in Tables 4a and 4b for the Euclidean distance and in Tables 5a and 5b for the Mahalanobis distance.

Table 4a. Pairwise comparison matrix for the Euclidean distances to the ideal

Order  dE(A+)  Alter.   A3      A6      A4      A2      A1      A5
1      0.042   A3       1       3       4       4       5       6
2      0.046   A6       0.333   1       2       2       5       5
3      0.049   A4       0.250   0.500   1       2       4       5
4      0.049   A2       0.250   0.500   0.500   1       4       5
5      0.062   A1       0.200   0.200   0.250   0.250   1       2
6      0.066   A5       0.167   0.200   0.200   0.200   0.500   1

Source: Authors

Table 4b. Pairwise comparison matrix for the Euclidean distances to the anti-ideal

Order  dE(A-)  Alter.   A6      A2      A4      A3      A5      A1
1      0.061   A6       1       1       2       4       4       5
2      0.061   A2       1.000   1       2       4       4       5
3      0.057   A4       0.500   0.500   1       4       4       5
4      0.050   A3       0.250   0.250   0.250   1       1       3
5      0.049   A5       0.250   0.250   0.250   1.000   1       3
6      0.044   A1       0.200   0.200   0.200   0.333   0.333   1

Source: Authors

Table 5a. Pairwise comparison matrix for the Mahalanobis distances to the ideal

Order  dM(A+)   Alter.  A3      A6      A1      A2      A5      A4
1      330.58   A3      1       1       2       3       3       5
2      331.23   A6      0.500   1       1       2       2       4
3      331.53   A1      0.333   1.000   1       2       2       3
4      332.49   A2      0.250   0.333   0.333   1       1       2
5      332.51   A5      0.250   0.250   0.333   1.000   1       2
6      333.63   A4      0.167   0.200   0.200   0.250   0.333   1

Source: Authors
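As a quick check, the Euclidean proximity indices RE(Ai) of Table 3 can be recomputed from its two distance columns via eq. (12); the values below match Table 3 up to rounding:

```python
import numpy as np

# Euclidean distances to the ideal and anti-ideal, transcribed from Table 3
d_pos = np.array([0.0617, 0.0493, 0.0424, 0.0489, 0.0655, 0.0462])
d_neg = np.array([0.0441, 0.0607, 0.0497, 0.0574, 0.0492, 0.0609])
R = d_pos / (d_pos + d_neg)            # eq. (12): lower Ri is better
print(np.round(R, 3))                  # [0.583 0.448 0.46  0.46  0.571 0.431]
print("ranking:", R.argsort() + 1)     # A6 first, as in Table 3
```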

Table 5b. Pairwise comparison matrix for the Mahalanobis distances to the anti-ideal

Order  dM(A-)    Alter.  A3      A6      A1      A2      A5      A4
1      372.884   A3      1       1       2       3       3       5
2      372.243   A6      1.000   1       1       2       2       4
3      371.966   A1      0.500   1.000   1       2       2       3
4      371.003   A2      0.333   0.500   0.500   1       1       2
5      370.955   A5      0.333   0.500   0.500   1.000   1       2
6      369.834   A4      0.200   0.250   0.333   0.500   0.500   1

Source: Authors

Using the Row Geometric Mean method as the prioritization procedure, the local priorities for the two distances (Euclidean and Mahalanobis) were obtained in five different scenarios, which depend on the weights (w+, w-) assigned to the priorities of the distances to the ideal and the anti-ideal (see Tables 6 and 7).

Table 6. Priorities and positions for the Euclidean distance in five different scenarios (w+, w-)

Alt.  Wi+    Wi-    | (1;0)        | (0.75;0.25)  | (0.5;0.5)    | (0.25;0.75)  | (0;1)
                    | Prior.  Pos. | Prior.  Pos. | Prior.  Pos. | Prior.  Pos. | Prior.  Pos.
A1    0.050  0.040  | 0.050   5    | 0.048   5    | 0.045   6    | 0.042   6    | 0.040   6
A2    0.126  0.298  | 0.126   4    | 0.169   4    | 0.212   3    | 0.255   2    | 0.298   1
A3    0.409  0.077  | 0.409   1    | 0.326   1    | 0.243   2    | 0.160   4    | 0.077   4
A4    0.159  0.211  | 0.159   3    | 0.172   3    | 0.185   4    | 0.198   3    | 0.211   3
A5    0.036  0.077  | 0.036   6    | 0.046   6    | 0.056   5    | 0.067   5    | 0.077   4
A6    0.219  0.298  | 0.219   2    | 0.238   2    | 0.258   1    | 0.278   1    | 0.298   1

Source: Authors

Table 7. Priorities and positions for the Mahalanobis distance in five different scenarios (w+, w-)

Alt.  Wi+    Wi-    | (1;0)        | (0.75;0.25)  | (0.5;0.5)    | (0.25;0.75)  | (0;1)
                    | Prior.  Pos. | Prior.  Pos. | Prior.  Pos. | Prior.  Pos. | Prior.  Pos.
A1    0.200  0.194  | 0.200   3    | 0.199   3    | 0.197   3    | 0.196   3    | 0.194   3
A2    0.098  0.107  | 0.098   4    | 0.100   4    | 0.103   4    | 0.105   4    | 0.107   4
A3    0.337  0.305  | 0.337   1    | 0.329   1    | 0.321   1    | 0.313   1    | 0.305   1
A4    0.046  0.058  | 0.046   6    | 0.049   6    | 0.052   6    | 0.055   6    | 0.058   6
A5    0.094  0.107  | 0.094   5    | 0.097   5    | 0.100   5    | 0.104   5    | 0.107   5
A6    0.225  0.229  | 0.225   2    | 0.226   2    | 0.227   2    | 0.228   2    | 0.229   2

Source: Authors

As can be noted in Tables 6 and 7, the Relative Importance Index (Wi) and the rankings obtained for the two distances (Euclidean and Mahalanobis) with the new synthesis procedure (TOPSIS-AHP-E and TOPSIS-AHP-M) differ in the five considered scenarios, which depend on the weights given to the distances from the ideal and the anti-ideal. But both the cardinal (r) and ordinal (Spearman's ρ) correlations between them (TOPSIS-AHP-E and TOPSIS-AHP-M) are greater than those obtained for TOPSIS-T and TOPSIS-M (see Table 8), except for the linear correlation in the (w+, w-) = (0.25; 0.75) situation. It can also be verified that the cardinal and ordinal correlations between the values obtained with TOPSIS-T and with TOPSIS-AHP-E are greater than 90% in situation (w+, w-) = (0.25; 0.75). On the other hand, the greatest values for the correlations between TOPSIS-M and TOPSIS-AHP-M are reached for the situation (w+, w-) = (0; 1), with values close to 100%.

Table 8. Cardinal (r) and ordinal (ρ) correlations

AHP (E vs M)        Cardinal (r)   Ordinal (ρ)
(1;0)               0.725          0.533
(0.75;0.25)         0.598          0.533
(0.5;0.5)           0.339          0.467
(0.25;0.75)         -0.010         -0.067
(0;1)               -0.278         -0.267
TOPSIS T vs M       0.047          -0.067

Source: Authors

With respect to the judgments of Saaty's fundamental scale assigned by the decision maker to each comparison of distances, it should be mentioned that these judgments capture the holistic vision of the reality and are given in accordance with the decision maker's experience and culture. It is not easy to assign these judgments in a systematic way because of the diversity of the distances for both metrics. For the values of our example, we can suggest the following practical procedure, which depends on the values of the range of the distances (R = d* - d0) and the ratio d*/d0, where d* = Maxi di and d0 = Mini di, with i = 1, …, m. Let dr and ds be the two compared distances (dr ≥ ds); assuming that the criterion for the comparisons is "the higher the better", the judgments (ars) associated with the comparison between dr and ds for the Euclidean distance (d*/d0 = 1.544 to the ideal and 1.381 to the anti-ideal) and the Mahalanobis distance are given in the scales included in Table 9.

Table 9. A suggestion for the judgments assigned to the ratio of distances

ars    dr/ds between (Euclidean)    dr/ds between (Mahalanobis)
1      1.000 - 1.025                1.000 - 1.002
2      1.025 - 1.075                1.002 - 1.004
3      1.075 - 1.150                1.004 - 1.006
4      1.150 - 1.300                1.006 - 1.008
5      1.300 - 1.500                1.008 - 1.010
6      1.500 - 1.750
7      1.750 - 2.000
8      2.000 - 4.000
9      > 4.000

Source: Authors
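The suggested rule can be sketched as follows: ratios dr/ds are mapped to judgments with the Euclidean cut-points of Table 9 and the resulting matrices are prioritized with the RGM. Applied to the Euclidean distances of Table 3, this mechanical assignment approximates, but need not reproduce, the decision maker's holistic judgments in Tables 4a and 4b:

```python
import numpy as np

def rgm(A):
    """Row Geometric Mean priorities, normalized to sum to one."""
    g = np.exp(np.log(A).mean(axis=1))
    return g / g.sum()

def pcm(d, smaller_is_better, cuts):
    """(r,s)-entry: Saaty judgment for the preferred distance, obtained
    from the ratio d_r/d_s through the Table 9 cut-points; the symmetric
    entry is the reciprocal."""
    d = np.asarray(d, dtype=float)
    m = len(d)
    A = np.ones((m, m))
    for r in range(m):
        for s in range(m):
            if (d[r] < d[s]) if smaller_is_better else (d[r] > d[s]):
                ratio = max(d[r], d[s]) / min(d[r], d[s])
                A[r, s] = 1 + np.searchsorted(cuts, ratio)
                A[s, r] = 1.0 / A[r, s]
    return A

d_pos = [0.0617, 0.0493, 0.0424, 0.0489, 0.0655, 0.0462]  # Table 3, dE(A+)
d_neg = [0.0441, 0.0607, 0.0497, 0.0574, 0.0492, 0.0609]  # Table 3, dE(A-)
cuts = [1.025, 1.075, 1.150, 1.300, 1.500, 1.750, 2.000, 4.000]  # Table 9
W = 0.5 * rgm(pcm(d_pos, True, cuts)) + 0.5 * rgm(pcm(d_neg, False, cuts))
print("Wi:", np.round(W, 3), "-> ranking:", (-W).argsort() + 1)
```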

Obviously, as has already been mentioned, this suggestion depends on the considered distances (d* - d0 and d*/d0) and a more detailed study would be necessary in order to establish a systematic rule.

5. Conclusions

One of the limitations of TOPSIS in its initial proposal (Euclidean distance and normalization), known as traditional TOPSIS (TOPSIS-T), is the problem of dependence among the attributes. In order to solve this problem and capture the dependence among the attributes, [1] proposed the use of the Mahalanobis distance (with no need to normalize the data) instead of the Euclidean one. This extension of TOPSIS-T, known as TOPSIS-M, provides rankings for the alternatives being compared which can be significantly different, even for small degrees of dependence.

To deal with this problem, this paper proposes a new synthesis procedure for the distances of the alternatives from the ideal and the anti-ideal ones. The new Relative Importance Index integrates the relative importance of the distances to these two points and provides results for the Euclidean and the Mahalanobis distances that are closer than those obtained with TOPSIS-T and TOPSIS-M. The new proposal aims to be a stepping-stone in the process of obtaining a synthesis procedure for the distances to the ideal and the anti-ideal that allows us to reduce the gap between the results obtained with dependent and independent attributes. The results, which appear to be justified by the relative importance captured by the AHP, should be tested with some other examples.



Acknowledgments


This work was partially financed by the project "Industrial Optimization Processes Thematic Network", supported by the Mexican National Council for Science and Technology (CONACYT) under agreement number 242104, and by the project "Social Cognocracy Network" (Ref. ECO2011-24181), supported by the Spanish Ministry of Science and Innovation. The authors would also like to acknowledge the work of the professional English translator David Jones in preparing the final text.




References [1] [2]

[3] [4] [5]

[6] [7]

[8]

Vega, A., Aguarón, J., García-Alcaraz, J. and Moreno-Jiménez, J.M., Notes on dependent attributes in TOPSIS. Procedia Computer Science, 31, pp. 308-317. 2014. DOI: 10.1016/j.procs.2014.05.273 Behzadian, M., Otaghsara, S.K., Yazdani, M. and Ignatius, J.A Stateof the-art survey of TOPSIS applications. Expert Systems with Applications, 39, pp. 13051-13069. 2012. DOI: 10.1016/j.eswa.2012.05.056 Hwang, C.L. and Yoon, K.P., Multiple attribute decision making: methods and applications survey. New York. NY: Springer-Verlag. 1981. DOI: 10.1007/978-3-642-48318-9 Kiang, M.Y., A Comparative assessment of classification methods. Decision Support Systems, 35, pp. 441-454, 2003. DOI: 10.1016/S0167-9236(02)00110-0 De Maesschalck, R., Jouan-Rimbaud, D. and Massart, D.L., The Mahalanobis distance. Chemo AC. Pharmaceutical Institute. Department of Pharmacology and Biomedical Analysis, 50, pp. 1-18. 2000. Mahalanobis, P.C., On the generalised distance in statistics. Proceedings National Institute of Science, 2 (1), pp. 49–55. 1936. Kovács, P., Petres, T. and Tóth, L., A. new measure of multicollinearity in linear regression models. International Statistical Review, 73 (3), pp. 405-412. 2005. DOI: 10.1111/j.17515823.2005.tb00156.x Moreno-Jiménez, J.M., El proceso analítico jerárquico. fundamentos. metodología y aplicaciones. RECT@ Revista Electrónica de Comunicaciones y Trabajos de ASEPUMA - Serie Monografías, 1, 35. 2002.




Garza-Ríos, R. and González-Sánchez, C., Apply multicriteria methods for critical alternative selection. DYNA, 81 (188), pp. 125130. 2014. DOI: 10.15446/dyna.v81n188.41220 Zeleny, M., Multiple criteria decision making. New York. NY. USA: Penguin Books. 1982. Belton, V. and Steward, J., Multiple criteria decision analysis - an integrated approach. Bolton: KluwerAcademic Pres. 2002. DOI: 10.1007/978-1-4615-1495-4 Yu, P.L. and Leitmann, G., Compromise solutions domination structures and Salukvadze's solution. Journal of Optimization Theory and Applications, 13, pp. 362-378. 1974. DOI: 10.1007/BF00934871 Zeleny, M., Compromise programming. In Multicriteria Decision Making. ed. by J.L. Cochrane and M. Zeleny, South Carolina. USA: University of Columbia Press. 1973. Charnes, A., Cooper, W.W. and Ferguson, R., Optimal estimation of excutive compensation by linear programming. Management Science, 1, pp. 138-151. 1955. DOI: 10.1287/mnsc.1.2.138 Opricovic, S. and Tzeng, G., Multicriteria planning of postearthquake sustainable reconstruction. Computer-Aided Civil and Infrastructure Engineering, 17, pp. 211-220. 2002. DOI: 10.1111/1467-8667.00269 Chen, S.J. and Hwang, C.L., Fuzzy multiple attribute decision making: Methods and applications. Berlin: Springer-Verlag. 1992. DOI: 10.1007/978-3-642-46768-4 Yoon, K.P. and Hwang, C.L., Multiple attribute decision making. Thousand Oaks. CA: Sage Publication. 1995. Saaty. T.L., The analytic hierarchy process. McGraw-Hill. New York. 1980. Saaty. T.L., Fundamentals of decision making and priority theory with the analytic hierarchy process. RWS Publications. Pittsburgh. PA. 1994. Aguarón, J. and Moreno-Jiménez, J.M., The geometric consistency index: Approximated thresholds. European Journal of Operational Research, 147 (1), pp. 137-145. 2003. DOI: 10.1016/S03772217(02)00255-2 Bilbao-Terol, A., Arenas-Parra, M., Cañal-Fernández, V. and Antomil-Ibias, J., Using topsis for assessing the sustainability of government bond funds. Omega, 49, pp. 1-17. 2014. DOI: 10.1016/j.omega.2014.04.005 Lima, F.R., Osiro, L. and Ribeiro, L.C., A comparison between Fuzzy AHP and Fuzzy TOPSIS methods to supplier selection. Applied Soft Computing, 21, pp. 194-209. 2014. DOI: 10.1016/j.asoc.2014.03.014 Wang, J.W., Cheng, C.H. and Huang, K.C., Fuzzy hierarchical TOPSIS for supplier selection. Applied Soft Computing, 9, pp. 377386. 2009. DOI: 10.1016/j.asoc.2008.04.014 Lipovetsky, S. and Michael Conklin, W., Multiobjective regression modifications for collinearity. Computers & Operations Research, 28, pp. 1333-1345. 2001. DOI: 10.1016/S0305-0548(00)00043-5 Pasini, A., A Bound for the collinearity graph of certain locally polar geometries. Journal of Combinatorial Theory. Series A, 58, pp. 127130. 1991. DOI: 10.1016/0097-3165(91)90077-T Liu, R.X., Kuang, J., Gong, Q. and Hou, X.L., Principal component regression analysis with Spss. Computer Methods and Programs in Biomedicine, 71, pp. 141-17. 2003. DOI: 10.1016/S01692607(02)00058-5 Li, B., Morris, A.J. and Martin, E.B., Generalized partial least squares regression based on the penalized minimum norm projection. Chemometrics and Intelligent Laboratory Systems, 72, pp. 21-26. 2004. DOI: 10.1016/j.chemolab.2004.01.026 Jackson, J., A User's guide to principal components. New York: John Wiley &Sons. 1991. DOI: 10.1002/0471725331

J. Aguarón-Joven is an Associate Professor of Statistics and Operations Research at the Business and Economics Faculty of the University of Zaragoza, Spain. He holds a PhD in Mathematics from the University of Zaragoza, Spain, and belongs to the Multicriteria Decision Making Group at this same university. His main research interests are Multicriteria Decision Making, Decision Support Systems and Information Systems. He has published several papers in journals such as the European Journal of Operational Research, Group Decision and Negotiation, and Annals of



Operations Research, as well as many book chapters and proceedings. ORCID: 0000-0003-3138-7597.


M.T. Escobar-Urmeneta is an Associate Professor of Statistics and Operations Research at the Business and Economics Faculty of the University of Zaragoza, Spain. She holds a PhD in Mathematics from the University of Zaragoza, Spain. She belongs to the Multicriteria Decision Making Group at this same university (http://gdmz.unizar.es), and her main research interests are Multicriteria Decision Making, especially the Analytic Hierarchy Process (AHP) and its application to fields such as Logistics, the Environment and E-Government. She has published several papers in journals such as the European Journal of Operational Research, Omega, Group Decision and Negotiation, Computers in Human Behavior and Annals of Operations Research, as well as many book chapters and proceedings. ORCID: 0000-0003-4419-1905.

J.L. García-Alcaraz is currently a full professor in the Department of Industrial and Manufacturing Engineering at the Autonomous University of Ciudad Juarez, Mexico. He is a founding member of the Mexican Society of Operations Research, an active member of the Mexican Academy of Industrial Engineering, and an honorary member of the Mexican Association of Logistics and Supply Chain. Dr. Garcia is recognized as a level-I researcher by the National Council of Science and Technology of Mexico (CONACYT). He received a Master's degree in Industrial Engineering Sciences from the Technological Institute of Colima, Mexico, a PhD in Industrial Engineering Sciences from the Technological Institute of Ciudad Juarez, Mexico, and completed a postdoctorate at the University of La Rioja, Spain. His main research areas relate to the modeling of production processes and multi-criteria decision making applied to supply chains. He is the author or coauthor of over 100 articles published in journals, conferences and congresses. ORCID: 0000-0002-7092-6963.


J.M. Moreno-Jiménez completed degrees in mathematics and economics, as well as a PhD in applied mathematics, at the University of Zaragoza, Spain. He is a Full Professor of Operations Research and Multicriteria Decision Making in the Faculty of Economics and Business of the University of Zaragoza, Spain. He is also the Chair of the Zaragoza Multicriteria Decision Making Group (http://gdmz.unizar.es), a research group attached to the Aragon Institute of Engineering Research (I3A), the Coordinator of the Spanish Multicriteria Decision Making Group, and the President of the International Society on Applied Economics ASEPELT. His main fields of interest are multicriteria decision making, environmental selection, strategic planning, knowledge management, evaluation of systems, logistics, and public decision making (e-government, e-participation, e-democracy and e-cognocracy). He has published more than 200 papers in scientific books (LNCS, LNAI, CCIS, Advances in Soft Computing, etc.) and in journals such as Operations Research, European Journal of Operational Research, Omega, Group Decision and Negotiation, Annals of Operations Research, Computer Standards & Interfaces, Mathematical and Computer Modelling, Computers in Human Behavior, International Journal of Production Research, Journal of Multi-Criteria Decision Analysis, Int. J. Social and Humanistic Computing, Production Planning & Control, Government Information Quarterly, EPIO (Argentina), Pesquisa Operacional (Brasil), Revista Eletrônica de Sistemas de Informação (Brasil), TOP (España), Estudios de Economía Aplicada (España), and Computación y Sistemas (México), among others. He is a member of the editorial boards of two Spanish journals (EEA and CAE) and five international ones (IJAHP, IJKSR, RIJSISEM, JMOMR, ITOR). ORCID: 0000-0002-5037-6976.


A. Vega-Bonilla is a Master of Science in Industrial Engineering from the Autonomous University of Ciudad Juárez, Mexico, and a Bachelor of Science in Mechatronics Engineering from the Monterrey Institute of Technology and Higher Education, Chihuahua campus, Mexico. He also held a research fellowship from the National Council for Science and Technology (CONACyT) during his Master's studies. Vega-Bonilla has work experience in the manufacturing, logistics and transportation fields. He served as a volunteer in the New Mexico International Business Accelerator (a US government association), organizing the main conference "NAFTA Institute / Supplier meet the Buyer" in 2011. He was also General Coordinator of the 3rd International Engineering Congress, with the theme "Green Engineering and Sustainable Development", and he was a member of the Mechatronics Engineering Student Society SAIMT 09-10. His research has ranged over multicriteria decision-making theory and industrial environment philosophy.


Capacitated vehicle routing problem for PSS uses based on ubiquitous computing: An emerging markets approach

Alberto Ochoa-Ortíz a, Francisco Ornelas-Zapata b, Lourdes Margain-Fuentes b, Miguel Gastón Cedillo-Campos c, Jöns Sánchez-Aguilar d, Rubén Jaramillo-Vacio e & Isabel Ávila b

a Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, México. alberto.ochoa@uacj.mx
b Universidad Politécnica de Aguascalientes, Aguascalientes, México. francisco.ornelas@upa.edu.mx, lourde.margain@upa.edu.mx, mc140002@alumnos.upa.edu.mx
c Instituto Mexicano del Transporte, Sanfandila, Querétaro, México. gaston.cedillo@imt.com
d Instituto Tecnológico de Querétaro, Querétaro, México. jonssanchez@gmail.com
e CIATEC, Guanajuato, México. ruben.jaramillo@cfe.gob.mx

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract
The vehicle routing problem under capacity constraints, based on ubiquitous computing and viewed from the perspective of deploying PSS (Product-Service Systems) configurations for urban goods transport, is addressed. The approach takes into account the specificities of city logistics in an emerging markets context, which in this case involved: i) low logistical capabilities of decision makers; ii) limited availability of data; and iii) restricted access to high-performance technology to compute optimal transportation routes. Therefore, the use of free-download software providing inexpensive solutions (in time and resources) is proposed. The paper shows the implementation of results in a software tool based on graph theory, used to analyze and solve a CVRP (Capacitated Vehicle Routing Problem). The case of a local food delivery company located in a large city in Mexico was used, based on a small fleet of vehicles with the same technical specifications and comparable load capacity.
Keywords: graph theory; vehicle routing; city logistics; ubiquitous computing; product-service system.

Problema de enrutamiento de vehículos basado en su capacidad para SPS utilizando cómputo ubicuo: Un enfoque de mercados emergentes

Resumen
El problema de ruteo de vehículos bajo las limitaciones de capacidad y basado en computación ubicua, desde una perspectiva relacionada con PSS (Producto-Servicio de Sistemas) para desarrollar configuraciones para el transporte urbano de mercancías, es abordado. Este trabajo considera las especificidades de la logística urbana bajo un contexto de mercados emergentes. En este caso, involucra: i) bajas competencias logísticas de los tomadores de decisiones; ii) la limitada disponibilidad de datos; y iii) restringido acceso a tecnología de alto desempeño para calcular rutas de transporte óptimas. Por lo tanto, se propone el uso de un software libre que proporciona soluciones de bajo costo (en tiempo y recursos). El artículo muestra la aplicación de los resultados de una herramienta de software basada en la Teoría de Grafos, utilizada para analizar y resolver un CVRP (Capacitated Vehicle Routing Problem). Se utilizó el caso de una empresa local de distribución de alimentos situada en una gran ciudad de México, sobre la base de una flota de vehículos pequeños, todos con las mismas especificaciones técnicas y una capacidad de carga comparable.
Palabras clave: teoría de grafos; rutas de vehículos; logística de la ciudad; computación ubicua; sistema producto-servicio.

1. Introduction

For companies operating in emerging markets, the continual escalation of client requirements results in a challenging continuous improvement process. First, because

they need to offer the right level of satisfaction to their clients, and second, because they are constrained to achieve the financial goals required to maintain their operations in spite of infrastructure faults and other external factors that reduce their logistics performance [1].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 20-26. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51141



However, because of the important market opportunities for PSS, companies are concerned with improving their processes and capabilities. In a dynamic competitive environment, increasing logistics capabilities has become even more urgent, especially due to the costs of allocating clients to routes and the distribution expenses included in the total cost [2]. This can be done with the help of new technologies (ICT, ITS), which seem necessary mainly for the final distribution of goods in urban areas [3]. Moreover, access to new technologies, already difficult in developed economies, is extremely expensive for new economies, so compensating those costs through better optimization possibilities can be a lever for the deployment of those technologies. This can be done via the implementation of Product-Service Systems (PSS). A PSS is defined as "a system of products, services, networks of players and supporting infrastructure that continuously strives to be competitive, satisfy customer needs and have a lower environmental impact than traditional business models" [4]. In this way, the quantity of materials consumed over the product's entire life can be reduced [5], and the service component can also be an important complement to the product itself [6]. This is then an argument for deploying driving-support devices interfaced to a central server where the entire route sets of the company are optimized, GPS-based devices that propose route optimization for drivers, or other ICT and ITS devices (product) that include route optimization solutions (service) for drivers or companies. Those services will then be based on vehicle routing.

Vehicle routing is not a new problem; its study arose in the mid-20th century with the model of the traveling salesman problem (TSP). In fact, a lot of research has been entirely devoted to these problems [7][8][9][10]. In the last few decades the development of software tools for solving vehicle routing problems has increased, essentially based on conceptual models inspired by biological systems, artificial intelligence (data mining, bio-inspired algorithms and augmented reality), and mathematical theories, among others [11]. However, this specialized knowledge is out of reach for decision makers in emerging markets where the logistics business is continuously growing. Moreover, software is in general developed from a "service" perspective, without taking into account the "product" that will be associated with this service. Thus, based on the regional case of a third-party logistics company located in Ciudad Juarez (State of Chihuahua, Mexico), the objective of this paper is to present the results of an implementation of a software tool to analyze and solve a CVRP (Capacitated Vehicle Routing Problem). At the same time, based on ubiquitous computing, a user-friendly platform running on an intelligent device was developed. Coupling the device to the vehicle fleet optimization service provided by the CVRP algorithm with ubiquitous computing is then a possible PSS configuration that is tested here. The paper thus adapts advanced optimization techniques to a real case with the aim of testing a possible PSS configuration.

This paper is organized as follows. In Section 2, a conceptual revision of graph theory and the CVRP, as well as a brief description of the free-download software, is given. In Section 3, the problems to be solved as well as detailed explanations related to the solution procedure are presented. In Section 4, the computational results are presented and discussed. Finally, in Section 5, as a conclusion, the importance of technology-based product-services better adapted to the challenges that enterprises face when operating in emerging markets, as well as future research, are presented.

2. Background

2.1. Graph theory

Graph theory is a powerful tool for solving vehicle routing problems. Like distribution routes, graphs are discrete structures composed of vertices connected by arcs. Thus, a directed graph is denoted by G = (V, A), where V is a non-empty set of elements called vertices and A is a set of arcs. Each arc α ∈ A has two associated vertices i, j ∈ V, i ≠ j, where i is the initial point of the arc and j is the terminal point. The arc α is also denoted by (i, j), thus referring to its source vertex and its destination vertex [12].

2.2. Vehicle routing problems with capacity constraints (CVRP)

The CVRP is a fundamental combinatorial problem with applications in logistics optimization. In the last two decades, because of innovative algorithms and the increased capabilities of computer equipment, major advances in solution techniques have been achieved [2]. From a general point of view, in the CVRP a finite set of cities and the costs of travel between them are given. A specific node is identified as the vehicle depot and the rest as clients. Each client corresponds to a location where an amount of a single product is delivered. The amounts required by customers are predetermined and cannot be divided; in other words, each must be delivered by one vehicle in one visit. In the simplest version it is assumed that vehicles are homogeneous and therefore have the same maximum capacity [12].

Based on graph theory, the CVRP is formulated on a complete graph G = (V, E), where V = {0, 1, ..., n} is the set of vertices and E the set of edges between pairs of vertices. In this paper, the vertex corresponding to the vehicle depot is denoted by 0, and the vertices {1, ..., n} are the customers. For an edge e = [i, j], c_e is the cost of going from node i to node j. Furthermore, a fleet of K vehicles is assumed, each of capacity Q. Finally, the demand of customer i, that is, the quantity of product or service to be delivered to that customer, is denoted by d_i. A binary variable x_e indicates whether edge e is on the route of a vehicle or not [12][13]. The mathematical formulation is expressed as follows:

\[
\min \sum_{e \in E} c_e x_e \tag{1}
\]
subject to
\[
\sum_{e \in \delta(i)} x_e = 2, \quad \forall i \in V \setminus \{0\} \tag{2}
\]
\[
\sum_{e \in \delta(0)} x_e = 2K \tag{3}
\]
\[
\sum_{e \in \delta(S)} x_e \geq 2 \left\lceil \frac{\sum_{i \in S} d_i}{Q} \right\rceil, \quad \forall S \subseteq V \setminus \{0\},\ S \neq \emptyset \tag{4}
\]
\[
x_e \in \{0, 1\}, \quad \forall e \in E \tag{5}
\]
where \(\delta(S)\) denotes the set of edges with exactly one endpoint in S.

Table 1. Load demands.
ZONE        LOAD DEMANDS
Minarete    307 Ton.
Anapra      027 Ton.
Paraíso     057 Ton.
La Cuesta   187 Ton.
Source: Dataset from CIS Research Center, UACJ.

Thus, equation (2) states that each client is visited exactly once by a vehicle. Equation (3) states that the degree of the depot vertex is 2K, i.e., that the K vehicles leave and return to the depot. Furthermore, the inequalities in (4), together with the integrality conditions in (5), enforce the capacity restrictions and the bi-connectivity of the solution: when the total demand of a set of customers exceeds the maximum capacity (Q), the same vehicle cannot visit them all [14]. However, the range of models for solving vehicle routing problems is extensive. This is caused by the large diversity of variables of every specific problem and the uncertainty involved [7,15,16].
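To make constraints (2)-(5) concrete, the following minimal sketch checks whether a candidate set of routes forms a feasible CVRP solution. It is illustrative only: the demand figures are hypothetical, and the capacity cut (4) is verified in its simplest per-route form.

# Minimal CVRP feasibility check (illustrative; demands are hypothetical).
Q = 15   # vehicle capacity (tons), as for the case-study fleet
K = 2    # number of vehicles in the candidate solution

demands = {1: 5, 2: 7, 3: 4, 4: 9}   # hypothetical client demands d_i

def is_feasible(routes, demands, Q, K):
    """Check a candidate solution against constraints (2)-(4).

    `routes` is a list of client sequences, one per vehicle; the depot
    (node 0) is implicit at the start and end of every route.
    """
    if len(routes) != K:                      # (3): K vehicles leave and return
        return False
    visited = [c for r in routes for c in r]
    if sorted(visited) != sorted(demands):    # (2): each client visited once
        return False
    # (4), per-route form: the load of a route may not exceed Q.
    return all(sum(demands[c] for c in r) <= Q for r in routes)

print(is_feasible([[1, 2], [3, 4]], demands, Q, K))   # True
print(is_feasible([[1, 2, 4], [3]], demands, Q, K))   # False: 5+7+9 > 15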

2.3. Software tool

Due to the complexity of solving the CVRP, our research approach took into account the specific characteristics of city logistics operations in an emerging markets context, which in this case involved: i) low logistical capabilities of decision makers; ii) limited availability of data; and iii) restricted access to high-performance technology to compute optimal transportation routes. Thus, since software is a powerful tool to improve logistics competitiveness when the logistics capabilities of decision makers are low, the opportunity to use user-friendly, low-cost software was identified.

"Graphs" is a free software tool for building, editing and analyzing graphs. It is also useful for teaching, learning and practicing graph theory, as well as related disciplines such as operations research, network design, industrial engineering and logistics systems, among others. It incorporates algorithms and functions that allow real problems to be analyzed [16]. The software is built as modules, with an interface for building and editing graphs (*.dll). The user can freely draw a graph and analyze it without worrying about the solving process used later. The software displays a signal if a particular analysis is not feasible. However, because of the freedom the software gives the user in designing graphs, any scenario can be considered (including infeasible solutions), which clearly implies great computational complexity for the software.

"Graphs" allows building both directed and undirected graphs. The user decides the layout of the graph, although the program can support him with utilities that draw the graph automatically (tree format, radial, organic force-directed flow, random, among others). The user can also import or export the coordinates of the nodes, with the possibility of adding a map as part of the graph's background layer [19][20][21]. It can also calculate the distance between nodes and enter this value automatically (or as a proportional cost) on the edges of the graph. Thus, based on the information associated with every node, it identifies the shortest path and then defines the next node to be reached.
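The shortest-path step just described can be illustrated with a standard Dijkstra implementation. This is a generic sketch, not the actual routine inside "Graphs"; the toy graph and its edge costs are hypothetical.

import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a weighted graph.

    `graph` maps each node to a dict {neighbor: edge_cost}, matching the
    cost c_e attached to every edge of the CVRP graph.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry, skip it
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy graph with edge costs in km (hypothetical values).
graph = {"A": {"B": 2.0, "C": 5.0}, "B": {"C": 1.5}, "C": {}}
print(dijkstra(graph, "A"))   # {'A': 0.0, 'B': 2.0, 'C': 3.5}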

3. Real world applications

For third-party logistics companies operating in emerging markets, especially small and medium enterprises (SMEs), vehicle routing problems are frequently solved without a strong scientific basis. In fact, most SMEs design their logistics solutions based on so-called "expert judgment". In that context, the most usual problems in decision making are to reduce transportation costs and, at the same time, to improve customer service by finding the best routes that minimize the total distance or transit time. However, in SMEs running operations in emerging markets, modern software tools are frequently seen as expensive and complex tools for designing logistics solutions. Consequently, free-download software such as "Graphs" represents a useful first step to improve solving capabilities in logistics.

3.1. Case description

The information analyzed here was provided by a third-party logistics company with operations in Ciudad Juarez (State of Chihuahua, Mexico). Due to the complexity of analyzing all its logistics operations, a more specific case concerning the distribution of perishable food was created. A fleet of 17 vehicles with the same technical specifications and similar load capacity (15 tons) composed this case. Juarez City and its Metropolitan Area were defined as the geographical zone for the analysis. Destinations are concentrated in four zones of the city: i) Minarete; ii) Anapra; iii) Paraíso; and iv) La Cuesta. The load demands are presented in Table 1.

To design a CVRP solution using the software "Graphs", the three steps suggested by Rodríguez [20] were used: i) build the matrix of distances between cities; ii) state all the nodes with their respective supply and demand of load and constraints; and iii) establish the operational variables of the vehicles.

3.2. Matrix of distances

First, taking into account the distances between origin and destination nodes, the matrix of distances was built (see Table 2) and all the data were programmed into the software.




Table 2. Matrix of distances in kilometers.
Neighborhood/Distance  Minarete  Anapra  Paraíso  La Cuesta  Manantial  Rododendro  Riveras
Minarete                  -       9.15    7.88     10.57       9.89      13.97       5.34
Anapra                   9.15      -      5.76      6.74       3.44      18.97      10.45
Paraíso                  7.88     5.76     -        5.12       7.24       8.85       4.48
La Cuesta               10.57     6.74    5.12       -         2.56       9.88      12.46
Manantial                9.89     3.44    7.24      2.56        -         4.14       8.36
Rododendro              13.97    18.97    8.85      9.88       4.14        -         6.67
Riveras                  5.34    10.45    4.48     12.46       8.36       6.67        -
Source: Dataset from CIS Research Center, UACJ.

Table 3. Variables.
Variables                     Units
Demand and supply of cargo    Tons / Month
Fixed and variable costs      $ 7.00 per month
Capacity of vehicles          Tons / Month
Maximum distance              Miles / Month
Fixed Cost                    $ 2'500,000.00 per 17 months for each vehicle
Variable Cost                 $ 2,750.00 / km for each vehicle
Capacity                      385 Tons / month
Source: Dataset from CIS Research Center, UACJ.
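As an illustration of how a CVRP construction heuristic could combine the data of Tables 1-3, the sketch below applies the classic Clarke-Wright savings method to the four demand zones, with the monthly capacity of Table 3. The choice of Riveras as the depot is an assumption made only for the example; the paper's actual results were obtained with the "Graphs" tool, not with this code.

from itertools import combinations

# Distances (km) between zones, taken from Table 2; Riveras is treated
# here as a hypothetical depot, since the dataset does not state one.
dist = {
    ("Riveras", "Minarete"): 5.34, ("Riveras", "Anapra"): 10.45,
    ("Riveras", "Paraíso"): 4.48, ("Riveras", "La Cuesta"): 12.46,
    ("Minarete", "Anapra"): 9.15, ("Minarete", "Paraíso"): 7.88,
    ("Minarete", "La Cuesta"): 10.57, ("Anapra", "Paraíso"): 5.76,
    ("Anapra", "La Cuesta"): 6.74, ("Paraíso", "La Cuesta"): 5.12,
}
d = lambda a, b: dist.get((a, b)) or dist[(b, a)]

demand = {"Minarete": 307, "Anapra": 27, "Paraíso": 57, "La Cuesta": 187}
Q = 385  # monthly vehicle capacity from Table 3

# Start with one dedicated route per client, then merge route endpoints in
# decreasing order of the saving s(i, j) = d(0, i) + d(0, j) - d(i, j).
routes = {c: [c] for c in demand}          # maps each client to its route
def route_load(r):
    return sum(demand[c] for c in r)

savings = sorted(combinations(demand, 2),
                 key=lambda p: d("Riveras", p[0]) + d("Riveras", p[1]) - d(*p),
                 reverse=True)

for i, j in savings:
    ri, rj = routes[i], routes[j]
    if ri is rj or route_load(ri) + route_load(rj) > Q:
        continue
    # Merge only when i and j sit at joinable endpoints of their routes.
    if ri[-1] == i and rj[0] == j:
        merged = ri + rj
    elif rj[-1] == j and ri[0] == i:
        merged = rj + ri
    else:
        continue
    for c in merged:
        routes[c] = merged

for r in {id(r): r for r in routes.values()}.values():
    print(["Riveras"] + r + ["Riveras"], route_load(r), "tons")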

Figure 1. Graphs related to the problem. Source: [12].
Figure 2. Implementation results. Source: [8].

3.3. Stating the nodes

Second, based on the matrix of distances and the demand loads, a set of origin-destination nodes for each neighborhood was stated. As a result, the critical graphs related to the problem were identified (see Fig. 1).

3.4. Operational constraints

Third, the vehicle constraints were given by the company's business context. In fact, legal, financial and operational aspects of the operating context were analyzed. These constraints were calculated taking into account several specific operational policies of the company: i) a vehicle runs operations 10 hours a day; ii) it runs 6 days a week; iii) it operates 4 weeks per month; iv) the average operational speed is 60 km/h; and v) the maximum distance for all the programmed trips is 1,485 km per month. Other variables are shown in Table 3. Figs. 2 and 3 present how the different nodes were generated, as well as possible graphs related to the implementation of an optimal route for each journey. The model presented here considered the weather implications, which are regarded as difficult 4 months a year. More specifically, the model adjusts the optimal values of each node. As a result, it was the sixth iteration that showed the optimal route, taking into account: i) time restrictions; ii) weather conditions; and iii) waiting time to load and unload.

Figure 3. Driver using our intelligent CVRP tool in Ciudad Juarez. Source: intelligent software development at CIS Research Center, UACJ.

3.5. Ubiquitous computing

Since a user could move from one computing environment to another, ubiquitous computing poses a number of challenges in designing software architecture. Thus, designing user interfaces for mobile devices must take a number of constraints into account; in particular, only a limited screen is available to show the information. Since there is nowadays a wide range of screen sizes and resolutions, an interface



should be designed to be usable on most of the devices available on the market. For this project, the proposed model was developed with the Android API, and the interface layout was written in XML. Data-server communication is the most important component: it allows the device to send its GPS position to the server and to receive back the processed image and location map (see Fig. 3).
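A minimal sketch of the server side of this data exchange is shown below. It assumes a simple HTTP endpoint built with Flask; the real component runs against the Android client, and the framework, route name and payload shown here are hypothetical.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store of the last known vehicle positions.
positions = {}

@app.route("/position/<vehicle_id>", methods=["POST"])
def update_position(vehicle_id):
    """Receive a GPS fix from the mobile device."""
    fix = request.get_json()          # e.g. {"lat": 31.69, "lon": -106.42}
    positions[vehicle_id] = (fix["lat"], fix["lon"])
    # In the real system the server would adjust the route here and return
    # the processed map image; this sketch returns only the next stop.
    return jsonify({"next_stop": "Anapra"})

if __name__ == "__main__":
    app.run(port=8080)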

4. Results

Because of the restrictions related to each route, our tool defines additional feasible route scenarios. Each scenario analysed considers possible changes in the route depending on the values generated and assessed by the Dijkstra algorithm. This enables us to solve the CVRP dynamically. The results obtained are presented in Table 4. Route 1 is covered by a vehicle with a capacity utilization of 100% (385 tons per month). Likewise, Route 2 is covered by a vehicle with a capacity utilization of 100% (385 tons per month), as shown in Table 5.

The results obtained using the software "Graphs" to solve the proposed CVRP problem were implemented by the third-party logistics company with an improvement of 27% in their operations. Since vehicle routing problems were frequently solved based on so-called "expert judgment", the proposed model and the informatics tool developed based on the ubiquitous computing approach significantly improved the company's logistics capabilities. They both reduced transportation costs and, at the same time, improved the capacity utilization rate of the vehicle fleet. Furthermore, the ubiquitous computing solution allowed the company to improve customer service, and can be associated with ICT products and solutions like mobile phones or vehicle devices (GPS, driving-support tools, etc.). The company now offers superior logistics services that other local logistics providers, with limited technological exposure, cannot provide. Since differentiating customer service is critical in the current competitive world, this tool could improve the company's profitability.

Table 4. CVRP solution.
Description                 Route 1                          Route 2
Travel 1                    Manantial - Minarete             Minarete - Paraíso
Travel 2                    Minarete - Anapra                Paraíso - Anapra
Travel 3                    Anapra - Rododendro - Riveras    Anapra - La Cuesta - Riveras
Total distance in km        247.7                            158.7
Total cost in USD           $29,750                          $29,750
Variable cost               $57,425                          $70,285
Capacity utilization rate   385/385 = 100%                   385/385 = 100%
Source: Dataset from CIS Research Center, UACJ.

Table 5. Performance indicators.
Variable costs due to delays (VC)   $135,700.00
Route's fixed costs (FC)            $57,425.00
Total cost in USD (FC + VC)         $193,125.00
Source: Dataset from CIS Research Center, UACJ.

5. Conclusions

In an emerging-market business context, most local companies do not have a clear logistics strategy: operational issues are driven by dynamic competition to provide the right level of satisfaction to clients, and constrained by external difficulties (infrastructure inefficiencies, legal framework, etc.) in achieving their financial goals. Nevertheless, favorable market opportunities for PSS provide an incentive to improve their processes. Thus, since most local practitioners do not have sufficient logistics capabilities, the use of free-download software to solve complex logistics problems is a first step toward increasing logistics competitiveness. In fact, the results achieved with this implementation convinced the company of the importance of software technology for improving its logistics operations. The results proved that this approach optimizes costs and, consequently, provides a competitive advantage to a company running operations under high operational costs [11].

Clearly, setting the restrictions for modeling vehicle routing problems plays an important role in obtaining an optimal and practical solution. In these cases, it is critical to balance theoretical approaches with proper planning of routes to avoid, for example, movements of empty vehicles. Technology-based product-services better adapted to the challenges that enterprises face when operating in emerging markets could be a competitive advantage. Our work offered a tool that considered solution scenarios to design routes in emerging-market cities, where logistics abilities are low. Possible product-service system applications of VRP-based solving tools that can be examined in the future include vehicles (leasing services with fleet optimization based on the CVRP), mobile devices (the Android application can be associated with mobile phone offers for distribution companies), GPS-based vehicle devices and vehicle-sharing systems, among others.

The proposed model provides useful information for decision makers to define and reduce waiting times when distributing perishable goods in metropolitan zones. The results highlight the quantitative benefits achieved when technological tools are implemented taking into account the specific needs of the operational context. In addition, the importance of tools better adapted to the logistical challenges that enterprises face when operating in emerging markets is highlighted. Finally, as future research, from a theoretical perspective, the solution proposed here will be improved by the use of estimation of distribution algorithms [21][22][23][24].

Acknowledgments

The authors thank Flora Hammer for comments and suggestions that improved this paper.

Funding

As part of the National Research Network "Sistemas de Transporte y Logística", the authors acknowledge all the support provided by the National Council of Science and



Technology of Mexico (CONACYT) through the research program "Redes Temáticas de Investigación". At the same time, we acknowledge the determination and effort of the Mexican Logistics and Supply Chain Association (AML) and the Mexican Institute of Transportation (IMT) in providing us with an internationally recognized collaboration platform, the International Congress on Logistics and Supply Chain (CiLOG).

References

[1] Cedillo-Campos, M. and Sanchez-Ramírez, C., Dynamic self-assessment of supply chains performance: An emerging market approach. Journal of Applied Research & Technology, 11 (3), pp. 338-347, 2013. DOI: 10.1016/S1665-6423(13)71544-X
[2] Velarde, M., Cedillo-Campos, M. and Litvinchev, I., Design of territories and vehicle routing: An integrated solution approach. International Congress on Logistics and Supply Chain (CiLOG 2014), Mexican Logistics & Supply Chain Assoc. (AML), pp. 160-176, 2014.
[3] Gonzalez-Feliu, J., Semet, F. and Routhier, J.L., Sustainable urban logistics: Concepts, methods and information systems. Heidelberg: Springer, 2014. DOI: 10.1007/978-3-642-31788-0
[4] Goedkoop, M.J., van Halen, C.J.G., te Riele, H.R.M. and Rommens, P.J.M., Product service systems, ecological and economic basics. Report for Dutch Ministries of Environment (VROM) and Economic Affairs (EZ), 1999.
[5] Beuren, F.H., Gomes-Ferreira, M.G. and Cauchick-Miguel, P.A., Product-service systems: A literature review on integrated products and services. Journal of Cleaner Production, 47, pp. 222-231, 2013. DOI: 10.1016/j.jclepro.2012.12.028
[6] Baines, T.S., Lightfoot, H.W., Evans, S., Neely, A., Greenough, R., Peppard, J. and Wilson, H., State-of-the-art in product-service systems. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 221 (10), pp. 1543-1552, 2007. DOI: 10.1243/09544054JEM858
[7] Toth, P. and Vigo, D., The vehicle routing problem. Philadelphia: SIAM, 2002. DOI: 10.1137/1.9780899718515
[8] Cordeau, J.F., Laporte, G., Savelsbergh, M.W. and Vigo, D., Vehicle routing. In: Transportation, Handbooks in Operations Research and Management Science, 14, Elsevier, pp. 367-428, 2006.
[9] Golden, B., Raghavan, S. and Wasil, E., The vehicle routing problem: Latest advances and new challenges. Berlin: Springer, 2008. DOI: 10.1007/978-0-387-77778-8
[10] Cattaruzza, D., Absi, N., Feillet, D. and Gonzalez-Feliu, J., Vehicle routing problems for city logistics. EURO Journal on Transportation and Logistics, 2015. DOI: 10.1007/s13676-014-0074-0
[11] Partyka, J. and Hall, R., On the road to connectivity. OR/MS Today, 37 (1), pp. 42-49, 2010.
[12] Diestel, R., Graph theory. Second edition. New York: Springer, 2000. DOI: 10.1007/978-3-642-14279-6
[13] Chandran, B. and Raghavan, S., Modeling and solving the capacitated vehicle routing problem on trees. In: Golden, B.L., Raghavan, S. and Wasil, E.A. (Eds.), The vehicle routing problem: Latest advances and new challenges. New York: Springer, pp. 239-274, 2008. DOI: 10.1007/978-0-387-77778-8_11
[14] Hernández, H., Procedimientos exactos y heurísticos para resolver problemas de rutas con recogida y entrega de mercancía. Ph.D. Thesis, Universidad de La Laguna, Facultad de Matemáticas, Departamento de Estadística, Investigación Operativa y Computación, España, 2004.
[15] Golumbic, M., Algorithmic graph theory and perfect graphs. Second edition. London: Elsevier, 2004.
[16] Baldacci, R., Toth, P. and Vigo, D., Recent advances in vehicle routing exact algorithms. 4OR, 5 (4), pp. 269-298, 2007. DOI: 10.1007/s10288-007-0063-3
[17] Baldacci, R., Mingozzi, A. and Roberti, R., Recent exact algorithms for solving the vehicle routing problem under capacity and time window constraints. European Journal of Operational Research, 218 (1), pp. 1-6, 2012. DOI: 10.1016/j.ejor.2011.07.037
[18] Pintea, C., Petrica, C. and Camelia, C., The generalized traveling salesman problem solved with ant algorithms. Journal of Universal Computer Science, 13 (7), pp. 1065-1075, 2007.
[19] Rodríguez, A., Grafos. [Online]. [date of reference: December 2nd, 2014]. Available at: http://personales.upv.es/arodrigu/grafos/
[20] Rodríguez, A., Grafos: Herramienta informática para el aprendizaje y resolución de problemas reales de teoría de grafos. X Congreso de Ingeniería de Organización, Valencia: Universidad Politécnica de Valencia, pp. 1-8, 2006.
[21] Paredes, C., Análisis del software Grafos. Universidad Politécnica de Cataluña, 2008.
[22] Wang, C. and Lu, J., A hybrid genetic algorithm that optimizes capacities related with vehicle routing problems. Expert Systems with Applications, 36, pp. 2921-2936, 2009. DOI: 10.1016/j.eswa.2008.01.072
[23] Pérez, R., Sanchez, J., Hernández, A. and Ochoa, C., Un algoritmo de estimación de distribuciones para resolver un problema real de programación de tareas en configuración jobshop. Un enfoque alternativo para la programación de tareas. Komputer Sapiens, Revista de Divulgación de la Sociedad Mexicana de Inteligencia Artificial, 1, pp. 23-36, 2014.
[24] Pérez-Rodríguez, R., Sanchez, J., Hernández-Aguirre, A. and Ochoa, C., Simulation optimization for a flexible jobshop scheduling problem using an estimation of distribution algorithm. The International Journal of Advanced Manufacturing Technology, 73, pp. 3-21, 2014. DOI: 10.1007/s00170-014-5759-x

C.A. Ochoa-Ortiz received his BSc in 1994, his Master's in Engineering in 2000 and his PhD in 2004, and completed postdoctoral research in 2006 and industrial postdoctoral research in 2009. He joined the Juarez City University in 2008. He has published 7 books and 27 book chapters related to AI. He has supervised 37 PhD theses, 47 MSc theses and 49 undergraduate theses, and has participated in the organization of several international conferences. His research interests include evolutionary computation, natural language processing, anthropometric characterization and social data mining. During his second postdoctoral research he completed an internship at ISTC-CNR in Rome, Italy. He collaborates with researchers from Iran, Kyrgyzstan and Cyprus. Dr. Ochoa has been a National Researcher (National Council of Science and Technology of Mexico, CONACYT) for eight years.

F.J. Ornelas-Zapata received a BSc in Informatics (2001), a Master's in Computer Science (2007) and a PhD in Computer Science (2010). He is currently a research professor at the Polytechnic University of Aguascalientes and the Autonomous University of Aguascalientes. His research area is artificial intelligence, focusing on vehicle routing problems and parallel computing. He has written 4 book chapters related to AI and participated in COMCEV'2007, COMCEV'2008, HIS'2009, MICAI'2010 and CILOG'2014.

M.L.Y. Margain-Fuentes, inspired by software engineering and quality systems standards, began her Master's in Informatics and Information Technology at the Aguascalientes Autonomous University and completed it at the University of Windsor in Ontario, Canada. Through her doctorate in Computer Science she became interested in research on learning objects and e-learning. She stands out for her professional experience in industry and government. She has taught at the bachelor's and master's levels, given several international lectures, and participated in multiple software projects as an advisor. For six years she was Director of the Strategic Information Systems and Software Engineering program, and for the last two years she has been Director of the Research and Postgraduate Department at the Aguascalientes Polytechnic University.

M.G. Cedillo-Campos is a Professor in Logistics Systems Dynamics and Founding Chairman of the Mexican Logistics and Supply Chain Association (AML). Dr. Cedillo is a National Researcher (National Council of Science and Technology of Mexico, CONACYT), Innovation Award 2012 (UANL-FIME) and National Logistics Award 2012. In 2004, he received with honors a PhD in Logistics Systems Dynamics from the University of Paris, France. Recently, he collaborated as a Visiting Researcher at the Zaragoza Logistics Center as well as a keynote speaker at Georgia Tech Panama. He works in


logistics systems analysis and modeling, risk analysis, and supply chain management, which are the subjects he teaches and researches at different prestigious universities in Mexico and abroad. Dr. Cedillo is the Scientific Chairman of the International Congress on Logistics and Supply Chain (CiLOG) organized by the Mexican Logistics and Supply Chain Association (AML), and coordinator of the National Logistics Research Network in Mexico supported by the program "Redes Temáticas de Investigación" of CONACYT.

J. Sanchez-Aguilar holds a BSc in Electrical Engineering, a Master's in Industrial Engineering and a PhD in Industrial Engineering and Manufacturing. He currently works at the Technological Institute of Queretaro. He has published over 38 articles in refereed international journals, has been awarded five patent titles by IMPI, has supervised more than 10 graduate theses, and is a founding member of the Mexican Association of Logistics. He currently belongs to the National System of Researchers.


R. Jaramillo-Vacio received a BSc (2002), a Master's in Electrical Engineering (2005) and a Master's in Management Engineering and Quality (2010). He joined CIATEC (a CONACYT research center) in 2010 to carry out his PhD research in Industrial Engineering and Manufacturing. Since 2005 he has been a test engineer at CFE-LAPEM, working on dielectric testing and partial discharge diagnosis of power cables. He is an author and coauthor of numerous published works, including book chapters and over 27 articles related to his research. His main research interests include partial discharge diagnosis, artificial intelligence, data mining, learning theory and neural networks.


L.I. Ávila holds a BSc in Industrial Engineering and is a Master of Science in Engineering student. Her research is on vehicle routing problems using bio-inspired algorithms. She participated in CILOG'2014.




Structural analysis for the identification of key variables in the Ruta del Oro, Nariño, Colombia

Aida Mercedes Delgado-Martínez a & Freddy Pantoja-Timarán b

a Profesional Especializada, CORPONARIÑO, Colombia. adelgado@corponarino.gov.co, aidamercedesdelgado@yahoo.com
b Universidad de Nariño, Pasto, Colombia. fpantoja@udenar.edu.co, fpantoj@gmail.com

Received: September 12th, 2014. Received in revised form: January 20th, 2015. Accepted: February 09th, 2015.

Abstract
Following Michel Godet's proposal, a Structural Analysis exercise was applied in order to achieve a representation of the system as close to reality as possible and to reduce its complexity to its essential macrovariables, within the research on the sustainable tourism route called "Ruta del Oro" in Nariño, Colombia, which seeks to combine the geological and mining heritage with the other resources existing in the area. It was found that the macrovariables that determine the system are geology, geomorphology and climate; those on which it is possible to intervene in order to balance it are water, territorial structure, vegetation and, to a lesser extent, soil; and the dependent macrovariables are landscape, cultural resources and fauna. These results were contrasted with the corresponding cartography, secondary information and fieldwork. It is concluded that the macrovariables of the research, and the role identified for each one, are adequate.
Keywords: research variables; influence and dependence map; Ruta del Oro, Nariño, Colombia; structural analysis; thematic route.

Análisis estructural para la identificación de variables claves en la Ruta del Oro, Nariño Colombia

Resumen
Conforme a lo propuesto por Michel Godet, se realizó un ejercicio de análisis estructural para lograr una representación lo más cercana a la realidad y reducir su complejidad a las macrovariables esenciales en la investigación de turismo sostenible denominada "Ruta del Oro" en Nariño Colombia, la cual busca conjugar el patrimonio geológico minero con los demás recursos existentes en el medio. Se encontró que las macrovariables que determinan el sistema son la geología, la geomorfología y el clima; aquellas sobre las cuales es posible intervenir para equilibrarlo son el agua, la estructura territorial, la vegetación y en menor medida, el suelo; las macrovariables dependientes son el paisaje, los recursos culturales y la fauna. Estos resultados fueron contrastados con la cartografía e información disponible y trabajo en campo. Se concluye que las macrovariables de investigación y el rol identificado para cada una es el adecuado.
Palabras clave: variables de investigación; mapa de influencia dependencia; "Ruta del Oro" Nariño Colombia; análisis estructural; ruta temática.

1. Introduction

The Structural Analysis technique is based on seeing reality as a system, a structure and a complex phenomenon, and on establishing the causal relations between the different variables [1]. Structural Analysis, as expressed by Godet (1999), offers the possibility of describing a system by means of a double-entry matrix that interconnects all

of its components; it studies the relations between them and identifies the essential variables, factors or components. It makes it possible to bring out the structure of the system, that is, the network of relations between its elements [2]. According to Michel Godet [2], among the objectives of Structural Analysis are helping a group to ask itself the right questions and build its collective reflection, achieving as exhaustive a representation of the studied system as possible, and reducing the complexity of the system to its essential variables.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 27-33. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.45532



In practice, two ways of using Structural Analysis have been developed: in decision making (research and identification of the variables and actors on which it is necessary to act in order to reach the stated objectives), and in the prospective process (research into the key variables on which reflection about the future should primarily be based) [2]. This method is generally used for the second option, in the formulation of prospective plans. In this research it is applied to determine the key macrovariables of the investigation, finding that it is also powerful in this respect. Specifically, the aim is to determine the relevant macrovariables affecting the territory for the design and subsequent launch of a sustainable tourism proposal, called the Ruta del Oro, which seeks to put in value the resources offered by the environment (geological, mining, biodiversity, historical and cultural), with emphasis on the small-scale gold mining carried out in the department of Nariño, Colombia, in the Cordillera de los Andes, an ecosystem considered of international importance but, at the same time, highly threatened. This Structural Analysis also made it possible to establish the macrovariables that have been decisive for the existence today of this wealth of biodiversity, which was validated against the available information and cartography.

The Cordillera de los Andes is, by altitude, the second most important mountain range in the world after the Himalayas and, by length, the first; it borders the Pacific Ocean for approximately 7,500 km and spans 7 countries. Until a few years ago it was stated that it was formed by the subduction of the Nazca plate beneath the South American plate about 40 million years ago; however, according to recent studies [3,4], the Andes rose progressively over tens of millions of years and then, suddenly, the massif underwent an abrupt geological jump between 6 and 10 million years ago, that is, much faster than suggested by the theory of tectonic crust thickening. The uplift of the Andes was the most important event in the biogeographic evolution of the flora of South America; it not only favored the diversification of mountain species but also affected the animals and plants of the Amazonian plains [5].

2. Materials and methods

To put the structural analysis into practice, Michel Godet's method [6], known as the Cross-Impact Matrix Multiplication Applied to Classification (MICMAC), was used, taking as a starting point the macrovariables registered in the Guide for the Preparation of Studies of the Physical Environment (Spain) [7]. The MICMAC method is a matrix-multiplication program applied to the structural analysis matrix that makes it possible to study the diffusion of impacts and, consequently, to rank the variables in order of motricity, considering the influence exerted by each variable, and in order of dependence, considering the influence received by each of them [6].

To determine which of these macrovariables explain the system that governs the Ruta del Oro to be proposed, and given that the analysis is fundamentally technical, the structural analysis was carried out through a workshop with experts who know the territory (following Godet [6], the term "workshop" designates organized sessions of collective reflection). The role of the experts consists in verifying whether one factor is affecting, i.e., modifying, the others [1]. On the other hand, it is important to bear in mind that, regarding small-scale mining, which is the case in the study region, there is no single definition in Colombian legislation; the existing ones refer to:
- The tools used: mining carried out with simple hand tools and implements powered by human force.
- Legality: most operations have no mining title.
- Legal nature: natural persons.
- Income: subsistence level [8].

2.1. Macrovariables

Once the macrovariables listed in the aforementioned Guide for the Preparation of Studies of the Physical Environment had been evaluated, it was concluded that these are the fundamental ones for the design of the Route, with some small modifications (Table 1). The modifications introduced are explained below:
- Regarding the first macrovariable, Geology: since it includes mining resources, and given the research topic, in which the geological and mining heritage is the main axis of the Route, it is necessary to emphasize this in the wording.
- Regarding the second macrovariable, Geomorphology: it is renamed landforms and relief processes to make its content more explicit.
- Regarding the fifth and sixth macrovariables, related to water: it was decided not to separate surface waters from groundwater, since no information is available for the study area, groundwater is not exploited in the research area, and there are no attractions based on it. According to the Institute of Hydrology, Meteorology and Environmental Studies [9], the exploitation of groundwater in most of the Colombian territory is still very incipient. Nariño belongs to two hydrogeological provinces: Tumaco and Cauca-Patía. Most of the territory of Nariño is located in the area considered an impermeable barrier (referring to the Cordillera de los Andes) that affects the continuity of the regional units.
- Regarding fauna: it is specified as wild fauna, thus excluding domesticated animals.



Table 1. Groups of base macrovariables for the structural analysis.
Group of macrovariables            Initial macrovariables        Final macrovariables               MICMAC label
The Earth                          Geology                       Geology and mining resources       GEO
                                   Geomorphology                 Landforms and relief processes     GEOM
                                   Soils                         Soils                              SUE
The atmosphere                     Climate                       Climate                            CLI
Water                              Surface; groundwater          Water                              AGU
Biotic environment                 Vegetation                    Vegetation                         FLO
                                   Fauna                         Wild fauna                         FAU
The landscape                      Natural and human landscape   Landscape                          PAI
The landscape of human influence   Cultural resources            Cultural resources                 REC C
                                   Territorial structure         Territorial structure              EST T
Source: own elaboration based on [7,10]

Table 1 presents the groups of macrovariables and the macrovariables registered in the cited Guide for the Preparation of Studies of the Physical Environment, the final macrovariables used for this research in accordance with the modifications just mentioned, and the short name under which each is registered in the MICMAC software. On the other hand, according to the procedure established for conducting a Structural Analysis, it is important to reach consensus on the meaning of each of the variables to be considered; for this purpose, the definitions in the aforementioned Guide were likewise taken as a starting point and complemented or adapted to the conditions of the territory and of the research, with the contributions of the workshop participants and with secondary sources. A synthesis of the definitions given to each macrovariable is presented below.

Geology and mining resources (GEO). Geology: studies the exterior and interior form of the terrestrial globe; the nature of the materials that compose it and their formation; the changes or alterations they have undergone since their origin; and their arrangement in their current state [11]. Mining resources: deposits and mineral occurrences of economic value that could be subject to exploitation, and all those mining elements that present social, scientific, landscape, heritage and/or educational value. Definition adapted from Mata-P and Mata, L. [12].

Landforms and relief processes (GEOM). Landforms can only be understood globally and as integrated into the totality of nature. This macrovariable accounts for the genesis of the relief and typifies its landforms: it explains forces and processes and classifies results [13].

Soils (SUE). The most superficial layer of the Earth's crust, resulting from the decomposition of rocks through abrupt temperature changes and the action of water, wind and living beings. It is the product of the interaction between the lithosphere, the atmosphere, the hydrosphere and the biosphere.

Climate (CLI). The set of atmospheric conditions that typically occur in a region over the years.

Water (AGU). Considered in terms of both availability and quality; its surface forms, properties, distribution and circulation.

Vegetation (FLO). The mosaic of plants covering the soil in a given territory.

Wild fauna (FAU). Animal species in the wild that live in a given region, forming stable populations integrated into stable communities.

Landscape (PAI). Includes two aspects: an indicator and synthesis of the relations between the inert and living elements of the environment, and the spatial and visual expression of the environment.

Cultural resources (REC C). Those that have cultural significance and physical representation.

Territorial structure (EST T). Includes elements whose characteristics depend more on human activity than on nature.

Table 1 and the definitions of the variables are important in the structural analysis process, as they are its starting point. Clarity about their content guides the influence or dependence rating that will later be given to each of them. On the other hand, and closely related to Geology, there is the concept of geological heritage which, although not considered a key variable in this study, must be understood; experts have defined it as the set of natural geological resources with scientific, cultural and/or educational, landscape or recreational value, whether geological formations and structures, landforms, minerals, rocks, meteorites, fossils, soils or other geological manifestations, that make it possible to know, study and interpret the origin and evolution of the Earth, the processes that have shaped it, the climates and landscapes of the past and present, and the origin and evolution of life [14,15].

2.2. Motricity and dependence

According to Mojica [1], Structural Analysis handles two concepts: motricity and dependence. Motricity is the influence that a variable exerts on the others, which can be strong, moderate, weak, null or potential, as verified by the experts. Dependence is the incidence of the other variables on one in particular; it is its subordination to the impact of the rest. The result of this rating is recorded in a double-entry matrix; the experts determine the motricity of each variable by verifying the causality that each one exerts on the others. Dependence is indicated automatically when motricity is estimated. As a result, each variable has two ratings: one of motricity (y) and another of dependence (x) [1]. Considering that among the limitations of Structural Analysis is the subjective nature both of the selected macrovariables and of the relations between them [6], the results were validated through the analysis of the available cartography of geology, geomorphology, land cover, anthropic intervention, topography and the drainage network, using the layer-overlay technique.



3. Results

The outputs obtained in the structural analysis when applying the MICMAC method are presented below. The matrices are the results reported by the method; the interpretation given in the text is that made by the group of experts. The MICMAC method operates on the matrix of direct influences (MDI) rated by the group of experts. Following what is specified in the MICMAC software, the influence that one macrovariable can exert was rated as follows:
3: strong influence
2: moderate influence
1: weak influence
P: potential

The influence that each macrovariable exerts on the others is recorded in the rows; the influence received is recorded in the columns. The results obtained in this research are reported in Table 2, in which, in order to facilitate comprehension, two outputs of the program, called Matrix of Direct Influences (MDI) and Row and Column Sum, have been integrated. This MDI matrix, together with the Matrix of Potential Direct Influences (MPDI), are the fundamental matrices from which the other results reported by MICMAC are obtained. In the case study, no rating of "Potential" was given; therefore, the MPDI does not differ from the direct influence matrix MDI.

3.1. Direct influence/dependence map

The row totals and column totals obtained from the MDI are plotted on the Cartesian plane as shown in Fig. 1; the result is called the direct influence/dependence map.

Figure 1. Direct influence/dependence map. Source: The authors, using the MICMAC software.




According to this map, the driving variables are located in the first quadrant. These variables exert a strong influence and are little influenced by the other variables; they are considered the variables that determine the system, but actions cannot be taken on them to improve it, since they are barely influenceable. For this research project they are the macrovariables climate and geology. With regard to geology, the recommended actions are aimed at conserving the sites considered to be of geological importance or of mining heritage. With respect to climate, actions can be directed towards adaptation to climate change and, where possible, towards mitigation of its impacts; however, such actions fall directly on the other variables, such as the biotic or productive factors, and only indirectly, and in the long term, on the climate itself.
The second quadrant contains the variables that exert a strong influence on the other variables but are in turn also highly influenceable; they are called conflict variables. These are water, flora, territorial structure, soil and geomorphology. Their location in this quadrant makes much sense, since significant actions can be taken on them to improve the system, especially actions oriented towards the conservation, decontamination and enhancement of the attractions each of them possesses. Nevertheless, the group of experts considers that both soil and geomorphology may exert greater influence through the other variables located in this same quadrant, by way of actions taken by the social actors interested in this project.
The third quadrant contains the explained variables, that is, those that are more dependent than driving. They are the result of the behavior of the whole system, in other words, of the variables located in quadrants I and II. The macrovariables located in this quadrant are fauna, cultural resources and landscape. If what exists today in relation to these resources is to be conserved or improved, different interventions must be made through the other macrovariables of the system.

Table 2. Matrix of Direct Influences (MDI) with row and column sums.

Variables     |  1  2  3  4  5  6  7  8  9 10 | Total rows
1 GEO         |  0  3  2  0  1  2  0  2  1  2 | 13
2 GEOM        |  0  0  3  2  2  1  1  3  2  3 | 17
3 SUE         |  0  0  0  0  1  3  2  1  1  3 | 11
4 CLIM        |  0  3  2  0  3  3  3  3  1  2 | 20
5 AGU         |  1  3  2  1  0  3  3  2  1  3 | 19
6 FLO         |  0  1  2  1  3  0  3  3  2  1 | 16
7 FAU         |  0  0  1  0  1  3  0  1  2  0 |  8
8 PAI         |  0  0  0  0  0  0  0  0  1  1 |  2
9 REC C       |  0  0  1  0  0  1  1  0  0  1 |  4
10 EST T      |  0  1  2  0  2  2  2  3  2  0 | 14
Total columns |  1 11 15  4 13 18 15 18 13 16 |

Source: The authors, using the MICMAC software.

The landscape draws particular attention: it is a completely dependent macrovariable which, for this research, is the main one, owing to the close relation between landscape and tourism. The fourth quadrant holds the variables with low motricity and low dependence; in the present research no macrovariable falls in it. This indicates that the candidate macrovariables were well selected, since any that belonged to this quadrant would have been discarded. Taking these direct-influence results into account, the group of experts concludes that the map is a very good representation of reality.
Examining the matrix presented above makes it possible to identify the variables that exert the greatest direct action, but this is not sufficient: besides the direct relations, there are indirect relations between variables through chains of influence and feedback loops [1]. According to Godet [6], if variable i directly influences variable k, and k directly influences variable j, then any change affecting variable i can have repercussions on variable j; there is thus an indirect relation between i and j. The structural analysis matrix contains numerous indirect relations of the type i → j that cannot be taken into account in the direct classification. Squaring the matrix reveals the relations of order 2 between i and j; computing A³, A⁴, …, Aⁿ, where A is the matrix, likewise yields the number of influence paths of order 3, 4, … linking the variables. Each iteration produces a new hierarchy of the variables, now ranked by the number of indirect influences they exert on the others. It can be verified that from a certain power onwards, generally the fourth or fifth, the hierarchy remains stable; this hierarchy constitutes the MICMAC classification.
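The matrix-power scheme just described is easy to reproduce outside MICMAC. The following minimal sketch (Python with numpy; the helper names are ours, and it illustrates the idea rather than reproducing MICMAC's actual code) raises the MDI of Table 2 to successive powers and stops once the motricity ranking stabilizes:

import numpy as np

# Direct-influence matrix (MDI) from Table 2; rows/columns follow the order
# GEO, GEOM, SUE, CLIM, AGU, FLO, FAU, PAI, REC C, EST T.
MDI = np.array([
    [0, 3, 2, 0, 1, 2, 0, 2, 1, 2],
    [0, 0, 3, 2, 2, 1, 1, 3, 2, 3],
    [0, 0, 0, 0, 1, 3, 2, 1, 1, 3],
    [0, 3, 2, 0, 3, 3, 3, 3, 1, 2],
    [1, 3, 2, 1, 0, 3, 3, 2, 1, 3],
    [0, 1, 2, 1, 3, 0, 3, 3, 2, 1],
    [0, 0, 1, 0, 1, 3, 0, 1, 2, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 1, 0, 0, 1],
    [0, 1, 2, 0, 2, 2, 2, 3, 2, 0],
])
names = ["GEO", "GEOM", "SUE", "CLIM", "AGU", "FLO", "FAU", "PAI", "REC C", "EST T"]

def motricity_ranking(matrix):
    """Rank variables by row sums (influence exerted), most influential first."""
    return tuple(np.argsort(-matrix.sum(axis=1)))

power, previous = MDI.copy(), None
for k in range(2, 10):
    power = power @ MDI          # entries count influence paths of length k
    ranking = motricity_ranking(power)
    if ranking == previous:      # hierarchy stable: this is the MICMAC ranking
        print(f"Ranking stable at power {k}:", [names[i] for i in ranking])
        break
    previous = ranking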

3.2. Direct and indirect influence of the macrovariables

MICMAC makes it possible to combine the rating of the direct relations with that of the indirect impacts, and thereby to determine the true rating of each variable's influence. It is based on finding the true value of the motricity of the variables, which is obtained by raising the matrix to a power; it is therefore essential to find the power from which the matrix no longer yields new information [16]. In the case of the La Ruta del Oro system, the ranking stabilizes from the third iteration onwards, according to the MICMAC report shown in Table 3.

Table 3. MDI stability.
Iteration | Influence | Dependence
1         | 107 %     | 91 %
2         | 100 %     | 103 %
3         | 100 %     | 100 %
Source: The authors, using the MICMAC software.




Table 4 presents the MICMAC report that takes into account, in addition to the direct influences, the indirect influences resulting from applying three iterations among the macrovariables. Similarly to the direct influence map, the indirect influence map is obtained by plotting the results of Table 4 on the Cartesian plane.
When the indirect influences are taken into account, the most significant variations are shown by geomorphology and soil (Fig. 2). The former moves to quadrant I, its motricity increasing and its dependence decreasing. This result improves the representation of the system because, indeed, geomorphology can be modified very little by short-term actions taken to conserve or improve what currently exists.
As for soil, although its variation with respect to the direct influence map is small, it shows the tendency to be more of an explained macrovariable in the model than a macrovariable relevant to the design of the Ruta del Oro; nevertheless, it lies almost on the boundary between the two quadrants, so it must be taken into account, especially in its relation with flora, fauna, landscape and territorial structure.

Table 4. Matrix of Potential Indirect Influences (MPII).

Variables |    1     2     3     4     5     6     7     8     9    10
1 GEO     | 2120 14200 25176  7401 22930 32191 29917 30956 26345 26822
2 GEOM    | 2471 17014 30843  9140 28305 38637 36204 38021 32274 32074
3 SUE     | 1732 11258 20047  6005 18155 25607 23820 24589 21023 21534
4 CLIM    | 3157 21393 38245 11286 34949 48484 45258 47068 40014 40409
5 AGU     | 3041 20163 35904 10685 32681 45631 42718 44121 37568 38341
6 FLO     | 2154 15201 27675  8168 25532 34436 32408 34165 28921 28546
7 FAU     | 1185  7894 14019  4116 12735 17979 16581 17230 14690 14885
8 PAI     |  228  1494  2691   812  2442  3417  3178  3299  2823  2878
9 REC C   |  636  4227  7543  2248  6877  9557  8971  9279  7893  8027
10 EST T  | 1806 12514 22494  6599 20651 28341 26497 27733 23513 23473

Source: The authors, using the MICMAC software.

Figure 2. Displacement map of direct/indirect influences. Source: The authors, using the MICMAC software.

4. Final results of the Structural Analysis

As a result of this Structural Analysis exercise, aimed at determining the macrovariables that affect the territory for the design and subsequent implementation of a sustainable tourism proposal named Ruta del Oro, in which the MICMAC method was applied with the participation of a group of experts, the macrovariables that determine the system are climate, geology and geomorphology. These macrovariables, called driving macrovariables, are highly influential on all the others, while the others exert little influence on them. This implies that very few actions can be taken on them in order to improve the system.



For their part, the macrovariables water, vegetation and territorial structure, called conflict macrovariables, are characterized by being highly influential on the others but also highly influenceable; on them, actions can be taken to maintain or improve the system. The soil variable plays a similar role, but to a lesser extent, since it lies on the border between this group of conflict macrovariables and the dependent variables.
The macrovariables fauna, cultural resources and landscape are dependent macrovariables; they are the result of the actions of the driving and the conflict macrovariables, so intervening on the latter has repercussions on the dependent ones. The landscape variable represents the variable explained in this research, which is consistent with the notion that the landscape is the synthesis of what happens in the territory.
The most important result of this Structural Analysis exercise is its contribution to determining the role that each macrovariable plays in the territory in which the research for the design of the Ruta del Oro is carried out, as represented in the indirect influence map. One of the limitations of the method is the subjectivity that the experts may introduce; for this reason, to reduce the possible error this could mean for the research, the results for the driving and conflict macrovariables, with the exception of climate, for which the necessary information was not available, were contrasted with the available cartography and with field work, confirming the role each of them plays according to the Structural Analysis.
The application of Structural Analysis is better known as part of the formulation of prospective plans; however, the results achieved in the case of the Ruta del Oro show that it is equally powerful for determining the variables of any research project, and it therefore helps to facilitate this work, especially in the formulation phase. It should also be borne in mind that it is easy to use and to interpret without losing the required scientific and technical rigor, since supporting software is available.

References

[1] Mojica, F., La construcción del futuro: Concepto y modelo de prospectiva estratégica, territorial y tecnológica. Bogotá, Colombia: Convenio Andrés Bello - Universidad Externado de Colombia, 2005. 322 P. ISBN 958-616-929-4.
[2] Godet, M., De la anticipación a la acción: Manual de prospectiva y estrategia. Santa Fe de Bogotá, Colombia: Alfaomega S.A., 1999. 359 P. ISBN 958-682-004-1.
[3] Garzione, C. et al., Rise of the Andes. Science [online], 320, pp. 1304-1307, 2008. [Date of reference: May 19th of 2013]. Available at: http://www.ees.rochester.edu/SIREAL/CAUGHTwebsite/Publications/2008/Garzione_et_al_2008.pdf. DOI: 10.1126/science.1148615
[4] Leier, A. et al., Stable isotope evidence for multiple pulses of rapid surface uplift in the Central Andes, Bolivia. Earth and Planetary Science Letters [online], 371-372, pp. 49-58, 2013. [Date of reference: January 15th of 2014]. Available at: http://authors.library.caltech.edu/39711/. DOI: 10.1016/j.epsl.2013.04.025
[5] Antonelli, A. et al., Tracing the impact of the Andean uplift on neotropical plant evolution. PNAS [online], 106 (24), pp. 9749-9754, 2009. [Date of reference: August 19th of 2013]. Available at: http://www.pnas.org/content/106/24/9749.full. DOI: 10.1073/pnas.0811421106
[6] Godet, M., Prospectiva estratégica: Problemas y métodos, con la participación de Prospektiker, en colaboración con Philippe Durance. Cuadernos de LIPSOR [online], Cuaderno Nº 20, 2nd Ed., 2007. [Date of reference: August 12th of 2013]. Available at: http://www.prospektiker.es/prospectiva/caja-herramientas-2007.pdf
[7] Aguiló, M. et al., Guía para la elaboración de estudios del medio físico: Contenido y metodología, 3rd Ed. Madrid, España: Ministerio de Medio Ambiente, Centro de Publicaciones, Secretaría General Técnica, 1998. 508 P. ISBN 84-8320-054-6.
[8] Güiza, L., La pequeña minería en Colombia: Una actividad no tan pequeña. DYNA [online], 80 (181), pp. 109-117, 2013. ISSN 0012-7353. Available at: http://www.scielo.org.co/pdf/dyna/v80n181/v80n181a12.pdf
[9] Ministerio de Medio Ambiente, IDEAM, Estudio nacional del agua 2010 [online]. Bogotá D.C., Colombia, 2010. 52 P. [Date of reference: August 12th of 2013]. ISBN 978-958-8067-32-2. Available at: https://www.siac.gov.co/documentos/DOC_Portal/DOC_Agua/3_Estado/20120928_Estado_agua_ENA2010PrCap1y2.pdf
[10] Delgado, A., Memorias del taller de análisis estructural, proyecto Ruta del Oro. Pasto, Colombia, 2013. Unpublished.
[11] Glosario Geológico. Revista de Información Geológica [online]. [Date of reference: August 20th of 2013]. Available at: http://www.icog.es/_portal/glosario/sp_resultado.asp
[12] Mata, R. and Mata-Perelló, J., Geología social: Una nueva perspectiva de la geología. In: SEDPGYM, Actas del Segundo Congreso Internacional sobre Geología y Minería en la Ordenación del Territorio y en el Desarrollo, Utrillas, España, May 5-10, 2009 [online], pp. 125-135. [Date of reference: August 25th of 2013]. Available at: http://www.sedpgym.es/descargas/libros_actas/UTRILLAS_2009/14.UTRILLAS.pdf
[13] Duque-Escobar, G., Manual de geología para ingenieros [online]. Universidad Nacional de Colombia, Sede Medellín, 2003, revised version 2013. [Date of reference: May 7th of 2014]. Available at: http://www.bdigital.unal.edu.co/1572/23/geo20.pdf
[14] Rendón, R.A. et al., Propuesta metodológica para la valoración del patrimonio geológico, como base para su gestión en el departamento de Antioquia, Colombia.
[15] Vegas, J. et al., Guía metodológica para la integración del patrimonio geológico en la evaluación del impacto ambiental. Madrid, España: Instituto Geológico y Minero de España; Ministerio de Agricultura, Alimentación y Medio Ambiente, 2012. ISBN 978-84-7840-912-9.
[16] Godet, M. and Durance, P., La prospectiva estratégica para las empresas y los territorios [online]. UNESCO, García-Cortina, K. [translation], 2011. [Date of reference: August 12th of 2013]. Available at: http://www.laprospective.fr/dyn/traductions/contents/1dunodunesco-vspan-ext-15-06-2011.pdf

A.M. Delgado-Martínez, is a PhD candidate in Natural Resources and the Environment at the Universidad Politécnica de Cataluña, Spain, and holds an MSc in Environmental and Natural Resource Economics from the Universidad de los Andes, Bogotá, Colombia. She is a specialized professional at CORPONARIÑO, Pasto, Colombia.
F. Pantoja-Timarán, holds a Dr. degree in Chemical Sciences from the Universidad Autónoma de Madrid, Spain, and an MSc in Environmental Pollution from the Universidad Politécnica de Madrid, Spain. He is an associate professor at the Universidad de Nariño, Pasto, Colombia.



Intuitionistic fuzzy MOORA for supplier selection

Luis Pérez-Domínguez a, Alejandro Alvarado-Iniesta b, Iván Rodríguez-Borbón c & Osslan Vergara-Villegas d

a Department of Industrial and Manufacturing Engineering, Autonomous University of Ciudad Juarez, Juarez, México. luis.dominguez@uacj.mx
b Department of Industrial and Manufacturing Engineering, Autonomous University of Ciudad Juarez, Juarez, México. alejandro.alvarado@uacj.mx
c Department of Industrial and Manufacturing Engineering, Autonomous University of Ciudad Juarez, Juarez, México. ivan.rodriguez@uacj.mx
d Department of Mechatronics Engineering, Autonomous University of Ciudad Juarez, Juarez, México. overgara@uacj.mx

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015

Abstract
Supplier selection is a critical activity within the administration of the supply chain. It is considered a complex problem, given that it involves several aspects: the alternatives to be evaluated, the multiple criteria involved, and a group of decision makers with differing opinions. The literature reports several methods to help in this difficult activity of selecting the best supplier; however, there are still some gaps in these methods, so it is imperative to develop further research. Thus, the purpose of this paper is to report a hybrid method between MOORA and intuitionistic fuzzy sets for the selection of suppliers, with a focus on a multi-criteria and multi-group environment. The importance of the decision makers and the criteria, as well as the alternatives, is evaluated in terms of intuitionistic fuzzy sets; MOORA is then used in order to determine the best supplier. An experimental case is developed in order to explain the proposed method in detail and to demonstrate its practicality and effectiveness.
Keywords: MCDM; MOORA; intuitionistic fuzzy set; supplier selection.


1. Introduction

Nowadays, market demands have motivated industries to provide fast and reliable service in order to meet the requests of customers who are demanding in their requirements [1,2]. In this sense, supply chain management has fostered great interest in terms of company strategies to stay in global business. The works presented by [3-5] claim

that a key issue for the administration of the supply chain is the correct selection of suppliers. [6,7] mention that the selection of suppliers is considered a multi-criteria problem in the field of decision-making. Additionally, the work of [8] argues that effective supplier selection is significant because it helps reduce operational costs. According to [9], the selection of the best supplier is a competitive advantage that can help to increase productivity

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 34-41. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51143



and efficiency for any organization. Hence, given the importance of selecting the best supplier, there are several multi-criteria methodologies that seek to provide support in the difficult task of making this decision. Such methodologies are used by companies' purchasing managers and even by top managers. Some of the most frequently reported methodologies in the literature are: artificial neural networks (1943, ANN), fuzzy sets (1965, FS), quality function deployment (1966, QFD), elimination and choice expressing reality (1968, ELECTRE), design of decision support systems (1971, DSS), simple multi-attribute rating technique (1971, SMART), genetic algorithms (1975, GA), data envelopment analysis (1978, DEA), analytic hierarchy process (1980, AHP), technique for order of preference by similarity to ideal solution (1981, TOPSIS), dimensional analysis (1993, DA), analytic network process (1996, ANP), VlseKriterijumska Optimizacija I Kompromisno Resenje (1998, VIKOR), multi-objective optimization by ratio analysis (2006, MOORA), and the preference selection index (2010, PSI), among several others [10-24]. We can therefore infer that the issue of decision-making has been investigated for many years and that, as a result, several methodologies have been developed as support tools [25]. [26,27] report a classification of these methodologies into two groups: individual and hybrid. The first group is made up of AHP, TOPSIS, DEA, DSS, VIKOR, ELECTRE, MOORA, SMART, DA, FS, ANN and GA, among many more. The second group is represented by AHP-TOPSIS, TOPSIS-VIKOR, DS-AHP, F-DEMATEL, F-VIKOR, F-ELECTRE, F-AHP, AHP-GA, AHP-ANN, F-TOPSIS, DEA-AHP, IF-TOPSIS and IF-AHP, among others.
The literature reviewed shows that AHP and TOPSIS are two of the most frequently reported methodologies, especially for the task of supplier selection [25,28,29]; furthermore, the most frequently reported hybrids combine these methodologies with fuzzy sets [3,6,30-33]. However, these methodologies by themselves, as well as their hybrids with fuzzy sets, present some deficiencies. First, the AHP method requires decision makers (DMs) to use their perception capabilities to undertake pair-wise comparisons, but AHP lacks the ability to reflect the way humans think: it uses crisp values to represent the subjective opinion of the DMs, which is hard to estimate by exact numerical values [7,31-35]. Even though the fuzzy AHP method handles the fuzziness in the experts' subjective judgment with the aid of fuzzy set theory, its mathematical foundation is rather poor, for example in terms of the conditions of reciprocity and transitivity; in particular, ã_ij ∙ ã_ji is not, in general, equal to 1, and defining the fuzzy eigenvalues tends to be complex [2,36-38]. Second, TOPSIS relies on the Euclidean distance [39,40], which assumes that the evaluation criteria are independent, and this is sometimes not the case [41]; the same issue remains in its fuzzy version. As a result, there exists the opportunity to further develop research in the field of decision making in order to counteract weaknesses that can lead to making a wrong decision. Recent papers show that multi-criteria methods are being used in combination with intuitionistic fuzzy sets (IFS).

Particularly, IFS, proposed by Atanassov (1986), are a generalization of the classical fuzzy sets reported by Zadeh (1965). A literature review shows that over the past few years the use of IFS in multi-criteria decision making (MCDM) has increased significantly [42-57]; IFS are more capable than traditional fuzzy sets of handling vague and uncertain information in practice [55-57]. Hence, the aim of this work is to present a hybrid of the MOORA (Multi-Objective Optimization on the basis of Ratio Analysis) method with IFS as an alternative methodology for MCDM.
The rest of this paper is organized as follows. Section 2 briefly presents the concepts related to IFS. Section 3 presents the description of MOORA. Section 4 describes the method proposed in this paper. In Section 5 a numerical case is presented in order to explain the proposed methodology. Finally, the conclusions and further work are presented in Section 6.

2. Intuitionistic Fuzzy Set

A fuzzy set A′ in a fixed set X is given by A′ = {〈x, μ_A′(x)〉 | x ∈ X}, where μ_A′: X → [0,1] is the membership function of the fuzzy set A′ and μ_A′(x) ∈ [0,1] is the membership of x ∈ X in A′.
According to [58], an IFS A is defined by means of two functions expressing the degree of membership and of non-membership of an element x to the set A. An IFS A in X is defined as A = {〈x, μ_A(x), ν_A(x)〉 | x ∈ X}, where μ_A: X → [0,1] and ν_A: X → [0,1], with the condition 0 ≤ μ_A(x) + ν_A(x) ≤ 1 for all x ∈ X.
The numbers μ_A(x), ν_A(x) ∈ [0,1] denote respectively the degree of membership and the degree of non-membership of the element x to the set A. The number π_A(x) = 1 − μ_A(x) − ν_A(x) is called the intuitionistic index of x in A; it is a measure of the degree of hesitancy of the element x in the set A. It should be noted that 0 ≤ π_A(x) ≤ 1 for each x ∈ X. Hence, an IFS A in X is fully defined in the form A = {〈x, μ_A(x), ν_A(x), π_A(x)〉 | x ∈ X}.
Different relations and operations have been introduced over IFSs [59]; the two used later in this paper are shown in eqs. (1) and (2):

A ∙ B = {〈x, μ_A(x) ∙ μ_B(x), ν_A(x) + ν_B(x) − ν_A(x) ∙ ν_B(x)〉 | x ∈ X}   (1)

A + B = {〈x, μ_A(x) + μ_B(x) − μ_A(x) ∙ μ_B(x), ν_A(x) ∙ ν_B(x)〉 | x ∈ X}   (2)
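The two operations above act componentwise on the (μ, ν) pairs and are easy to sketch in code. A minimal illustration (Python; the function names are ours, restating eqs. (1)-(2) under the standard Atanassov operations [59]):

def ifs_product(a, b):
    """Eq. (1): (mu_A * mu_B, nu_A + nu_B - nu_A * nu_B) on (mu, nu) pairs."""
    return (a[0] * b[0], a[1] + b[1] - a[1] * b[1])

def ifs_sum(a, b):
    """Eq. (2): (mu_A + mu_B - mu_A * mu_B, nu_A * nu_B) on (mu, nu) pairs."""
    return (a[0] + b[0] - a[0] * b[0], a[1] * b[1])

print(ifs_product((0.7, 0.2), (0.6, 0.3)))   # -> (0.42, 0.44)
print(ifs_sum((0.7, 0.2), (0.6, 0.3)))       # -> (0.88, 0.06)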

3. MOORA

Multi-Objective Optimization on the basis of Ratio Analysis (MOORA) was introduced by Brauers and Zavadskas for the first time in 2006 [60]. The basic idea of the MOORA method is to calculate the overall performance of each alternative as the difference between the sums of its normalized performances on the benefit criteria and on the cost criteria. Thus, the MOORA method can be expressed



concisely using the five following steps.
Step 1. Determine the decision-making matrix. The method begins with the identification of the available alternatives and criteria. Then the decision-making matrix (DMM) is constructed; its rows represent the alternatives A1, A2, …, Am under evaluation and its columns the n criteria under evaluation (quantitative and qualitative). In this way, according to [61], the DMM is written as in eq. (3):

X = [x_ij], i = 1, …, m; j = 1, …, n   (3)

where x_ij represents the entry of alternative i with respect to criterion j.
Step 2. Calculate the normalized decision-making matrix. The evaluation attributes may be expressed in different units or scales of measurement; thereby, normalization is carried out [60], where the Euclidean norm of criterion j is obtained according to eq. (4):

‖x_j‖ = √( Σ_{i=1..m} x_ij² )   (4)

Thus, the normalization of each entry in the DMM is undertaken according to eq. (5):

x*_ij = x_ij / √( Σ_{i=1..m} x_ij² )   (5)

The results obtained using eq. (5) are dimensionless values that lack scale, which allows the operations between criteria to be additive [61,62].
Step 3. Calculate the weighted normalized decision-making matrix. Considering, as in [63], the different importance of the criteria, the weighted normalized ratings are calculated with eq. (6):

v_ij = w_j ∙ x*_ij   (6)

Step 4. Calculate the overall ratings of cost and benefit criteria for each alternative. The overall rating of the benefit criteria is calculated as the sum of the weighted normalized ratings of the benefit criteria using eq. (7):

y_i⁺ = Σ_{j ∈ J⁺} v_ij   (7)

where J⁺ is the set of benefit criteria. Similarly, the overall rating of the cost criteria is calculated with eq. (8):

y_i⁻ = Σ_{j ∈ J⁻} v_ij   (8)

where J⁻ is the set of cost criteria.
Step 5. Compute the contribution of each alternative. The contribution of each alternative is obtained using eq. (9), proposed by [60]:

y_i = y_i⁺ − y_i⁻   (9)

where y_i represents the contribution of alternative i = 1, …, m, J⁺ are the maximum (benefit) criteria and J⁻ are the minimum (cost) criteria. Finally, the value y_i can be positive or negative depending on the totals of its maxima (benefit criteria) and minima (cost criteria) in the decision matrix. An ordinal ranking of y_i shows the final preference value.

4. Intuitionistic Fuzzy MOORA

Let A = {A1, A2, …, Ai, …, Am} be a set of alternatives and X = {x1, x2, …, xj, …, xn} be a set of criteria to be evaluated. The Intuitionistic Fuzzy MOORA (IF-MOORA) procedure is described in the following steps.
Step 1. Constitute a group of decision makers and determine the importance of each one. Let DM = {DM1, DM2, …, DMk, …, DMl} be the set of decision makers (DMs). The importance of each DM is rated through a linguistic term expressed by an intuitionistic fuzzy number. The linguistic terms and their corresponding intuitionistic fuzzy numbers (IFN) used for rating the weight of each of the DMs are shown in Table 1. Let Dk = (μk, νk, πk) be the intuitionistic fuzzy number rating the kth DM. Then the corresponding weight λk of the kth DM is obtained using the concept proposed by [52]:

λk = [μk + πk (μk / (μk + νk))] / Σ_{k=1..l} [μk + πk (μk / (μk + νk))]   (10)

where λk ≥ 0 and Σ_{k=1..l} λk = 1.
Step 2. Determine the importance of the criteria. Normally, not all criteria may be assumed to be of equal importance, and DMs might give different opinions about the same criterion. Hence, all opinions need to be considered and combined into one. The linguistic terms shown in Table 1 are used by every DM to rate the importance of the criteria.

Table 1. Linguistic terms for rating the importance of decision makers and criteria.
Linguistic Term                        | IFN (μ, ν, π)
Beginner (B) / Very Unimportant (VU)   | {0.1, 0.9, 0}
Practitioner (Pr) / Unimportant (U)    | {0.35, 0.6, 0.05}
Proficient (Pt) / Medium (M)           | {0.5, 0.45, 0.05}
Expert (E) / Important (I)             | {0.75, 0.2, 0.05}
Master (M) / Very Important (VI)       | {0.9, 0.1, 0}
Source: The authors.
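Before moving to the intuitionistic extension, the crisp MOORA steps, eqs. (3)-(9), condense into a few lines. A minimal sketch (Python with numpy; the decision matrix, weights and criterion types are invented for illustration and are not taken from the paper):

import numpy as np

def moora(X, weights, benefit_mask):
    """Rank alternatives with the MOORA ratio system, eqs. (3)-(9).

    X: m x n decision matrix, weights: n criterion weights,
    benefit_mask: True for benefit criteria, False for cost criteria."""
    X_norm = X / np.sqrt((X ** 2).sum(axis=0))    # eq. (5): Euclidean normalization
    V = X_norm * weights                          # eq. (6): weighted ratings
    y = V[:, benefit_mask].sum(axis=1) - V[:, ~benefit_mask].sum(axis=1)  # eqs. (7)-(9)
    return y, np.argsort(-y)                      # contributions and ranking

# Hypothetical data: 3 alternatives, 3 criteria (cost, service, technology).
X = np.array([[200.0, 7.0, 8.0],
              [250.0, 9.0, 6.0],
              [180.0, 6.0, 7.0]])
y, rank = moora(X, np.array([0.4, 0.3, 0.3]), np.array([False, True, True]))
print(y, rank)   # the alternative with the highest y wins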



Let w_j^(k) = (μ_j^(k), ν_j^(k), π_j^(k)) be the intuitionistic fuzzy number assigned to criterion x_j by the kth DM. Then the weights of the criteria are computed using the IFWA operator proposed by [53]:

W_j = IFWA_λ(w_j^(1), w_j^(2), …, w_j^(l)) = λ1 w_j^(1) ⊕ λ2 w_j^(2) ⊕ … ⊕ λl w_j^(l)
    = [ 1 − Π_{k=1..l} (1 − μ_j^(k))^{λk},  Π_{k=1..l} (ν_j^(k))^{λk},  Π_{k=1..l} (1 − μ_j^(k))^{λk} − Π_{k=1..l} (ν_j^(k))^{λk} ]   (11)

where W = [W1, W2, …, Wn] and W_j = (μ_j, ν_j, π_j), j = 1, 2, …, n.
Step 3. Construct the aggregated intuitionistic fuzzy decision matrix representing the rating of the alternatives based on the opinions of the decision makers. Let R^(k) = (r_ij^(k)) be the intuitionistic fuzzy decision matrix (IFDM) of each DM. The linguistic terms used to evaluate each one of the alternatives according to the criteria are shown in Table 2. All the opinions of the DMs need to be combined into an aggregated intuitionistic fuzzy decision matrix (AIFDM); the IFWA operator may again be used:

r_ij = IFWA_λ(r_ij^(1), r_ij^(2), …, r_ij^(l)) = λ1 r_ij^(1) ⊕ λ2 r_ij^(2) ⊕ … ⊕ λl r_ij^(l)   (12)

Then the AIFDM is defined as R = (r_ij), with r_ij = (μ_ij, ν_ij, π_ij), i = 1, 2, …, m; j = 1, 2, …, n.

Table 2. Linguistic terms for rating alternatives.
Linguistic Term                          | IFN (μ, ν, π)
Extremely Bad (EB) / Extremely Low (EL)  | {0.1, 0.9, 0}
Very Bad (VB) / Very Low (VL)            | {0.1, 0.75, 0.15}
Bad (B) / Low (L)                        | {0.25, 0.6, 0.15}
Medium Bad (MB) / Medium Low (ML)        | {0.4, 0.5, 0.1}
Fair (F) / Medium (M)                    | {0.5, 0.4, 0.1}
Medium Good (MG) / Medium High (MH)      | {0.6, 0.3, 0.1}
Good (G) / High (H)                      | {0.7, 0.2, 0.1}
Very Good (VG) / Very High (VH)          | {0.8, 0.1, 0.1}
Excellent (E) / Extremely High (EH)      | {1, 0, 0}
Source: The authors.

Step 4. Compute the aggregated weighted intuitionistic fuzzy decision matrix (AWIFDM). In this step the AWIFDM is computed from the AIFDM obtained in Step 3 and the vector of criteria weights obtained in Step 2. The elements of the AWIFDM are calculated using eq. (1):

r′_ij = (μ_ij ∙ μ_j,  ν_ij + ν_j − ν_ij ∙ ν_j),  with π′_ij = 1 − μ′_ij − ν′_ij   (13)

Step 5. Compute the sum of benefits and costs. In this step the benefit (BN) and cost (C) criteria are identified: benefit criteria are the ones where maximum values are desired, while for cost criteria minimum values are preferred. Eq. (14) represents the sum of the benefit criteria,

BN_i = ⊕_{j ∈ BN} r′_ij,  i = 1, 2, …, m   (14)

where BN_i is the overall benefit rating of alternative i and the benefit criteria represent maximum criteria. Then eq. (15) defines the sum of the cost criteria,

C_i = ⊕_{j ∈ C} r′_ij,  i = 1, 2, …, m   (15)

where C_i represents the sum of the cost criteria for alternative i and the cost criteria are minimum criteria. In both cases ⊕ is the intuitionistic fuzzy addition of eq. (2).
Step 6. Defuzzification of the sum of benefits and costs. In this step the maximum and minimum criteria are defuzzified using eq. (16), proposed by [54]:

crisp(α) = (1 − ν_α) / (1 + π_α)   (16)

where α = (μ_α, ν_α, π_α) is an intuitionistic fuzzy value.
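A compact illustration of eq. (16), using values that appear later in Tables 6-9 of the numerical case in Section 5 (Python; the helper name is ours):

def crisp(mu, nu, pi):
    """Eq. (16): crisp score of an intuitionistic fuzzy value."""
    return (1.0 - nu) / (1.0 + pi)

# Values taken from Tables 6 and 7 of the numerical case in Section 5:
print(round(crisp(0.609, 0.301, 0.090), 3))   # ~0.641, Table 8 (benefits of A1)
print(round(crisp(0.115, 0.770, 0.115), 3))   # ~0.206; Table 9 prints 0.207 from unrounded inputs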

Step 7. Compute the contribution of each alternative. The contribution y_i of each alternative is calculated with eq. (17):

y_i = crisp(BN_i) − crisp(C_i)   (17)

The value y_i can be positive or negative depending on the totals of the benefit and the cost criteria in the decision matrix. An ordinal ranking of the y_i shows the final contribution of each alternative: the best alternative has the highest y_i value, while the worst alternative has the lowest.
Step 8. Rank the alternatives. The alternatives are sorted in descending order of y_i.
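The aggregation machinery of Steps 1-2, eqs. (10)-(11), also reduces to a few lines. The sketch below (Python; function names are ours) reproduces the DM weights of Table 3 and the criterion weight W1 computed in Section 5:

import math

def dm_weights(ratings):
    """Eq. (10): normalized DM weights from IFN ratings (mu, nu, pi)."""
    raw = [mu + pi * mu / (mu + nu) for mu, nu, pi in ratings]
    return [r / sum(raw) for r in raw]

def ifwa(ifns, lambdas):
    """Eq. (11): IFWA aggregation of IFNs (mu, nu, pi) with weights lambdas."""
    mu = 1.0 - math.prod((1.0 - m) ** lam for (m, _, _), lam in zip(ifns, lambdas))
    nu = math.prod(n ** lam for (_, n, _), lam in zip(ifns, lambdas))
    return (mu, nu, 1.0 - mu - nu)

# Two DMs both rated Expert {0.75, 0.2, 0.05} -> equal weights of 0.5 (Table 3);
# criterion x1, rated Important / Medium, aggregates to W1 of Section 5:
lam = dm_weights([(0.75, 0.20, 0.05), (0.75, 0.20, 0.05)])
print(lam)                                                    # [0.5, 0.5]
print(ifwa([(0.75, 0.20, 0.05), (0.50, 0.45, 0.05)], lam))    # ~(0.646, 0.300, 0.054)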

5. Numerical Case

An assembly company dedicated to the production of submersible pumps has to assemble several components in its production line. In the process there is a packing that unites the two shells that keep an electrical system running under water. A common failure of this packing is revealed by a short circuit that disables the pump; to repair the pump, the packing has to be removed so that it and the faulty electrical system can be replaced. Following a pre-evaluation to identify potential suppliers of the packing, five suppliers were found that could easily supply the material in the region where the assembly company is established. A decision group with two decision makers was formed and given the task of evaluating the five suppliers of the new packing component. The procedure followed to choose the best supplier is shown below.
Four criteria are considered to represent the most significant features of the suppliers:
• Cost (x1): low values are desired.
• Service (x2): good evaluations are desired.
• Management (x3): good evaluations are desired.
• Technology (x4): good evaluations are desired.
Five suppliers are considered for evaluation; the set of suppliers is denoted by A = {A1, A2, A3, A4, A5}.
Step 1. Constitute a group of DMs and determine the importance of each one. Two DMs constitute the group, and their importance is shown in Table 3; the linguistic terms used for the rating are those of Table 1. The weight of each DM is obtained with eq. (10); in this particular case both DMs have the same importance:

λ1 = [0.75 + 0.05 (0.75 / (0.75 + 0.2))] / [2 × (0.75 + 0.05 (0.75 / (0.75 + 0.2)))] = 0.5

Table 3. The importance of the decision makers.
Decision Maker  | 1                 | 2
Linguistic Term | E                 | E
IF number       | {0.75, 0.2, 0.05} | {0.75, 0.2, 0.05}
Weight          | 0.5               | 0.5
Source: The authors.

Table 4. The importance of the criteria.
Decision Maker | x1 | x2 | x3 | x4
DM1            | I  | M  | U  | VU
DM2            | M  | I  | U  | U
Source: The authors.

Table 5. The ratings of the qualitative criteria.
Decision Maker | Supplier | x1 | x2 | x3 | x4
DM1            | A1       | VL | B  | E  | VG
DM1            | A2       | ML | G  | F  | G
DM1            | A3       | VH | VG | VG | VB
DM1            | A4       | MH | VB | VB | B
DM1            | A5       | M  | MB | MG | F
DM2            | A1       | L  | MB | E  | E
DM2            | A2       | M  | VG | B  | VG
DM2            | A3       | H  | E  | G  | B
DM2            | A4       | MH | B  | EB | MB
DM2            | A5       | L  | F  | MB | MB
Source: The authors.
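As a spot-check of Step 3 below, the IFWA aggregation can be applied by hand to the first cell of Table 5. A small sketch (Python; the ifwa helper is ours, restating eq. (11)):

import math

def ifwa(ifns, lambdas):   # IFWA operator, eq. (11)
    mu = 1.0 - math.prod((1.0 - m) ** l for (m, _, _), l in zip(ifns, lambdas))
    nu = math.prod(n ** l for (_, n, _), l in zip(ifns, lambdas))
    return (round(mu, 2), round(nu, 2), round(1.0 - mu - nu, 2))

# Supplier A1 on criterion x1: VL by DM1 and L by DM2 (IFNs from Table 2):
print(ifwa([(0.10, 0.75, 0.15), (0.25, 0.60, 0.15)], [0.5, 0.5]))
# -> (0.18, 0.67, 0.15); the AIFDM in Step 3 displays it rounded to one decimal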

Step 2. Determine the importance of the criteria. The evaluation by each DM of the importance of the criteria, expressed as linguistic terms, is shown in Table 4. The opinions of the DMs are integrated by eq. (11) as follows:

W = [{0.646, 0.300, 0.054}, {0.646, 0.300, 0.054}, {0.350, 0.600, 0.050}, {0.235, 0.735, 0.030}]

Step 3. Construct the AIFDM representing the rating of the alternatives based on the opinions of the decision makers. The ratings given by every DM are shown in Table 5. The AIFDM, obtained by eq. (12) and displayed with entries (μ, ν, π) rounded to one decimal, is:

R =
A1: (0.2, 0.7, 0.2)  (0.3, 0.6, 0.1)  (1.0, 0.0, 0.0)  (1.0, 0.0, 0.0)
A2: (0.5, 0.5, 0.1)  (0.8, 0.1, 0.1)  (0.4, 0.5, 0.1)  (0.8, 0.1, 0.1)
A3: (0.8, 0.1, 0.1)  (1.0, 0.0, 0.0)  (0.8, 0.1, 0.1)  (0.2, 0.7, 0.2)
A4: (0.6, 0.3, 0.1)  (0.2, 0.7, 0.2)  (0.1, 0.8, 0.1)  (0.3, 0.6, 0.1)
A5: (0.4, 0.5, 0.1)  (0.5, 0.5, 0.1)  (0.5, 0.4, 0.1)  (0.5, 0.5, 0.1)

Step 4. Compute the aggregated weighted intuitionistic fuzzy decision matrix. The AWIFDM, obtained using eq. (13), is:

R′ =
A1: (0.1, 0.8, 0.1)  (0.2, 0.8, 0.1)  (0.4, 0.6, 0.1)  (0.2, 0.7, 0.0)
A2: (0.3, 0.6, 0.1)  (0.5, 0.4, 0.1)  (0.1, 0.8, 0.1)  (0.2, 0.8, 0.1)
A3: (0.5, 0.4, 0.1)  (0.6, 0.3, 0.1)  (0.3, 0.7, 0.1)  (0.0, 0.9, 0.0)
A4: (0.4, 0.5, 0.1)  (0.1, 0.8, 0.1)  (0.0, 0.9, 0.0)  (0.1, 0.9, 0.0)
A5: (0.3, 0.6, 0.1)  (0.3, 0.6, 0.1)  (0.2, 0.8, 0.1)  (0.1, 0.9, 0.0)

Step 5. Compute the sum of benefits and costs. Service, Management and Technology (x2, x3, x4) are benefit criteria, while Cost (x1) is a cost criterion. Table 6 shows the sum of the benefit criteria for each alternative, obtained using eq. (14); Table 7 shows the sum of the cost criteria, calculated using eq. (15).

Table 6. Sum of benefit criteria.
Supplier | μ     | ν     | π
A1       | 0.609 | 0.301 | 0.090
A2       | 0.636 | 0.245 | 0.119
A3       | 0.751 | 0.180 | 0.069
A4       | 0.212 | 0.629 | 0.159
A5       | 0.481 | 0.395 | 0.125
Source: The authors.

Table 7. Sum of cost criteria.
Supplier | μ     | ν     | π
A1       | 0.115 | 0.770 | 0.115
A2       | 0.292 | 0.613 | 0.095
A3       | 0.488 | 0.399 | 0.113
A4       | 0.388 | 0.510 | 0.102
A5       | 0.251 | 0.643 | 0.106
Source: The authors.

Step 6. Defuzzification of the sum of benefits and costs. The maximum (benefit) and minimum (cost) criteria are defuzzified using eq. (16); the results are shown in Tables 8 and 9.

Table 8. Defuzzification of benefits.
Supplier | Crisp
A1       | 0.641
A2       | 0.675
A3       | 0.767
A4       | 0.320
A5       | 0.538
Source: The authors.

Table 9. Defuzzification of costs.
Supplier | Crisp
A1       | 0.207
A2       | 0.354
A3       | 0.540
A4       | 0.445
A5       | 0.323
Source: The authors.

Step 7. Compute the contribution of each alternative. The contribution of each alternative is computed using eq. (17); Table 10 shows the ratio for each alternative and its ranking.

Table 10. Contribution and rank of each alternative.
Supplier | y      | Rank
A1       | 0.434  | 1
A2       | 0.321  | 2
A3       | 0.227  | 3
A4       | -0.124 | 5
A5       | 0.215  | 4
Source: The authors.

Step 8. Rank the alternatives. The alternatives are ranked as A1 ≻ A2 ≻ A3 ≻ A5 ≻ A4; alternative A1 is therefore selected as the best supplier.

6. Conclusions

This paper presented a hybrid of MOORA and intuitionistic fuzzy sets for supplier selection. The method consists of eight steps, and a numerical example was presented in order to illustrate it. The proposed methodology provides a robust hybrid technique that can assist decision makers in selecting the best alternative, since IFS have the capability of handling subjective information, which provides greater flexibility for solving problems in the field of decision-making. As future work, it would be interesting to apply IF-MOORA in other areas involving the selection of alternatives, for example robot selection, personnel selection or project selection. Finally, it is suggested that comparisons be made with other methods and that the results be evaluated.

References

[1] Chou, S. and Chang, Y., A decision support system for supplier selection based on a strategy-aligned fuzzy SMART approach, Expert Systems with Applications, 34 (4), pp. 2241-2253, 2008. DOI: 10.1016/j.eswa.2007.03.001
[2] Arango, M., Adarme, W. and Zapata, J., Collaborative inventory in supply chain optimization, DYNA, 80 (181), pp. 71-80, 2013.
[3] Deng, X., Hu, Y., Deng, Y. and Mahadevan, S., Supplier selection using AHP methodology extended by D numbers, Expert Systems with Applications, 41 (1), pp. 156-167, 2014. DOI: 10.1016/j.eswa.2013.07.018
[4] Kannan, D., Beatriz, A., Sousa, L. de, José, C. and Jabbour, C., Selecting green suppliers based on GSCM practices: Using fuzzy TOPSIS applied to a Brazilian electronics company, European Journal of Operational Research, 233, pp. 432-447, 2014. DOI: 10.1016/j.ejor.2013.07.023
[5] Avelar, L., García, J., Cedillo, G. and Adarme, W., Effects of regional infrastructure and offered services in the supply chains performance: Case Ciudad Juarez, DYNA, 81 (186), pp. 1-13, 2014.
[6] Rouyendegh, B. and Saputro, T., Supplier selection using integrated fuzzy TOPSIS and MCGP: A case study, Procedia - Social and Behavioral Sciences, 116, pp. 3957-3970, 2014. DOI: 10.1016/j.sbspro.2014.01.874
[7] Lima, F., Osiro, L. and Carpinetti, L., A comparison between fuzzy AHP and fuzzy TOPSIS methods to supplier selection, Applied Soft Computing, 21, pp. 194-209, 2014. DOI: 10.1016/j.asoc.2014.03.014
[8] Dargi, A., Anjomshoae, A., Galankashi, M., Memari, A. and Tap, M., Supplier selection: A fuzzy-ANP approach, Procedia - Computer Science, 31, pp. 691-700, 2014. DOI: 10.1016/j.procs.2014.05.317
[9] Khodadadzadeh, T. and Sadjadi, S., A state-of-art review on supplier selection problem, Decision Science Letters, 2 (2), pp. 59-70, 2013. DOI: 10.5267/j.dsl.2013.03.001


and Ngai, E., Application of decision-making techniques in supplier selection: A systematic review of literature, Expert Systems with Applications, 40 (10), pp. 3872–3885, 2013. DOI: 10.1016/j.eswa.2012.12.040 [31] Patil, S. and Kant, R., A fuzzy AHP-TOPSIS framework for ranking the solutions of knowledge management adoption in supply chain to overcome its barriers, Expert Systems with Applications, 41 (2), pp. 679–693, 2014. DOI: 10.1016/j.eswa.2013.07.093

[32] Beg, I. and Rashid, T., Multi-criteria trapezoidal valued intuitionistic fuzzy decision making with choquet integral based TOPSIS, Opsearch, 51 (1), pp. 98–129, 2013. DOI: 10.1007/s12597-013-01345 [33] Behzadian, M., Khanmohammadi, S., Yazdani, M. and Ignatius, J., A state-of the-art survey of TOPSIS applications, Expert Systems with Applications, 39 (17), pp. 13051–13069, 2012. DOI: 10.1016/j.eswa.2012.05.056 [34] Önüt, S., Kara, S. and Isik, E., Long term supplier selection using a combined fuzzy MCDM approach: A case study for a telecommunication company, Expert Systems with Applications, 36 (2), pp. 3887-3895, 2009. DOI: 10.1016/j.eswa.2008.02.045 [35] Rex, E., Wu, T. and Shunk, D., A stochastic AHP decision making methodology for imprecise preferences, Information Sciences, 270, pp. 192–203, 2014. DOI: 10.1016/j.ins.2014.02.077 [36] Du, Y., Gao, C., Hu, Y., Mahadevan, S. and Deng, Y., A new method of identifying influential nodes in complex networks based on TOPSIS. Physica A, 399, pp. 57–69, 2014. DOI: 10.1016/j.physa.2013.12.031 [37] Kurka, T., Application of the analytic hierarchy process to evaluate the regional sustainability of bioenergy developments, Energy, 62, pp. 393–402, 2013. DOI: 10.1016/j.energy.2013.09.053 [38] Wang, T.-C. and Chen, Y.-H., Applying fuzzy linguistic preference relations to the improvement of consistency of fuzzy AHP, Information Sciences, 178,(19), pp. 3755–3765, 2008. DOI: 10.1016/j.ins.2008.05.028 [39] Vega, A., Aguarón, J., García-Alcaraz, J. and Moreno-Jiménez, J. Notes on Dependent Attributes in TOPSIS. Procedia - Computer Science, 31, pp. 308–317, 2014. DOI: 10.1016/j.procs.2014.05.273 [40] Ashtiani, B., Haghighirad, F., Makui, A. and Montazer, G., Extension of fuzzy TOPSIS method based on interval-valued fuzzy sets, Applied Soft Computing, 9 (2), pp. 457–461, 2009. DOI: 10.1016/j.asoc.2008.05.005 [41] Villanueva-Ponce, R. and García-Alcaraz, J.L., Evaluation of technology using TOPSIS in presence of multi-collinearity in attributes: Why use the Mahalanobis distance, Revista de la Facultad de Ingeniería. Univ. Antioquia, 67, pp. 31–42, 2013. [42] Li, K. and Wang, Z., Notes on multicriteria fuzzy decision-making method based on a novel accuracy function under interval-valued intuitionistic fuzzy environment, Journal of Systems Science and Systems Engineering, 19 (4), pp. 504–508, 2010. DOI: 10.1007/s11518-010-5152-8 [43] Li, D.-F., Multiattribute decision-making models and methods using intuitionistic fuzzy sets, Journal of Computer and System Sciences, 70 (1), pp. 73–85, 2005. DOI: 10.1016/j.jcss.2004.06.002 [44] Liu, H.-W. and Wang, G.-J., Multi-criteria decision-making methods based on intuitionistic fuzzy sets, European Journal of Operational Research, 179 (1), pp. 220–233, 2007. DOI: 10.1016/j.ejor.2006.04.009 [45] Xu, Z. and Yager, R., Dynamic intuitionistic fuzzy multi-attribute decision making, International Journal of Approximate Reasoning, 48 (1), pp. 246–262, 2008. DOI: 10.1016/j.ijar.2007.08.008 [46] Chen, R.-Y., A problem-solving approach to product design using decision tree induction based on intuitionistic fuzzy, European Journal of Operational Research, 196 (1), pp. 266–272, 2009. DOI: 10.1016/j.ejor.2008.03.009 [47] Liao, H. and Xu, Z., Intuitionistic fuzzy hybrid weighted aggregation operators, International Journal of Intelligent Systems, 29 (11), pp. 971–993, 2014. DOI: 10.1002/int.21672 [48] Chen, Y. 
and Li, B., Dynamic multi-attribute decision-making model based on triangular intuitionistic fuzzy numbers, Scientia Iranica, 18 (2), pp. 268–274, 2011. DOI: 10.1016/j.scient.2011.03.022 [49] Yu, D., Group decision making based on generalized intuitionistic fuzzy prioritized geometric operator, International Journal of Intelligent Systems, 27, pp. 635–661, 2012. DOI: 10.1002/int.21538 [50] Yu, D., Wu, Y. and Lu, T., Interval-valued intuitionistic fuzzy prioritized operators and their application in group decision making, Knowledge-Based Systems, 30, pp. 57–66, 2012. DOI: 10.1016/j.knosys.2011.11.004 [51] Zhang, H. and Yu, L., MADM method based on cross-entropy and extended TOPSIS with interval-valued intuitionistic fuzzy sets, 40



Knowledge-Based Systems, 30 (1), pp. 115-120, 2012. DOI: 10.1016/j.knosys.2012.01.003
[52] Boran, F., Genç, S., Kurt, M. and Akay, D., A multi-criteria intuitionistic fuzzy group decision making for supplier selection with TOPSIS method, Expert Systems with Applications, 36, pp. 11363-11368, 2009. DOI: 10.1016/j.eswa.2009.03.039
[53] Xu, Z., Intuitionistic fuzzy aggregation operators, IEEE Transactions on Fuzzy Systems, 15 (6), pp. 1179-1187, 2007. DOI: 10.1109/TFUZZ.2006.890678
[54] Zhang, X. and Xu, Z., A new method for ranking intuitionistic fuzzy values and its application in multi-attribute decision making, Fuzzy Optimization and Decision Making, 11 (2), pp. 135-146, 2012. DOI: 10.1007/s10700-012-9118-9
[55] Aloini, D., Dulmin, R. and Mininno, V., A peer IF-TOPSIS based decision support system for packaging machine selection, Expert Systems with Applications, 41 (5), pp. 2157-2165, 2014. DOI: 10.1016/j.eswa.2013.09.014
[56] Chen, T., The extended linear assignment method for multiple criteria decision analysis based on interval-valued intuitionistic fuzzy sets, Applied Mathematical Modelling, 38 (7-8), pp. 2101-2117, 2014. DOI: 10.1016/j.apm.2013.10.017
[57] Yue, Z., TOPSIS-based group decision-making methodology in intuitionistic fuzzy setting, Information Sciences, 277, pp. 141-153, 2014. DOI: 10.1016/j.ins.2014.02.013
[58] Atanassov, K., Intuitionistic fuzzy sets, Fuzzy Sets and Systems, 20, pp. 87-96, 1986. DOI: 10.1016/S0165-0114(86)80034-3
[59] Atanassov, K., New operations defined over the intuitionistic fuzzy sets, Fuzzy Sets and Systems, 61, pp. 137-142, 1994. DOI: 10.1016/0165-0114(94)90229-1
[60] Brauers, W., Zavadskas, E., Peldschus, F. and Turskis, Z., Multi-objective decision-making for road design, Transport, 23 (3), pp. 183-193, 2008. DOI: 10.3846/1648-4142.2008.23.183-193
[61] Kalibatas, D. and Turskis, Z., Multicriteria evaluation of inner climate by using MOORA method, Information Technology and Control, 37 (1), pp. 79-83, 2008.
[62] García, J., Villanueva, R., Alvarado, A. and Maldonado, A., Evaluación y selección de proveedores usando el método MOORA, Academia Journals, (2), pp. 454-459, 2012.
[63] Stanujkic, D., Magdalinovic, N., Stojanovic, S. and Jovanovic, R., Extension of ratio system part of MOORA method for solving decision-making problems with interval data, Informatica, 23 (1), pp. 141-154, 2012.

I. Rodríguez-Borbón is currently an assistant professor in the Department of Industrial and Manufacturing Engineering at the Autonomous University of Ciudad Juarez, Mexico. He obtained his PhD in Industrial Engineering; his research is related to the quality and reliability fields.
O. Vergara-Villegas was born in Cuernavaca, Morelos, México, on July 3, 1977. He completed his BSc degree in computer engineering at the Instituto Tecnológico de Zacatepec, México, in 2000, his MSc in computer science at the Center of Research and Technological Development (CENIDET) in 2003, and his PhD in computer science at CENIDET in 2006. He currently serves as a professor at the Autonomous University of Ciudad Juárez, Chihuahua, México. He has been a senior member of the IEEE Computer Society since 2012. His fields of interest include pattern recognition, digital image processing, augmented reality and mechatronics.


L. Pérez-Domínguez was born in Jalapa, Tabasco, México, on August 1, 1977. He completed a BSc in industrial engineering at the Instituto Tecnológico de Villahermosa, Tabasco, México, in 2000, and an MSc degree in industrial engineering at the Instituto Tecnológico de Ciudad Juárez, Chihuahua, México, in 2003. He is currently a doctoral student in the Science of Engineering at the Autonomous University of Ciudad Juárez, Chihuahua, México. His research interests include multiple criteria decision making and continuous improvement tools applied in the manufacturing field.


A. Alvarado-Iniesta is currently an assistant professor in the Department of Industrial and Manufacturing Engineering at the Autonomous University of Ciudad Juarez, Mexico. He obtained his Bachelor's degree in Electronics Engineering, an MSc degree in Industrial Engineering and a PhD in Engineering with a specialization in Industrial Engineering. His research interests are in the optimization and control of manufacturing processes such as plastic injection molding. His areas of research focus on methodologies such as fuzzy logic and artificial neural networks employed as surrogate models, as well as evolutionary algorithms and swarm intelligence.



A relax and cut approach using the multi-commodity flow formulation for the traveling salesman problem

Makswell Seyiti Kawashima a, Socorro Rangel b, Igor Litvinchev c & Luis Infante d

a UNESP - Univ Estadual Paulista, São José do Rio Preto, SP, Brazil. maksmx@gmail.com
b UNESP - Univ Estadual Paulista, São José do Rio Preto, SP, Brazil. socorro@ibilce.unesp.br
c UANL - Universidad Autónoma de Nuevo León, San Nicolás de los Garza, NL, México. igorlitvinchev@gmail.com
d UANL - Universidad Autónoma de Nuevo León, San Nicolás de los Garza, NL, México. luisinfanterivera@gmail.com

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract
In this paper we explore the multi-commodity flow formulation for the Asymmetric Traveling Salesman Problem (ATSP) to obtain dual bounds. The procedure employed is a variant of a relax and cut procedure proposed in the literature; it computes the Lagrangean multipliers associated with the subtour elimination constraints while preserving the optimality of the multipliers associated with the assignment constraints. The results of the computational study are encouraging and show that the proposed algorithm generates good dual bounds for the ATSP with a low execution time.
Keywords: traveling salesman problem; relax and cut; Lagrangean relaxation.


1. Introduction

The Traveling Salesman Problem (TSP) has been the subject of many works, beginning with the seminal paper of Dantzig, Fulkerson, and Johnson in 1954 [3]. The applications vary from everyday routing problems, e.g. [1,17], to production planning problems [13]. Many of these works also discuss models and solution approaches for the TSP. For the majority of solution approaches it is important to have good primal and dual bounds; the latter can be obtained by exploring different types of relaxations. The linear relaxation of a mixed integer formulation of an optimization problem can provide a dual bound, and its quality

depends on how close the formulation is to the convex hull of solutions. Oncan et al. [14] review several mathematical formulations for the ATSP and discuss the quality of the associated bounds. The difference among the formulations is how the subtour elimination constraints are formulated. The formulation presented in [3], known as DFJ, provides a stronger dual bound and has been the basis for several solution methods for the ATSP [e.g. 16]. However it has an exponential number of subtour elimination constraints. Another formulation is the multi-commodity flow formulation (MC-ATSP) and it uses a polynomial number of constraints to eliminate subtours. It is as strong as the DFJ formulation; however, it might present difficulties when it comes to solving the associated linear relaxation.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 42-50. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51144



Due to the computational effort necessary to solve the linear relaxation of the MC-ATSP, Rocha, Fernandes, and Soares [16] apply a Lagrangean relaxation to derive dual bounds for the ATSP. In the present paper, we also explore Lagrangean bounds for the MC-ATSP formulation. However, instead of dualizing all the subtour elimination constraints at once, we dualize only the ones that are violated by the current solution of the relaxed problem. This idea has been denominated the Relax and Cut procedure. It was introduced in the works of Balas and Christofides [2] and Gavish [7], and it has been the subject of many studies in recent years, although it has not always been referred to as such [2,4-6,8,9,16,18]. To briefly describe the relax and cut method, consider an integer optimization problem (IP) defined by (1)-(4):

\min\ cx \quad (1)

subject to

Ax \ge b \quad (2)
Dx \ge d \quad (3)
x \in \mathbb{Z}^n_+ . \quad (4)

A relaxation of (IP) can be obtained by removing the constraints (2), and is denominated (RP). Let x̄ be the optimal solution of (RP), and let ax \ge a_0 be a constraint of (IP) that is violated by x̄. A Lagrangean type relaxation of problem (IP) can be built by dualizing the violated constraint using λ \in \mathbb{R}_+, as stated in problem (LRP) defined by (5)-(7):

\min\ cx + \lambda (a_0 - ax) \quad (5)

subject to

Dx \ge d \quad (6)
x \in \mathbb{Z}^n_+ . \quad (7)

Fixing the value of λ, it is possible to obtain a dual bound for problem (IP). The best bound that can be obtained by the relaxation (LRP) is found by solving the associated dual problem stated in (8):

\max_{\lambda \ge 0}\ v(LRP(\lambda)). \quad (8)

Having solved the problem (8), it might be possible to improve the dual bound obtained so far by identifying new valid inequalities for (IP) that are not satisfied by the current solution of (8), reformulating the relaxation (LRP) by dualizing them in a Lagrangean fashion, and solving the new Lagrangean dual problem. This procedure has been coined by Lucena [11] as a Delayed Relax and Cut method. A different procedure, coined as Non-Delayed Relax and Cut in [12], identifies new violated valid inequalities and reformulates the relaxation each time a multiplier is updated.

The remainder of this paper is organized as follows. The non-delayed relax and cut method applied to a generic formulation of the ATSP is presented in Section 2. In Section 2.1, the procedure presented in [2] for the DFJ formulation is briefly described, followed by the description of our proposal to adapt it to the MC-ATSP formulation, presented in Section 2.2. A numerical study comparing the two procedures is presented in Section 3, and concluding remarks are given in Section 4. This paper is an extension of work presented at CILOG 2014 [10].

2. The non-delayed relax and cut method applied to the ATSP

Consider a graph G = (V, A), with |V| = n, |A| = m, and a cost c_ij for each arc (i, j) ∈ A. A generic mathematical formulation for the ATSP is stated in (9)-(13) and is denominated (GATSP):

\min \sum_{(i,j) \in A} c_{ij} x_{ij} \quad (9)

subject to

\sum_{j:(i,j) \in A} x_{ij} = 1, \quad i \in V \quad (10)
\sum_{i:(i,j) \in A} x_{ij} = 1, \quad j \in V \quad (11)
\sum_{(i,j) \in A} a^t_{ij} x_{ij} \ge a^t_0, \quad t \in T \quad (12)
x_{ij} \in \{0, 1\}, \quad (i,j) \in A. \quad (13)

The variable x_ij defines whether city j succeeds city i in the Hamiltonian cycle. The objective function (9) states the search for the minimum cost Hamiltonian cycle. Constraints (10)-(11) guarantee that each city is included exactly once in the Hamiltonian cycle. The constraint set (12) states the usual subtour elimination constraints in a generic format [2]. If constraints (12) are dropped, we obtain a relaxation of the ATSP; this problem is known as the Assignment Problem (AP). Given the properties of the constraint matrix of (AP), it can be solved as a continuous linear optimization problem. Let x̄ be the optimal primal solution to the continuous version of (AP), (ū, v̄) be the associated optimal dual solution and B be the associated optimal basis. If x̄ is feasible to the GATSP, we are done. Otherwise, it is of interest to compute strong primal and dual bounds for the GATSP. In what follows, the non-delayed relax and cut method is applied to the GATSP formulation in order to obtain dual bounds. Let λ_t, t ∈ T, be the Lagrangean multipliers associated with constraints (12) and X be the feasible set associated with the relaxation (AP). Dualizing the constraints (12) we get the Lagrangean function (14):

L(\lambda) = \min_{x \in X} \left\{ \sum_{(i,j) \in A} c_{ij} x_{ij} + \sum_{t \in T} \lambda_t \left( a^t_0 - \sum_{(i,j) \in A} a^t_{ij} x_{ij} \right) \right\}. \quad (14)
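Because the cycles of the assignment solution are exactly the subtours that constraints (12) must eliminate, decomposing the successor permutation given by x̄ into its cycles is the natural first step of any implementation. The following minimal C++ sketch (ours, not the authors' code; the 6-city permutation is hypothetical) illustrates the idea:

#include <cstdio>
#include <vector>

// Decompose an AP solution, given as succ[i] = city visited after i,
// into its cycles; each cycle is one subtour of the relaxation.
std::vector<std::vector<int>> subtours(const std::vector<int>& succ) {
    std::vector<bool> seen(succ.size(), false);
    std::vector<std::vector<int>> cycles;
    for (int i = 0; i < (int)succ.size(); ++i) {
        if (seen[i]) continue;
        std::vector<int> cyc;
        for (int j = i; !seen[j]; j = succ[j]) { seen[j] = true; cyc.push_back(j); }
        cycles.push_back(cyc);
    }
    return cycles;  // a single cycle means the AP solution is already a tour
}

int main() {
    std::vector<int> succ = {1, 2, 0, 4, 5, 3};  // hypothetical AP optimum
    for (const auto& c : subtours(succ)) {
        for (int v : c) std::printf("%d ", v);
        std::printf("\n");                       // prints: 0 1 2  and  3 4 5
    }
    return 0;
}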



A dual bound (DL) for the ATSP can be obtained by solving the associated Lagrangean dual (15):

v(DL) = \max_{\lambda \ge 0} L(\lambda). \quad (15)

Several methods can be used to solve (15), among them the subgradient method and the Volume algorithm (e.g. [8,16]). Balas and Christofides [2] propose a non-delayed relax and cut algorithm in which the search for the optimal Lagrangean multipliers is done by searching for λ values that maintain the optimality of the primal solution x̄ for the relaxation (AP). That is, the search for λ ≥ 0 that improves the dual bound given by the relaxation (AP), v(AP), is limited to dual solutions such that conditions (16) and (17) are satisfied:

c_{ij} - \bar{u}_i - \bar{v}_j - \sum_{t \in T} \lambda_t a^t_{ij} \ge 0, \quad \text{for } (i,j) \text{ such that } \bar{x}_{ij} = 0, \quad (16)
c_{ij} - \bar{u}_i - \bar{v}_j - \sum_{t \in T} \lambda_t a^t_{ij} = 0, \quad \text{for } (i,j) \text{ such that } \bar{x}_{ij} = 1. \quad (17)

That is, conditions (16) and (17) impose the search for λ among the values that guarantee feasible solutions to the dual of the problem (AP), problem (DAP) defined by (18)-(20):

\max \sum_{i \in V} u_i + \sum_{j \in V} v_j + \sum_{t \in T} \lambda_t a^t_0 \quad (18)

subject to

u_i + v_j + \sum_{t \in T} \lambda_t a^t_{ij} \le c_{ij}, \quad (i,j) \in A \quad (19)
\lambda_t \ge 0, \quad t \in T. \quad (20)

Let

\Lambda = \{\lambda \ge 0 : (16) \text{ and } (17) \text{ hold}\}. \quad (21)

The Lagrangean dual problem (15) can be restated as (22):

\max_{\lambda \in \Lambda} L(\lambda). \quad (22)

Note 1. We say that a constraint admits a positive multiplier if there is λ_t > 0 such that B is optimal to (AP) and, when the constraint is dualized in a Lagrangean fashion, it gives a better dual bound.

Proposition 1. [2] If only a subset T' ⊆ T of constraints (12) is used to build the Lagrangean function L(λ), then:

L(\lambda) = \sum_{i \in V} \bar{u}_i + \sum_{j \in V} \bar{v}_j + \sum_{t \in T'} \lambda_t a^t_0 . \quad (23)

The procedure proposed in [2] attempts to improve the dual bounds for the ATSP iteratively while keeping the solutions x̄ and (ū, v̄), respectively, primal and dual optimal for (AP). The procedure identifies valid inequalities that are violated by x̄,

\sum_{(i,j) \in A} a^t_{ij} \bar{x}_{ij} < a^t_0, \quad (24)

and that admit a positive multiplier,

\lambda_t > 0 \text{ with } \lambda \in \Lambda. \quad (25)

Once valid inequalities that satisfy (24) and (25) are identified, they are included in the (AP) formulation and dualized in a Lagrangean fashion with the maximum possible value that satisfies (16) and (17). The addition of new constraints to the reformulated (AP) implies the addition of new variables to the dual problem (18)-(20), in the term \sum_{t \in T'} \lambda_t a^t_0 of (23), so improving the dual bound given by the partial Lagrangean function L(λ). Given that violated constraints are identified (cut) and used to build a new Lagrangean relaxation of the problem, it is a relax and cut procedure. Moreover, since the cuts are identified each time a new Lagrangean solution is found, the procedure proposed in [2] can be called a non-delayed relax and cut procedure, or simply RCP. In what follows, we specify how to obtain valid inequalities that satisfy (24) and (25).

2.1. The relax and cut procedure for the formulation DFJ-ATSP

Balas and Christofides [2] develop the RCP procedure for three types of subtour elimination constraints (cut set, clique and articulation point). To identify violated inequalities that admit positive multipliers, they consider an auxiliary spanning graph G° = (V, A°), in which there is an arc in A° for each variable with zero reduced cost (i.e., an arc for each variable that satisfies (19) at equality). In the case of the cut set constraints, if (S_t, S̄_t) denotes the set of arcs leaving S_t ⊂ V, the cut set constraint associated with S_t is

\sum_{i \in S_t} \sum_{j \notin S_t} x_{ij} \ge 1, \quad (26)

and it admits a positive multiplier if and only if condition (27) is satisfied:

(S_t, \bar{S}_t) \cap A° = \emptyset. \quad (27)

More details of how to identify violated cut set constraints can be found in [2] and [10].
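Before detailing the search restricted to Λ, it is worth contrasting it with the generic alternative mentioned above. A minimal C++ sketch of a textbook subgradient ascent for a dual of the form (8)/(15) follows; it runs on a deliberately tiny toy problem (one dualized constraint over two binary variables), all data is hypothetical, and none of this is the authors' implementation:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Toy Lagrangean subproblem: min over x in {0,1}^2 of c.x + lambda*(1 - x0 - x1),
// i.e. the constraint x0 + x1 >= 1 is dualized. Returns L(lambda) and the
// subgradient g = 1 - x0 - x1.
static void oracle(double lambda, const double c[2], double& L, double& g) {
    int x[2];
    for (int i = 0; i < 2; ++i) x[i] = (c[i] - lambda < 0.0) ? 1 : 0;
    L = lambda;  // constant term lambda * 1
    for (int i = 0; i < 2; ++i) L += (c[i] - lambda) * x[i];
    g = 1.0 - x[0] - x[1];
}

int main() {
    const double c[2] = {3.0, 5.0};
    double lambda = 0.0, best = -1e100, ub = 3.0;  // ub: cost of x = (1,0)
    for (int it = 0; it < 100; ++it) {
        double L, g;
        oracle(lambda, c, L, g);
        best = std::max(best, L);
        if (std::fabs(g) < 1e-12) break;           // lambda is optimal
        double step = 0.5 * (ub - L) / (g * g);    // Polyak-type step
        lambda = std::max(0.0, lambda + step * g); // projected ascent
    }
    std::printf("best dual bound = %.4f\n", best); // approaches 3, the optimum
    return 0;
}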



2.2. The Relax and Cut procedure for the ATSP multi-commodity formulation

A strong formulation for the ATSP, with a polynomial number of constraints, is based on the multi-commodity network flow problem (e.g. [14]). In this formulation, the subtour elimination constraints are formulated in terms of a flow of n-1 commodities in a network. The reasoning is based on the assumption that there are n-1 commodities available at city 1 and a demand of one unit of commodity k, k ≠ 1, at city k. The formulation is an extended one in the sense that, besides the binary assignment variables x_ij, a set of continuous variables y^k_ij is defined to represent the flow of commodity k through the arc (i, j). The multi-commodity subtour elimination constraints are defined by (28)-(31):

\sum_{j} y^k_{1j} - \sum_{j} y^k_{j1} = 1, \quad k \in V \setminus \{1\} \quad (28)
\sum_{i} y^k_{ik} - \sum_{j} y^k_{kj} = 1, \quad k \in V \setminus \{1\} \quad (29)
\sum_{i} y^k_{ij} - \sum_{l} y^k_{jl} = 0, \quad j, k \in V \setminus \{1\}, \ j \ne k \quad (30)
0 \le y^k_{ij} \le x_{ij}, \quad (i,j) \in A, \ k \in V \setminus \{1\}. \quad (31)

The constraints (28) guarantee that there is one unit of commodity k available at node 1 and that this product cannot flow back to node 1. For each node k ∈ V\{1}, constraint (29) imposes that the demand for product k at node k is met. Constraints (30) are the flow conservation constraints. Finally, constraints (31) impose that the flow of product k goes through arc (i, j) only if this arc is included in a path from node 1 to node k. The multi-commodity formulation for the ATSP, MC-ATSP, is given by (9)-(11), (13) and (28)-(31).

In what follows, we detail the RCP procedure defined in Section 2 to obtain dual bounds for the ATSP considering the MC-ATSP formulation. In an optimal solution x̄ to the Assignment Problem (AP) that includes subtours, the multi-commodity flow constraints (28)-(31) are violated for any node k ∈ V that is not in the same subtour that includes node 1, denoted hereafter as S_1. That is, condition (24) is satisfied for every k ∈ V \ S_1. Let us now derive the conditions to identify, among the violated constraints, the ones that admit positive multipliers. In order to do that, let us build the dual problem associated with the linear relaxation of MC-ATSP. For each commodity k ∈ V\{1}, define α_k, β_k and γ_jk as the dual variables associated with (28), (29) and (30), respectively, and let μ^k_ij be the dual variable associated with (31). Let d^k_ij, e^k_ij, f^kl_ij and g^k_ij be the nonzero coefficients of the flow variable y^k_ij in constraints (28), (29), (30) and (31), respectively. A column of the constraint matrix of the MC-ATSP associated with the variable y^k_ij is represented in Fig. 1, in which the coefficients are defined according to (32)-(35):

d^k_{ij} = 1 \text{ if } i = 1; \ -1 \text{ if } j = 1; \ 0 \text{ otherwise}. \quad (32)
e^k_{ij} = 1 \text{ if } j = k; \ -1 \text{ if } i = k; \ 0 \text{ otherwise}. \quad (33)
f^{kl}_{ij} = 1 \text{ if } j = l; \ -1 \text{ if } i = l; \ 0 \text{ otherwise}, \quad l \in V \setminus \{1, k\}. \quad (34)
g^k_{ij} = 1. \quad (35)

Figure 1. The columns of the flow variables.
Source: The authors

The dual problem associated with the MC-ATSP formulation is defined by (36)-(39):

\max \sum_{i \in V} u_i + \sum_{j \in V} v_j + \sum_{k \in V \setminus \{1\}} (\alpha_k + \beta_k) \quad (36)

subject to

u_i + v_j + \sum_{k \in V \setminus \{1\}} \mu^k_{ij} \le c_{ij}, \quad (i,j) \in A \quad (37)
\alpha_k d^k_{ij} + \beta_k e^k_{ij} + \sum_{l \in V \setminus \{1,k\}} \gamma_{lk} f^{kl}_{ij} - \mu^k_{ij} \le 0, \quad (i,j) \in A, \ k \in V \setminus \{1\} \quad (38)
\mu^k_{ij} \ge 0, \quad (i,j) \in A, \ k \in V \setminus \{1\}. \quad (39)

The main idea of the RCP is to dualize only the multi-commodity constraints that have a positive multiplier and therefore guarantee an improvement in the quality of the Lagrangean dual bound. As we can see in the objective function (36), to obtain better bounds it is necessary to identify constraints such that condition (40) is met, in which T' is the set of dualized constraints:

\sum_{k \in T'} (\alpha_k + \beta_k) > 0. \quad (40)

The multipliers μ^k_ij, associated with constraints (31), link the dual constraints (37) and (38). To guarantee (37), let them be fixed so that their sum equals the current value of the reduced cost, ĉ_ij, associated with the primal variable x_ij; see expression (41):

\sum_{k' \in T'} \mu^{k'}_{ij} = \hat{c}_{ij}. \quad (41)

To simplify the notation, fixing μ^k_ij to the values defined in (41), constraints (37) and (38) can be replaced by (42) in the dual problem:



\alpha_k d^k_{ij} + \beta_k e^k_{ij} + \sum_{l \in V \setminus \{1,k\}} \gamma_{lk} f^{kl}_{ij} \le \hat{c}_{ij}, \quad (i,j) \in A, \ k \in V \setminus \{1\}. \quad (42)

Now it is necessary to derive feasible values for the dual variables α_k, β_k and γ_jk, j ∈ V\{1, k}. There are two cases.

In the first case we consider arcs (i, j) with x̄_ij = 1. To simplify the exposition, suppose that no subtour elimination constraints have been dualized yet, T' = ∅, and take the subtour S_k = {k, k_2, ..., k_|S_k|} that contains node k, with x̄_{k k_2} = 1. Then the dual constraint (42) associated with the variable y^k_{k k_2} is

-\beta_k + \gamma_{k_2 k} \le \hat{c}_{k k_2}. \quad (43)

The associated slack variable is zero, and so for β_k and γ_{k_2 k} we obtain (44) and (45):

\hat{c}_{k k_2} = 0, \quad (44)
\gamma_{k_2 k} = \beta_k. \quad (45)

Consider now a second node k_3 ∈ S_k such that x̄_{k_2 k_3} = 1; similarly we have

\gamma_{k_3 k} - \gamma_{k_2 k} \le \hat{c}_{k_2 k_3} = 0. \quad (46)

As the associated slack (the reduced cost) is zero, we get the equality γ_{k_3 k} = γ_{k_2 k}, which is also valid for the other nodes of S_k, that is,

\gamma_{jk} = \beta_k, \quad \forall j \in S_k \setminus \{k\}. \quad (47)

Consider now a node in the same subtour as node 1, l_2 ∈ S_1 = {1, l_2, ..., l_|S_1|}, such that x̄_{1 l_2} = 1. According to (42),

\alpha_k + \gamma_{l_2 k} \le \hat{c}_{1 l_2} = 0, \quad (48)

and, the slack being zero, γ_{l_2 k} = -α_k. Continuing this reasoning we have

\gamma_{jk} = -\alpha_k, \quad \forall j \in S_1 \setminus \{1\}. \quad (49)

In general, for a subtour S and any distinct nodes p, q ∈ S we have (50):

\gamma_{pk} = \gamma_{qk}. \quad (50)

Consider now the case x̄_lm = 0. Let S_p and S_q be two distinct subtours and let l and m be nodes of S_p and S_q, both different from 1 and k. Consider the arc (l, m) and the respective constraint (42):

\gamma_{mk} - \gamma_{lk} \le \hat{c}_{lm}. \quad (51)

As l and m do not belong to the same subtour, x̄_lm = 0. Moreover, as x̄ is the optimal solution to (AP), it is possible to say that ĉ_lm ≥ 0. Since (50) holds, picking one node in each subtour, l ∈ S_p and m ∈ S_q, we have

\gamma_{mk} - \gamma_{lk} \le \min\{\hat{c}_{lm} : l \in S_p, m \in S_q\}. \quad (52)

Exchanging the roles of the two subtours, the constraints on the arcs from S_q to S_p give

\gamma_{lk} - \gamma_{mk} \le \min\{\hat{c}_{ml} : m \in S_q, l \in S_p\}. \quad (53)

The inequality (53) is valid for any pair of distinct subtours. Then:

-\min\{\hat{c}_{ml} : m \in S_q, l \in S_p\} \le \gamma_{mk} - \gamma_{lk}. \quad (54)

Consider in particular S_p = S_1 and S_q = S_k. By (47) and (49), for l ∈ S_1\{1} and m ∈ S_k\{k},

\gamma_{mk} - \gamma_{lk} = \alpha_k + \beta_k, \quad (55)

so that, by (52),

\alpha_k + \beta_k \le \min\{\hat{c}_{lm} : l \in S_1, m \in S_k\}, \quad (56)

and, by (54),

\alpha_k + \beta_k \ge -\min\{\hat{c}_{ml} : m \in S_k, l \in S_1\}. \quad (57)

We can restate (56) and (57) as

-\min\{\hat{c}_{ml} : m \in S_k, l \in S_1\} \le \alpha_k + \beta_k \le \min\{\hat{c}_{lm} : l \in S_1, m \in S_k\}. \quad (58)

From (54) and (58) we have, for any pair of distinct subtours S_p and S_q,

\max\{-\hat{c}_{ml} : m \in S_q, l \in S_p\} \le \gamma_{mk} - \gamma_{lk} \le \min\{\hat{c}_{lm} : l \in S_p, m \in S_q\}. \quad (59)

To obtain (59) we supposed that l and m were different from both 1 and k. Using a similar argument, and considering node 1 as representing the subtour S_1 and node k as representing S_k, we can derive

\max\{-\hat{c}_{ml} : m \in S_k, l \in S_1\} \le \alpha_k + \beta_k \le \min\{\hat{c}_{lm} : l \in S_1, m \in S_k\}, \quad (60)

which gives bounds that keep the dual feasibility of the optimal basis B. If min{ĉ_lm : l ∈ S_1, m ∈ S_k} = 0, then the constraints associated with S_k do not admit positive multipliers. Otherwise, the maximum possible values for α_k + β_k can lead to an improved dual bound.
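As a hypothetical numerical illustration of (60): if S_1 = {1, 2}, S_k = {5, 6} and the reduced costs on the arcs from S_1 to S_k are ĉ_15 = 4, ĉ_16 = 6, ĉ_25 = 5 and ĉ_26 = 7, then

\alpha_k + \beta_k \le \min\{\hat{c}_{lm} : l \in S_1, m \in S_k\} = \min\{4, 6, 5, 7\} = 4 > 0,

so the multi-commodity constraints associated with k admit a positive multiplier and the dual bound can be improved by up to 4.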



To summarize the optimization problem associated with the definition of the best values for α_k, β_k and the γ variables, let 𝒮 be the collection of subtours associated with x̄ and, using (50), let γ_S denote the common value of γ_jk for the nodes of subtour S ∈ 𝒮:

\gamma_S := \gamma_{jk}, \quad j \in S, \ S \in \mathcal{S}. \quad (61)
\beta := \gamma_{S_k}. \quad (62)
\alpha := -\gamma_{S_1}. \quad (63)
\delta(S, S') := \min\{\hat{c}_{lm} : l \in S, m \in S'\}, \quad S, S' \in \mathcal{S}. \quad (64)

From (47) and (62) we have that γ_jk = β, ∀ j ∈ S_k\{k}; similarly, from (49) and (63) we get

\gamma_{jk} = -\alpha, \quad \forall j \in S_1 \setminus \{1\}. \quad (65)

The problem to identify violated multi-commodity inequalities with positive multipliers is given by (66)-(70):

\max\ \alpha + \beta \quad (66)

subject to

\gamma_S + \alpha \le \delta(S_1, S), \quad S \in \mathcal{S} \setminus \{S_1\} \quad (67)
\beta - \gamma_S \le \delta(S, S_k), \quad S \in \mathcal{S} \setminus \{S_k\} \quad (68)
\gamma_{S'} - \gamma_S \le \delta(S, S'), \quad S, S' \in \mathcal{S} \setminus \{S_1, S_k\}, \ S \ne S' \quad (69)
\alpha, \beta \ge 0. \quad (70)

In (69), δ(S, S') is defined as in (64), where l and m are nodes in S and S', respectively, with l ≠ m, and 𝒮 is the collection of subtours associated with x̄. If the optimal value of the solution (α*, β*, γ*) of (66)-(70) is greater than zero, the set of constraints (28)-(31) associated with k is violated by x̄ and dualized with positive multipliers. The reduced costs can be updated according to (71):

\hat{c}_{rq} := \hat{c}_{rq} - (\gamma^*_{S_q} - \gamma^*_{S_r}), \quad (r, q) \in A, \quad (71)

where S_r and S_q denote the subtours containing r and q.

The MC-RCP procedure consists in iteratively evaluating all the subtours through the nodes k ∈ V \ S_1. According to (36), at the end of the procedure an improved dual bound (LB) for the ATSP is given by (72):

LB = v(AP) + \sum_{k \in T'} (\alpha^*_k + \beta^*_k). \quad (72)

The pseudocode of the MC-RCP procedure is shown in Fig. 2.

1  Input: ATSP instance (G = (V, A), |V| = n, |A| = m, c_ij for all (i,j) in A)
2  Output: Lower bound, multi-commodity violated constraints
3  Begin
4    Solve the AP associated to the ATSP
5    LB := v(AP)
6    Identify the set of subtours 𝒮 associated to x̄
7    If |𝒮| = 1, stop: x̄ is an optimal solution
8    S1 is the subtour containing vertex 1
9    T' := ∅
10   N := ∅
11   W := V \ S1
12   While W ≠ ∅
13   Begin
14     Select k ∈ W \ (S1 ∪ T' ∪ N)
15     If min{ĉ_lm : l ∈ S1, m ∈ Sk} = 0, then N := N ∪ {k}
16     Else
17     Begin
18       Solve problem (66)-(70)
19       If α* + β* = 0 then N := N ∪ {k}
20       Else
21       Begin
22         For (r,q) ∈ A do ĉ_rq := ĉ_rq − (γ*_{Sq} − γ*_{Sr}), S_r, S_q ∈ 𝒮
23         LB := LB + α* + β*
24         T' := T' ∪ {k}
25         N := ∅
26       End
27     End
28     W := W \ {k}
29   End
30   x̄ is a solution to the AP, LB is a lower bound for the ATSP, and the constraints in T' were dualized to find LB
31 End
Figure 2. Pseudocode of the MC-RCP procedure.
Source: The authors
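A compact C++ sketch of the loop in Fig. 2 follows. It is a simplification of ours, run on hypothetical data: the improvement for commodity k is taken directly as delta = min{ĉ_lm : l ∈ S1, m ∈ Sk} from (60), instead of solving the linear program (66)-(70), and the reduced-cost update only touches the crossing arcs:

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    double LB = 100.0;                        // v(AP), hypothetical
    std::vector<std::vector<double>> rc = {   // reduced costs c_hat
        {0, 0, 4, 6},
        {0, 0, 5, 7},
        {4, 5, 0, 0},
        {6, 7, 0, 0}};
    std::vector<int> S1 = {0, 1};             // subtour containing node 1
    std::vector<std::vector<int>> others = {{2, 3}};  // remaining subtours
    for (const auto& Sk : others) {
        double delta = 1e100;
        for (int l : S1)
            for (int m : Sk) delta = std::min(delta, rc[l][m]);
        if (delta <= 0.0) continue;           // no positive multiplier
        LB += delta;                          // the dual bound improves
        for (int l : S1)
            for (int m : Sk) rc[l][m] -= delta;  // update crossing c_hat
    }
    std::printf("improved LB = %.1f\n", LB);  // prints: improved LB = 104.0
    return 0;
}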



3. Computational Study

In this section, we present results of the computational implementation of the relax and cut procedure considering the cut set constraints (procedure CS-RCP, described in Section 2.1) and the multi-commodity constraints (procedure MC-RCP, described in Section 2.2). The multi-commodity formulation for the ATSP, model MC-ATSP, was written in the syntax of the AMPL modeling language. The CS-RCP and MC-RCP algorithms were coded in the C++ programming language, using the CPLEX 12.5 libraries, and run on a machine with an Intel Core i5 at 2.67 GHz with 3.80 GB of RAM, under the Windows 7 Ultimate operating system. A maximum of 30 minutes (1800 seconds) of CPU time was allowed in each run. Thirteen instances of the TSPLIB library [15] were used in the tests (br17, ftv33, ftv35, ftv38, p43, ftv44, ftv47, ft53, ftv55, ftv64, ft70, ftv70, ftv170), ranging from 17 to 171 nodes. The instance sizes and the corresponding optimal solution values are given in [15].

As described in Section 2, the relax and cut algorithms start from a solution of the Assignment Problem (AP) and search for violated valid inequalities that admit positive multipliers. For the cut set subtour elimination constraints (procedure CS-RCP), identifying valid inequalities is related to the existence of a reachable set of a node in the admissible graph [10]. As for the multi-commodity subtour elimination constraints (procedure MC-RCP), the search is undertaken through all the subtours that do not include node 1 (see Fig. 2).

At first, the instances of the AP and MC-ATSP models, as well as the linear relaxation of the model MC-ATSP (RMC-ATSP), were solved by the solver CPLEX using the default parameters, except for the instances of the relaxation RMC-ATSP, which were solved by the barrier method, as suggested in [14]. It was not possible to find a feasible solution for 6 instances (p43, ftv44, ftv55, ft70, ftv70 and ftv170) of the model MC-ATSP in 30 minutes (the allowed execution time). Also, the solver ran out of memory when solving instance ftv170 of the MC-ATSP model. The linear relaxation of the multi-commodity formulation is indeed very strong: it provided an average gap of 1.03%. The gap associated with the relaxation AP of the instances br17 and p43 is very high, 100% and 97% respectively, which influences the results of the relax and cut algorithms, as will be discussed next. To compare two bounds κ and ρ we compute their relative value ν(κ, ρ) as in (73):

\nu(\kappa, \rho) = 100\,\frac{\kappa - \rho}{\kappa}. \quad (73)

Table 1 presents comparisons between the RMC-ATSP relaxation (RMC) and the MC-RCP procedure (MC), presenting the obtained dual bounds, the computational time, in seconds, and the relative value of the RMC and MC dual bounds, for each instance. For most instances (9 out of 13), the MC-RCP procedure provided dual bounds with a relative value of less than 10% of the bound given by the linear relaxation of the MC-ATSP model. For all but one instance, the average CPU time taken to solve the linear relaxation RMC-ATSP was 258.71 seconds, while the average time to run the MC-RCP procedure was 4.96 seconds. The linear relaxation of the ftv170 instance of model MC-ATSP could not be solved in the allowed execution time for the solver (1800 seconds). Taking the optimal value for the instance ftv170 given in the TSPLIB, the MC-RCP provided a dual bound with a gap of 4.39% in 16.53 seconds.

Table 1. Linear Relaxation and Relax and Cut results.
Instance | RMC bound | MC bound  | RMC time (s) | MC time (s) | ν(RMC, MC) (%)
br17     | 39.00     | 14.00     | 0.39         | 0.56        | 64.10
ftv33    | 1286.00   | 1185.00   | 9.67         | 1.38        | 7.85
ftv35    | 1457.33   | 1384.00   | 16.99        | 1.76        | 5.03
ftv38    | 1514.33   | 1441.00   | 25.40        | 1.92        | 4.84
p43      | 5611.00   | 501.00    | 55.16        | 12.26       | 91.07
ftv44    | 1584.87   | 1530.00   | 49.17        | 3.67        | 3.46
ftv47    | 1748.61   | 1708.00   | 111.49       | 8.77        | 2.32
ft53     | 6905.00   | 5979.00   | 183.91       | 2.10        | 13.41
ftv55    | 1584.00   | 1459.00   | 267.93       | 4.82        | 7.89
ftv64    | 1807.50   | 1756.00   | 680.41       | 6.77        | 2.85
ft70     | 38652.50  | 38194.00  | 728.10       | 6.43        | 1.19
ftv70    | 1909.00   | 1794.00   | 975.85       | 9.14        | 6.02
ftv170   | *         | 2634.00   | *            | 16.53       | *
* No value found within the allowed execution time (1800 sec.)
Source: The authors

We also compared the bounds given by the two relax and cut procedures presented in this work. Table 2 shows, for each instance, the dual bounds associated with the procedures CS-RCP and MC-RCP (CS and MC, respectively) and the corresponding CPU time (in seconds). For each bound (CS and MC), we also show the number of violated inequalities that are dualized in each procedure (cuts), the integer gap, and the relative value ν(CS, MC).

Table 2. Procedures CS-RCP and MC-RCP - computational results.
Instance | CS bound | MC bound | CS time (s) | MC time (s) | CS cuts | MC cuts | CS gap (%) | MC gap (%) | ν(CS, MC) (%)
br17     | 37.00    | 14.00    | 0.96        | 0.56        | 11      | 3       | 5.13       | 64.10      | 62.16
ftv33    | 1204.00  | 1185.00  | 1.10        | 1.38        | 2       | 0       | 6.38       | 7.85       | 1.58
ftv35    | 1398.00  | 1384.00  | 1.48        | 1.76        | 3       | 1       | 5.09       | 6.04       | 1.00
ftv38    | 1465.00  | 1441.00  | 1.76        | 1.92        | 3       | 1       | 4.25       | 5.82       | 1.64
p43      | 5583.00  | 501.00   | 11.16       | 12.68       | 24      | 8       | 0.68       | 91.09      | 91.02
ftv44    | 1538.00  | 1530.00  | 2.50        | 3.67        | 4       | 2       | 4.65       | 5.15       | 0.52
ftv47    | 1664.00  | 1708.00  | 2.31        | 8.77        | 3       | 7       | 6.31       | 3.83       | -2.64
ft53     | 6693.00  | 5979.00  | 7.81        | 2.10        | 10      | 1       | 3.07       | 13.41      | 10.67
ftv55    | 1451.00  | 1459.00  | 2.98        | 4.82        | 3       | 4       | 9.76       | 9.27       | -0.55
ftv64    | 1735.00  | 1756.00  | 4.18        | 6.77        | 3       | 5       | 5.66       | 4.51       | -1.21
ft70     | 38311.00 | 38194.00 | 8.36        | 6.43        | 5       | 1       | 0.94       | 1.24       | 0.31
ftv70    | 1773.00  | 1794.00  | 3.77        | 9.14        | 2       | 4       | 9.08       | 8.00       | -1.18
ftv170   | 2634.00  | 2634.00  | 16.50       | 16.53       | 1       | 1       | 4.39       | 4.39       | 0.00
Source: The authors
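As a quick consistency check of (73) against the first row of Table 2:

\nu(CS, MC) = 100 \times \frac{37 - 14}{37} \approx 62.16\%,

which is the value reported in the last column.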



The CS-RCP and MC-RCP procedures gave similar results. In 10 out of the 13 instances, the relative value of the associated bounds was no greater than 3%. However, the particularities of some instances resulted in big differences in the results of the two procedures. For the instance br17, v(AP) = 0, and the bound given by the MC-RCP procedure reduced the AP gap from 100% to 64.10%, whereas the CS-RCP procedure reduced it to 5.13%. The number of valid inequalities that can be identified in the CS-RCP procedure is higher than for the MC-RCP procedure. However, the proportion of cuts generated in relation to the possible total is very close in both algorithms: the CS-RCP procedure identified 11 cuts out of 20, a ratio of 0.55, while the MC-RCP identified 3 out of 5, a ratio of 0.6. It is noteworthy that for the instance ftv170 the dual bound and the number of cuts were the same in both procedures.

The work of Rocha, Fernandes and Soares [16] features an application of the Volume Algorithm to solve the dual Lagrangean problem associated with the formulation MC-ATSP. They test the procedure on several TSPLIB instances, which include the ones used in the present work. The number of iterations varies from instance to instance, ranging from 1000 to 10000, depending on the size of the instance. They do not report dual bounds for the ftv170 instance, and therefore this instance is not included in the comparison. The average relative value of the best bounds obtained with the Volume Algorithm (κ) in comparison with the MC-RCP procedure (ρ) is 17%, and the standard deviation is 28.75%. Considering the number of iterations required, the relax and cut procedure proposed in this work provides good dual bounds with a reduced computational effort. The bounds obtained with the MC-RCP and the associated relaxed solution could be used as a starting point for the Volume Algorithm. This might improve the performance of the algorithm in terms of reducing the total number of iterations and the execution time.

4. Concluding Remarks

In this work, we studied two procedures to obtain dual bounds for the Asymmetric Traveling Salesman Problem. The procedures are based on the relax and cut method that, starting from the optimal solution to the assignment problem, identifies violated inequalities and dualizes them to build a Lagrangean function. Two classes of valid inequalities are used: the CS-RCP procedure is based on the cut set subtour elimination constraints, and the MC-RCP procedure is based on the multi-commodity subtour elimination constraints. The procedures were tested on a set of instances from the TSPLIB.

The computational results obtained with both procedures are encouraging. The quality of the bounds given by the two algorithms is similar. The CPU times required to compute the dual bounds with both the CS-RCP and the MC-RCP are small when compared to the time necessary to obtain dual bounds by solving the linear relaxation of the MC-ATSP formulation. The formulations CS-ATSP and MC-ATSP are equivalent and give the same linear relaxation values. Still, a combination of the two types of subtour elimination constraints can be useful in a relax and cut procedure. They could be used sequentially, since the valid inequalities identified are distinct. The matrix of reduced costs resulting from the CS-RCP can be used as a starting point for the MC-RCP. The implementation of the CS-RCP was important for two reasons: it served as a benchmark for the MC-RCP, and it updated the work of Balas and Christofides [2], since it was tested with instances of the TSPLIB while the original work was tested only with random data. The dual bounds obtained with the relax and cut procedures presented here can be useful to speed up the solution of large instances of the ATSP by the implicit enumeration methods present in commercial and non-commercial solvers.

Acknowledgements

This research was partly supported by the Brazilian research agencies Capes, CNPq (306194/2012-0) and Fapesp (2013/07375-0, 2010/10133-0). It also received partial support from the RFBR (12-01-00893) and CONACYT (167019). Special thanks are due to Michelli Maldonado, who collaborated in the early stages of this research.

References

[1] Álvarez, P.J., Calderón, C.A.G. and Calderón, G.G., Route optimization of urban public transportation. DYNA, 88 (180), pp. 41-49, 2013.
[2] Balas, E. and Christofides, N., A restricted Lagrangean approach to the traveling salesman problem. Mathematical Programming, 21, pp. 19-46, 1981. DOI: 10.1007/BF01584228
[3] Dantzig, G., Fulkerson, R. and Johnson, S., Solution of a large-scale traveling-salesman problem. Operations Research, 2, pp. 393-410, 1954. DOI: 10.1287/opre.2.4.393
[4] De Souza, C.C. and Cavalcante, V.F., Exact algorithms for the vertex separator problem in graphs. Networks, 57 (3), pp. 212-230, 2011. DOI: 10.1002/net.20420
[5] Escudero, L.F., Guignard, M. and Malik, K., A Lagrangian relax-and-cut approach for the sequential ordering problem with precedence relationships. Annals of Operations Research, 50, pp. 219-237, 1994. DOI: 10.1007/BF02085641
[6] Fischetti, M. and Salvagnin, D., A relax-and-cut framework for Gomory mixed-integer cuts. Mathematical Programming Computation, 3 (2), pp. 79-102, 2011. DOI: 10.1007/s12532-011-0024-x
[7] Gavish, B., Augmented Lagrangean based algorithms for centralized network design. IEEE Transactions on Communications, 33, pp. 1247-1257, 1985. DOI: 10.1109/TCOM.1985.1096250
[8] Guignard, M., Lagrangean relaxation. Top, 11 (2), pp. 151-228, 2003.
[9] Kawashima, M.S., Relax and cut: Limitantes duais para o problema do caixeiro viajante. MSc. Thesis, UNESP - Universidade Estadual Paulista, São José do Rio Preto, Brasil, 2014.
[10] Kawashima, M.S., Rangel, S., Litvinchev, I. and Infante, L., Relax and cut: Dual bounds for the traveling salesman problem. Memorias del CILOG 2014, San Luis Potosí, México: Mexican Logistics & Supply Chain Association, 2014.
[11] Lucena, A., Steiner problem in graphs: Lagrangean relaxation and cutting planes. COAL Bulletin, 21, pp. 2-8, 1993.
[12] Lucena, A., Non-delayed relax and cut algorithms. Annals of Operations Research, 140, pp. 375-410, 2005. DOI: 10.1007/s10479-005-3977-1
[13] Maldonado, M., Rangel, S. and Ferreira, D., A study of different subsequence elimination strategies for the soft drink production planning. Journal of Applied Research and Technology, 12, pp. 631-641, 2014. DOI: 10.1016/S1665-6423(14)70080-X
[14] Öncan, T., Altinel, I.K. and Laporte, G., A comparative analysis of several asymmetric traveling salesman problem formulations. Computers & Operations Research, 36, pp. 637-654, 2009. DOI: 10.1016/j.cor.2007.11.008


[15] Reinelt, G., TSPLIB - A traveling salesman problem library. ORSA Journal on Computing, 3, pp. 376-384, 1991. Available at: https://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/, last visited 11/04/2014.
[16] Rocha, A.N., Fernandes, E.M.G.P. and Soares, J., Aplicação do algoritmo volumétrico à resolução aproximada e exacta do problema do caixeiro viajante assimétrico. Investigação Operacional, 25 (2), pp. 277-294, 2005.
[17] Serna, M.D.A., Jaimes, W.A. and Cortes, J.A.Z., Commodities distribution using alternative types of transport. A study in the Colombian bread SME's. DYNA, 77 (163), pp. 222-233, 2010.
[18] Sherali, H.D. and Smith, J.C., Dynamic Lagrangean dual and reduced RLT constructs for solving 0-1 mixed-integer programs. Top, 20 (1), pp. 173-189, 2012. DOI: 10.1007/s11750-011-0199-3

M.S. Kawashima completed his MSc. degree at the Universidade Estadual Paulista (UNESP), Brazil. His research focuses on the study of large-scale combinatorial optimization problems.


S. Rangel is an associate professor at UNESP, Brazil. She completed her PhD. at Brunel University. Since then she has been mainly involved in the study of large-scale combinatorial optimization problems. The focus has been on building efficient models for practical applications, and on developing solution techniques based on hybrid algorithms that combine several methods, such as partial enumeration, cutting planes, heuristics, pre-processing and aggregation/decomposition.



I. Litvinchev is a professor at UANL, Mexico, and a Head of Department at the Computing Centre, Russian Academy of Sciences (CCRAS), Moscow. He completed his MSc. degree at the Moscow Institute of Physics and Technology (Fizteh), and his PhD. and Dr. Sci. (Habilitation) degrees in systems modeling and optimization at CCRAS. His research focuses on large-scale system modeling, optimization, and control with applications to logistics and supply chain management. Dr. Litvinchev is a member of the Russian Academy of Natural Sciences and the Mexican Academy of Sciences.


L. Infante completed his MSc. degree at the Universidad Autónoma de Nuevo León (UANL), México. His research focuses on the study of large-scale combinatorial optimization problems.




A genetic algorithm to solve a three-echelon capacitated location problem for a distribution center within a solid waste management system in the northern region of Veracruz, Mexico

María del Rosario Pérez-Salazar a, Nicolás Francisco Mateo-Díaz b, Rogelio García-Rodríguez c, Carlos Eusebio Mar-Orozco d & Lidilia Cruz-Rivero e

a División de Posgrado e Investigación, Instituto Tecnológico Superior de Tantoyuca, Veracruz, México, rosario.perez.salazar@gmail.com
b División de Ingeniería en Gestión Empresarial, Instituto Tecnológico Superior de Tantoyuca, Veracruz, México, pacomatthew06@gmail.com
c División de Ingeniería en Sistemas Computacionales, Instituto Tecnológico Superior de Tantoyuca, Veracruz, México, rgarciardz@gmail.com
d División de Posgrado e Investigación, Instituto Tecnológico Superior de Tantoyuca, Veracruz, México, carlos.mar.orozco@gmail.com
e División de Posgrado e Investigación, Instituto Tecnológico Superior de Tantoyuca, Veracruz, México, lilirivero@gmail.com

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract
Mexico is the world's third largest consumer of Polyethylene Terephthalate (PET), preceded only by the United States and China. PET is commonly used in plastic containers such as beverage bottles and food packaging. It can be argued that the main problem regarding the pollution generated by PET waste lies in the lack of appropriate solid waste management. The decision regarding facility location is the central issue in solid waste management. A mixed integer linear programming model of the capacitated facility location problem is proposed, and a genetic algorithm is then designed to optimize the model. The problem is described as follows: given the quantities of PET generated in the northern region of Veracruz, Mexico, considering five cities, each as a single generation source, a collection center has to be selected among a set of pre-identified locations in the town of Tempoal, Veracruz, in order to serve a set of demand points in the re-use market; demands are assumed to be uncertain. The aim is to minimize the system's overall cost.
Keywords: genetic algorithm; solid waste management; capacitated location problem.

Algoritmo genético para resolver el problema de localización de instalaciones capacitado en una cadena de tres eslabones para un centro de distribución dentro de un sistema de gestión de residuos sólidos en la región norte de Veracruz, México

Resumen
México es el tercer consumidor mundial de Tereftalato de Polietileno (PET), sólo después de Estados Unidos y China. El PET es utilizado comúnmente para fabricar recipientes de plástico tales como botellas para bebidas y empaques para alimentos. Se puede argumentar que el principal problema con respecto a la contaminación generada por los residuos de PET radica en una inadecuada gestión de residuos sólidos. Proponemos un modelo de programación entera mixta del problema de localización de instalaciones capacitado y luego un algoritmo genético es desarrollado para optimizar este modelo. El problema se describe de la siguiente manera: dada la cantidad de PET generado en la región norte de Veracruz, México, considerando cinco ciudades y cada una como una fuente de generación única, un centro de recolección tiene que ser seleccionado entre un conjunto de lugares previamente determinados en la ciudad de Tempoal, Veracruz, con el fin de servir a un conjunto de puntos de demanda en el mercado de re-uso; se asume que las demandas son parámetros con incertidumbre. El objetivo es minimizar el costo total del sistema.
Palabras clave: algoritmo genético; gestión de residuos sólidos; problema de localización capacitado.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 51-57. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51146



1. Deciding the location of solid waste system facilities

Decisions regarding the location of facilities can be considered a strategic issue with an inherent risk for almost every company. The problem of locating facilities establishes alternatives in order to evaluate the conditions for the proper management of transportation and inventory levels, considering the company's ability to manufacture and market its products. The capacitated facility location problem (CFLP) is a well-known variant of the FLP and has been studied by several authors. In the scientific literature we can find multiple examples regarding the CFLP: discrete [24] and continuous [4], multi-facility [6,30], multi-echelon [13,28], single source [3,23] and multi-source [1], multi-commodity [24], and dynamic [9,27]. The modeling process required by facility location decisions has to consider the fluctuation and inherent stochastic nature of the parameters involved in the problem analysis [12,24]. Costs, demands, travel times, supplies, and other inputs to classical facility location models may be highly uncertain; these input data are based on forecasts, which results in taking into account uncertain parameters whose values are governed by probability distributions known by the decision maker, and which are, hence, likely to be more realistic. Otherwise, if input data are assumed to be known with certainty, deterministic models are considered [24,25]. The random parameters can be either continuous, in which case they are generally assumed to be statistically independent of one another, or described by discrete scenarios, each with a fixed probability [17,20,24,29,31].

There are different methods to find the optimal solution to the problem regarding the location of facilities within network design, such as multi-criteria programming, the branch and bound algorithm, and dynamic programming, among others, mixed integer linear programming (MIP) being one of the most popular methods used in commercial location models [2]. Linear programming based techniques have been applied successfully to uncapacitated facility location problems to obtain constant factor approximation algorithms; however, they have not been as successful when dealing with the capacitated FLP. Continuing with this analysis of the type of solution methodology that has been used for solving the FLP, many variants of this problem have been studied extensively from the perspective of approximation algorithms, one of the most recent proposals being heuristics [1,4,8,22,28]. Regarding multicriteria analysis and optimization, a combined methodology based on multicriteria decision analysis and optimization has been proposed for the distribution centers location problem; this model provides a set of relevant quantitative and qualitative attributes used for the decision of locating distribution centers [32].

A waste is something that has no value of use. Solid waste (SW), commonly known as trash or garbage, consists of everyday items such as product packaging, grass clippings, furniture, clothing, bottles, food scraps, newspapers, appliances, paint and batteries [10]. A solid waste collection system is concerned with the collection of waste from sources, the routing of trucks within the region, the frequency of collection, crew size, truck sizes, the number of operating trucks, the transportation of collected waste to a transfer station, an intermediate processing facility or a landfill, and a host of other problems [14]. Within solid waste management (SWM), we can identify some key activities, such as the selection of the number and locations of transfer stations, intermediate processing facilities and landfill sites, their capacities, capacity expansion strategies, the routing of the waste across point sources (districts or counties within the region), and the routing of the waste through the facilities to ultimate disposal on a macroscopic level. Regarding these two routing choices, we recognize two perspectives in SWM: regional and by district [14]. Limited suitable land area and resources, growing public opposition, and the deterioration of environmental conditions are invariably the main constraints for the proper functioning of an SWM system. In this context, SWM has often been viewed from the narrow perspective of counties or districts rather than from a regional perspective [18]. Some applications and examples have been observed in the literature [10].

SWM can be divided into four distinct phases [12]: pre-collection, collection, transportation and treatment. Pre-collection is the proper storage, handling, sorting and presentation of waste suitable for collection and transfer conditions. This phase is essential for the accurate functioning of the following stages. The collection and transportation stages are often the most costly and hence require careful planning; fifty to 70% of the cost of transport and disposal of solid waste is spent on the collection phase [19]. Waste is compacted and transported directly to the points of treatment or to transfer plants. Treatment includes disposal operations or the use of the materials contained in the waste.

One of the main issues in SWM involves facility capacity location, where a related optimization analysis will typically require the use of integer variables to carry out the decision process of locating a particular facility or selecting development or expansion options. Thus, MIP techniques are useful for this purpose [15]. Uncertainty is an important issue to discuss in SWM, primarily in waste generation and economic criteria. Waste generation is a function of population distribution and growth and of per capita waste generation rates, while economic estimates are a function of the technology used, economies of scale, land availability, and local labor and equipment prices. Deterministic and stochastic mathematical programming models have been applied to SWM [18]. Some of the approaches concerning deterministic models are linear programming, MIP, dynamic programming, and multi-objective programming; in contrast, techniques used for stochastic models involve probability, fuzzy and grey system theory [5]. A probabilistic approach is also presented in [33], where an algorithm for the probabilistic analysis of unbalanced three-phase weakly-meshed distribution systems uses the Two-Point Estimate Method for calculating the probabilistic behavior of the system's random variables. Regarding MIP techniques incorporating stochastic parameters, an MSW capacity planning problem formulation has been proposed to be solved in three main stages [18]: first, the formulation of a MIP model for the given MSW management planning problem, providing the optimal solutions as bases for decision making; then, a modeling for



generating alternative methods was used for generating a near-optimal alternative; and, finally, a simulation technique was used for incorporating random waste generation in order to compare the optimal solutions and the simulation results.

2. Model formulation

The model formulation for the multiple-source, capacitated facility location problem is described as follows: given a number of sources that generate quantities of SW, a collection center has to be selected among a set of locations, in order to serve a set of demand points. The objective is to locate the collection center that minimizes the fixed and variable costs of handling and transporting products through the selected network. The indices, parameters and variables of this model are shown in Table 1, Table 2 and Table 3.

Table 1. Model sets
Set (index) | Description
I (i) | Sources
J (j) | Collection centers
K (k) | Customers
Source: The authors

Table 2. Model parameters
Parameter | Description
s_i | Amount of solid waste supplied by the source i
d_k | Demand of the customer k
m_j | Minimum annual capacity for the collection center j
M_j | Maximum annual capacity for the collection center j
f_j | Fixed part of the annual operating cost for the collection center j
g_j | Variable unit cost of activity for the collection center j
c_ijk | Cost of processing and transporting a unit of solid waste from the source i through the collection center j to the customer k
Source: The authors

Table 3. Model variables
Variable | Description
x_ijk | Amount of units of solid waste from the source i through the collection center j to the customer k
y_jk | Variable equal to 1 if the collection center j serves the customer k, and 0 otherwise
z_j | Variable equal to 1 if the collection center j is open, and 0 otherwise
Source: The authors

The mathematical formulation is as follows:

\min \sum_{j \in J} f_j z_j + \sum_{j \in J} g_j \sum_{i \in I} \sum_{k \in K} x_{ijk} + \sum_{i \in I} \sum_{j \in J} \sum_{k \in K} c_{ijk} x_{ijk} \quad (1)

subject to

\sum_{j \in J} \sum_{k \in K} x_{ijk} \le s_i, \quad \forall i \in I \quad (2)
\sum_{i \in I} \sum_{j \in J} x_{ijk} \ge d_k, \quad \forall k \in K \quad (3)
\sum_{j \in J} y_{jk} = 1, \quad \forall k \in K \quad (4)
m_j z_j \le \sum_{i \in I} \sum_{k \in K} x_{ijk} \le M_j z_j, \quad \forall j \in J \quad (5)
x_{ijk} \ge 0, \quad \forall i, j, k \quad (6)
y_{jk}, z_j \in \{0, 1\}. \quad (7)

The objective function (1) aims to minimize fixed costs and variable costs. The supply constraint states that the available supply cannot be exceeded (2), and the demand of all demand points (3) must be satisfied. Regarding the operability of the collection center, each customer must be served by only one collection center (4). Also, for each collection center there should be a minimal activity in order to begin operation, as well as a maximum activity, set by the established capacity (5).
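A direct way to read the model is to evaluate a candidate solution against (1)-(7). The short C++ sketch below is ours, not the authors' code; the symbols follow Tables 1-3, the data in main() is purely illustrative, and only the objective (1) and the supply and capacity checks (2) and (5) are shown (constraints (3)-(4) would be verified analogously):

#include <cstdio>
#include <vector>

struct Data {
    std::vector<double> s, d;        // s_i, d_k
    std::vector<double> f, g, m, M;  // f_j, g_j, m_j, M_j
    std::vector<std::vector<std::vector<double>>> c;  // c_ijk
};

// Objective (1) for a candidate (z, x), with feasibility of (2) and (5).
double evaluate(const Data& in, const std::vector<int>& z,
                const std::vector<std::vector<std::vector<double>>>& x,
                bool& feasible) {
    size_t nI = in.s.size(), nJ = in.f.size(), nK = in.d.size();
    feasible = true;
    double total = 0.0;
    for (size_t j = 0; j < nJ; ++j) {
        double act = 0.0;
        for (size_t i = 0; i < nI; ++i)
            for (size_t k = 0; k < nK; ++k) {
                act += x[i][j][k];
                total += (in.g[j] + in.c[i][j][k]) * x[i][j][k];
            }
        total += in.f[j] * z[j];                                         // (1)
        if (act < in.m[j] * z[j] || act > in.M[j] * z[j]) feasible = false;  // (5)
    }
    for (size_t i = 0; i < nI; ++i) {                                    // (2)
        double out = 0.0;
        for (size_t j = 0; j < nJ; ++j)
            for (size_t k = 0; k < nK; ++k) out += x[i][j][k];
        if (out > in.s[i]) feasible = false;
    }
    return total;
}

int main() {
    // one source, one center, one customer; illustrative numbers only
    Data in{{80}, {60}, {168000}, {381.14}, {10}, {7000}, {{{5.0}}}};
    std::vector<int> z = {1};
    std::vector<std::vector<std::vector<double>>> x = {{{60.0}}};
    bool ok;
    std::printf("cost = %.2f, feasible = %d\n", evaluate(in, z, x, ok), ok);
    return 0;
}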



3. Situation description

Mexico is the world's third largest consumer of Polyethylene Terephthalate (PET), preceded only by the United States and China. PET is commonly used in plastic containers such as beverage bottles and food packaging. In Mexico, every person uses an average of 225 bottles per year; additionally, around 800 thousand tons of PET are consumed per year, with an annual growth of 13%. Due to the problem of SW and PET contamination, new public policies have been created in the country; for example, in Veracruz, the program for the prevention and integrated management of waste combines public policy with good management. Approximately 4451 tons of SW are collected on a daily basis in the state of Veracruz, which represents 5% of the national collection. The country houses 241 collection centers, and in Veracruz there are only 5 towns with such centers. To supply the demand for PET bottles in Mexico, there are 5 manufacturing plants and about 190 bottling plants, serving nearly one million outlets [4]. The generation of SW has increased over the past few years, growing by 25% between 2003 and 2011. In 2011, Veracruz was the fourth largest producer of SW nationwide, with 5.5%, just after Estado de México (16%), Distrito Federal (12%) and Jalisco (7%), and followed by Nuevo León (5%).

For this work, a basic generic supply chain network is considered. The source echelon is represented by five towns, the next echelon is denoted by the three pre-determined locations for the collection center selection, and the customer echelon consists of three identified demand points. Fig. 1 depicts the three-echelon supply chain network.

Figure 1. A three echelon supply chain network.
Source: The authors

Due to the fact that the evaluation zone for this project is in the north of Veracruz, specifically in the town of Tempoal, we considered five towns as possible sources: El Higo (S1), Tantoyuca (S2), Platón Sánchez (S3), Huejutla (S4) and Tempoal (S5). The production per town is 4000, 18300, 4400, 32920 and 7700 tons per year, respectively. From these amounts, only 2% of each town's total corresponds to PET and other plastics [4]; hence, the amounts to be considered for the parameter "amount of solid waste supplied by each source" are, for this case, 80, 366, 88, 658 and 154 tons per year, respectively. These amounts increase by 13% per year [3]; thus, the increase was calculated over a period of 5 years.

To determine the cost of transportation from a source to the collection center, the distance between the two points is determined, then divided by the fuel efficiency of the vehicle to be used,¹ then multiplied by the current cost of gasoline ($12.9 per liter), and finally multiplied by 2, which represents the round trip of the vehicle from alternative 1 (A1) to the sources of the waste; similarly, we determined the cost for each source (S1, S2, S3, S4 and S5) and alternative (A1, A2 and A3). Considering an increase in the price of gasoline of $0.11 per month, the transportation costs from the sources to the collection center and from the collection center to the customers were obtained annually, with a 5-year projection.

¹ For this analysis, a cargo van Nissan NP300, chassis cab version (petrol 4x2), with a load capacity of 1,480 kg and a fuel efficiency of 11.19 km/l, is considered. The price of gasoline has a monthly increase of 11 cents per liter, according to the Ministry of Finance and Public Credit in Mexico.

Unlike other models of storage or facility location, whereby the location is chosen according to the sources of supply, the model for this research takes into account three previously established potential sites; the nature of the problem is thus which of these three possible locations for the collection center to choose: two outside the town of Tempoal, Veracruz (A1 and A2), and a third in the town of Huejutla, Hidalgo (A3). To determine the fixed costs of each collection center, the following concepts were considered: initial investment, labor, electricity, water and telephone, whose amounts were A1 = $2,668,000, A2 = $3,173,000 and A3 = $4,176,000; of these amounts, the annual operating costs are A1 = $168,000, A2 = $173,000 and A3 = $176,000, and the capacities of the collection centers are 7,000, 17,000 and 34,000 tons per year, respectively. Thus, the variable costs per ton of material handled within the distribution center are $381.14, $186.64 and $122.82, respectively.

Three potential customers for the re-use of PET were established, taking into account the volume of purchase (demand), the cost of transportation, the cost of processing, storage capacity and fixed costs. For this case, a customer located in the city of Tampico (C1), another in Ciudad Madero (C2), and a third in Altamira (C3), in the state of Tamaulipas, were selected. The demand data were collected through a field study in which random customers were surveyed in the state of Tamaulipas, specifically in the towns of Tampico and Altamira. The data obtained were analyzed, and it was observed that the three sets of data follow a normal distribution. These parameters allowed us to obtain the annual growth rate, which was estimated by calculating the percentage change for each year and then averaging the absolute values when they showed a decrease in demand. In this way, the estimated demand growth rates per customer are: C1 = 13.99%, C2 = 13.72% and C3 = 14.27%; these values allowed us to calculate the increase in demand over a period of 5 years. The same procedure was used to determine the cost of transport from the sources to the collection center and from the collection center to the customers, in this case from A1, A2 and A3 to C1, C2 and C3.
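The transportation-cost rule described above lends itself to a short worked sketch. The C++ fragment below is ours; the 60 km distance is hypothetical, while the fuel efficiency and prices are the values quoted in the text:

#include <cstdio>

// Round-trip cost = 2 * (distance / fuel efficiency) * price per liter,
// with the gasoline price increasing by $0.11 per month.
int main() {
    const double kmPerLiter = 11.19;   // Nissan NP300, per the footnote
    const double distanceKm = 60.0;    // hypothetical source-to-center distance
    double pricePerLiter = 12.9;       // current gasoline price (MXN)
    for (int year = 1; year <= 5; ++year) {
        double trip = 2.0 * (distanceKm / kmPerLiter) * pricePerLiter;
        std::printf("year %d: %.2f MXN per round trip\n", year, trip);
        pricePerLiter += 0.11 * 12;    // monthly increase, projected per year
    }
    return 0;
}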



4. Genetic algorithm principles

Genetic algorithms (GAs) are mathematical optimization techniques that simulate a natural evolution process. GAs constitute one of the exhaustive searching techniques of artificial intelligence; they are stochastic algorithms whose search methods model some natural phenomena: genetic inheritance and the Darwinian strife for survival [27]. Their search procedure consists of maintaining a population of potential solutions while conducting a parallel investigation for non-dominated solutions [22]. Considering network strategy design, which contemplates the logistic chain network problem formulated by a MIP model, GAs have been applied as an alternative procedure for generating optimal or near-optimal solutions to location problems [7,16,26]. The GA implemented in this study uses quite common genetic operators. The proposed GA procedure implies the following steps:

Encoding of solutions. Solutions were encoded by dividing the chromosome, i.e., a complete set of coded variables, into two parts. The first part represents the continuous variables, represented by the trailers assignment percentage at six locations proposed in the case study. The second part of the chromosome represents the assignment of trailers (crates and cages) to planned destinations.

Initial population creation. The procedure for creating the initial population corresponds to the random sampling of each decision variable within its specific range of variation. This strategy guarantees a population varied enough to explore large zones of the search area.

Fitness evaluation. The criterion for each optimization model is to minimize the total cost. If one restriction is not satisfied, the solution is marked as not feasible.

Selection procedure. The selection procedure consists of the random sampling of pairs of individuals in the roulette wheel, one individual at a time. Individuals presenting higher fitness values have a larger probability of propagating to the next generation.

Crossover. Two selected parents are submitted to the crossover operator to produce two children. The crossover is carried out with an assigned probability, which is generally rather high. If a randomly sampled number is superior to the probability, the crossover is performed; otherwise, the children are copies of the parents. In the case of a discrete variable, it is copied from the parents. For continuous variables, the child takes the value of both parents: the first child takes 80% of parent one and 20% of parent two, and the second child takes 20% of parent one and 80% of parent two.

Mutation. Genetic mutation introduces diversity into the population by an occasional random replacement of individuals. The mutation is performed on the basis of an assigned probability. A random number is used to determine whether a new individual will be produced to substitute the one generated by crossover. The mutation procedure consists of replacing one of the decision variable values of an individual, while keeping the remaining variables unchanged. The replaced variable is randomly chosen, and its new value is calculated by randomly sampling within its specific range. The new value is determined by adjusting the old value; the adjustment is a low percentage of the old value and never causes an infeasible individual.

The encoded solution is an array of values that describes one solution to the problem. The first value represents the location selected for the collection center. The subsequent values indicate the amounts of units of SW from the sources to the customers. Table 4 shows one solution of the GA. The solution indicates that the best location for the collection center is A1. Moreover, the solution indicates that the amount of units of SW from source 1 to customer 1 is 10. It can be noted that transporting units of SW from S1 to C2 is more suitable than transporting units from S5 to C2.

Table 4. Solution representation
CC | S1-C1 | S1-C2 | S1-C3 | … | S5-C1 | S5-C2 | S5-C3
1  | 10    | 12    | 50    | … | 90    | 1     | 34
Source: The authors

Each instance of the problem consists of a file with all the information described in Section 3, such as the number of sources, the number of customers and the corresponding demand, and, in reference to the collection center locations, information such as the initial investment for construction, the annual operating cost, and the minimum and maximum capacity. Each file represents one year of the 5-year planning horizon. Initially, instances are considered without uncertainty: customer demand is known and does not change over time. Each instance of the problem was executed 30 times with the GA, and the solutions were compared with the results obtained by GAMS. GAMS (General Algebraic Modeling System) is a high-level software tool for modeling and solving optimization and mathematical programming problems. The comparative process was used to tune the GA. The population size and the percentage of mutation were set to 100 and 1%. A tournament selection process was carried out. Fig. 2 shows the code of the genetic algorithm.

1  Generate initial population
2  Calculate fitness (cost)
3  while generation_number < 1000 do
4    while pop_size < 100 do
5      select 2 individuals and choose the best
6      select 2 individuals and choose the best
7      if random() < prob_crossover then
8        randomly select a crossover point
9        crossover the 2 best solutions and generate 2 children
10       if children are feasible then
11         add children to new generation
12         pop_size += 2
13       end if
14     else
15       add best individuals to new generation
16       pop_size += 2
17     end if
18     if random() < prob_mutation then
19       randomly select an individual and mutate
20     end if
21   end while
22   update generation
23   generation_number++
24 end while
Figure 2. Genetic Algorithm code.
Source: The authors

The runtime in all cases was less than 1.5 seconds, and it was noted that the runtime was not relevant in finding the best solution. The second group of instances includes uncertainty regarding customer demand, represented by a normal distribution. In order to obtain the parameters of the probability distribution, goodness-of-fit tests were executed.

5. Results

Table 5 shows the execution results, along with the number of the collection center selected to be opened, for each instance. GAMS results indicate opening the collection center at A1, while the GA proposes opening the collection center at A2. The average cost for each instance is presented. Analyzing the outcome, we can see that in all 5 instances of the problem the GA gives better results than GAMS, decreasing the overall cost by up to 30%, thus validating the GA. Then, 30 iterations of the GA were executed for every year considered in the planning horizon, taking into account the uncertainty representing the variation of the 3 demand points, modeled by normal probability distributions (see Table 6), along with the number of the collection center selected to be opened and the average cost.
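The 80/20 blend for continuous genes and the small random adjustment used for mutation, as described above, can be written compactly. The C++ sketch below is ours (the chromosome layout and the ±5% adjustment range are illustrative assumptions, not the authors' code):

#include <cstdio>
#include <random>
#include <utility>
#include <vector>

struct Chromosome { int center; std::vector<double> flow; };

std::mt19937 rng(42);

// 80/20 arithmetic crossover on the continuous genes; the discrete gene
// (the selected collection center) is copied from each parent.
std::pair<Chromosome, Chromosome> crossover(const Chromosome& p1,
                                            const Chromosome& p2) {
    Chromosome c1 = p1, c2 = p2;
    for (size_t g = 0; g < p1.flow.size(); ++g) {
        c1.flow[g] = 0.8 * p1.flow[g] + 0.2 * p2.flow[g];
        c2.flow[g] = 0.2 * p1.flow[g] + 0.8 * p2.flow[g];
    }
    return {c1, c2};
}

// Mutation: pick one gene and adjust the old value by a low percentage.
void mutate(Chromosome& c) {
    std::uniform_int_distribution<size_t> pick(0, c.flow.size() - 1);
    std::uniform_real_distribution<double> adj(-0.05, 0.05);
    c.flow[pick(rng)] *= 1.0 + adj(rng);
}

int main() {
    Chromosome a{1, {10, 12, 50}}, b{2, {90, 1, 34}};
    auto [c1, c2] = crossover(a, b);
    mutate(c1);
    std::printf("child1 gene0 = %.2f\n", c1.flow[0]);  // near 26 = 0.8*10 + 0.2*90
    return 0;
}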



Table 5. Genetic algorithm validation results
Num | GAMS CCS | GAMS Cost ($) | GA CCS | GA Cost ($)  | Diff (%)
1   | 1        | 1,252,976.00  | 2      | 897,597.00   | 28
2   | 1        | 1,432,087.00  | 2      | 1,019,647.00 | 28
3   | 1        | 1,623,722.00  | 2      | 1,149,455.00 | 29
4   | 1        | 1,829,795.00  | 2      | 1,287,834.00 | 29
5   | 1        | 2,051,309.00  | 2      | 1,435,040.00 | 30
Num: number of the instance. CCS: collection center selection.
Source: The authors

6. Conclusions Within the Mexican environmental context, a priority issue is that of creating solid waste treatment facilities due to the considerable increase in waste in recent years. There are several techniques regarding decisions on the location of solid waste system facilities. The overall objective of the work presented in this paper was to develop a facility location problem to assist decision makers in the selection of a collection center among three pre-identified locations given by the local government. Considering the overall cost of the network, we identify a three-echelon, multi-source, capacitated facility location problem for the consideration of transferring PET waste generation in five towns in the northern region of Veracruz through the selected collection center to meet three demand points in the re-use market. The facility location problem was modeled using a mixed integer programming technique. The mathematical model is optimized through genetic algorithm. The optimization is performed subject to random demand to determine which collection center to open and the corresponding calculations of cost.


Table 6.
GA results considering random demand

        Normal distribution parameters of customer demand
        Customer 1      Customer 2      Customer 3
Year    µ       σ       µ       σ       µ       σ       CCS    Average cost (millions)
1       611     49      315     23      250     23      2      0.835
2       267     21      365     18      278     18      2      0.672
3       841     55      401     27      318     20      2      1.064
4       919     65      462     25      375     28      2      1.180
5       1049    96      514     21      424     32      2      1.319

Source: The authors

References

[1] Avella, P. and Boccia, M., A cutting plane algorithm for the capacitated facility location problem. Computational Optimization and Applications, 43 (1), pp. 39-65, 2009. DOI: 10.1007/s10589-007-9125-x
[2] Ballou, R.H., Logística: administración de la cadena de suministro. Pearson Educación, 2004.
[3] Cabrera, G., Cabrera, E., Soto, R., Rubio, L., Crawford, B. and Paredes, F., A hybrid approach using an artificial bee algorithm with mixed integer programming applied to a large-scale capacitated facility location problem. Mathematical Problems in Engineering, 2012. DOI: 10.1155/2012/954249
[4] Carlo, H.J., Aldarondo, F., Saavedra, P.M. and Torres, S.N., Capacitated continuous facility location problem with unknown number of facilities. Engineering Management Journal, 24 (3), pp. 24-31, 2012. DOI: 10.1080/10429247.2012.11431944
[5] Chi, G., Integrated planning of a solid waste management system in the City of Regina. Doctoral Thesis, University of Regina, Canadá, 1997.
[6] Chudak, F.A. and Williamson, D.P., Improved approximation algorithms for capacitated facility location problems. In: Integer Programming and Combinatorial Optimization, Springer Berlin Heidelberg, 1999, pp. 99-113. DOI: 10.1007/3-540-48777-8_8
[7] Correa, E.S., Steiner, M.T., Freitas, A.A. and Carnieri, C., A genetic algorithm for solving a capacitated p-median problem. Numerical Algorithms, 35 (2-4), pp. 373-388, 2004. DOI: 10.1023/B:NUMA.0000021767.42899.31
[8] Dias, J.M., Captivo, M.E. and Clímaco, J., Dynamic location problems with discrete expansion and reduction sizes of available capacities. Investigação Operacional, 27 (2), pp. 107-130, 2007.
[9] Dias, J.M., Captivo, M.E. and Clímaco, J., Capacitated dynamic location problems with opening, closure and reopening of facilities. IMA Journal of Management Mathematics, 17 (4), pp. 317-348, 2006. DOI: 10.1093/imaman/dpl003
[10] Erkut, E., Karagiannidis, A., Perkoulidis, G. and Tjandra, S.A., A multicriteria facility location model for municipal solid waste management in North Greece. European Journal of Operational Research, 187 (3), pp. 1402-1421, 2008. DOI: 10.1016/j.ejor.2006.09.021


Figure 3. Average tons shipped from each source to each customer through collection center number 2 Source: The authors

Also, the GA gives the average amount of units to be shipped from sources to customers for each year. Fig. 3 shows the average tons shipped from each source to each customer through the selected collection center, which represents the lower cost. We can infer important decisions from this graph; for example, that we should not ship units from S1 to C1, but that it is convenient to ship units from S5 to C1. The mathematical model is optimized through the genetic algorithm presented in this section. The optimization is performed subject to random demand to determine which collection center to open and the corresponding cost calculations.



[11] Farahani, R.Z., SteadieSeifi, M. and Asgari, N., Multiple criteria facility location problems: A survey. Applied Mathematical Modelling, 34 (7), pp. 1689-1709, 2010. DOI: 10.1016/j.apm.2009.10.005
[12] García, F.J.A. and Tena, C., Gestión de residuos sólidos urbanos: Análisis económico y políticas públicas. Cuadernos Económicos de ICE, (71), pp. 71-91, 2006.
[13] Gendron, B. and Semet, F., Formulations and relaxations for a multi-echelon capacitated location-distribution problem. Computers & Operations Research, 36 (5), pp. 1335-1355, 2009. DOI: 10.1016/j.cor.2008.02.009
[14] Gottinger, H.W., A computational model for solid waste management with application. European Journal of Operational Research, 35 (3), pp. 350-364, 1988. DOI: 10.1016/0377-2217(88)90225-1
[15] Huang, G.G.H., Huaicheng, G. and Guangming, Z., A mixed integer linear programming approach for municipal solid waste management. Journal of Environmental Sciences (China-English Edition), 9, pp. 431-445, 1997.
[16] Kratica, J., Tošic, D., Filipović, V. and Ljubić, I., Solving the simple plant location problem by genetic algorithm. RAIRO-Operations Research, 35 (1), pp. 127-142, 2001. DOI: 10.1051/ro:2001107
[17] Listeş, O. and Dekker, R., A stochastic approach to a case study for product recovery network design. European Journal of Operational Research, 160 (1), pp. 268-287, 2005. DOI: 10.1016/j.ejor.2001.12.001
[18] Najm, M.A., El-Fadel, M., Ayoub, G., El-Taha, M. and Al-Awar, F., An optimization model for regional integrated solid waste management I. Model formulation. Waste Management & Research, 20 (1), pp. 37-45, 2002. DOI: 10.1177/0734242X0202000105
[19] Noche, B., Rhoma, F.A., Chinakupt, T. and Jawale, M., Optimization model for solid waste management system network design case study. In: Computer and Automation Engineering (ICCAE), The 2nd International Conference on, 5, pp. 230-236, IEEE, 2010.
[20] Shafia, M.A., Rahmaniani, R., Rezai, A. and Rahmaniani, M., Robust optimization model for the capacitated facility location and transportation network design problem. International Conference on Industrial Engineering and Operations Management, Istanbul, 2012.
[21] Shepherd, S. and Sumalee, A., A genetic algorithm based approach to optimal toll level and location problems. Networks and Spatial Economics, 4 (2), pp. 161-179, 2004. DOI: 10.1023/B:NETS.0000027771.13826.3a
[22] Silva, C.M. and Biscaia, E.C., Genetic algorithm development for multi-objective optimization of batch free-radical polymerization reactors. Computers & Chemical Engineering, 27, pp. 1329-1344, 2003. DOI: 10.1016/S0098-1354(03)00056-5
[23] Silva, F.J.F. and De la Figuera, D.S., A capacitated facility location problem with constrained backlogging probabilities. International Journal of Production Research, 45 (21), pp. 5117-5134, 2007. DOI: 10.1080/00207540600823195
[24] Snyder, L.V., Facility location under uncertainty: A review. IIE Transactions, 38 (7), pp. 547-564, 2006. DOI: 10.1080/07408170500216480
[25] Snyder, L.V., Daskin, M.S. and Teo, C.P., The stochastic location model with risk pooling. European Journal of Operational Research, 179 (3), pp. 1221-1238, 2007. DOI: 10.1016/j.ejor.2005.03.076
[26] Syarif, A., Yun, Y. and Gen, M., Study on multi-stage logistic chain network: A spanning tree-based genetic algorithm approach. Computers & Industrial Engineering, 43 (1), pp. 299-314, 2002. DOI: 10.1016/S0360-8352(02)00076-1
[27] Torres-Soto, J.E. and Üster, H., Dynamic-demand capacitated facility location problems with and without relocation. International Journal of Production Research, 49 (13), pp. 3979-4005, 2011. DOI: 10.1080/00207543.2010.505588
[28] Tragantalerngsak, S., Holt, J. and Rönnqvist, M., An exact method for the two-echelon, single-source, capacitated facility location problem. European Journal of Operational Research, 123 (3), pp. 473-489, 2000. DOI: 10.1016/S0377-2217(99)00105-8
[29] Wagner, M.R., Bhadury, J. and Peng, S., Risk management in uncapacitated facility location models with random demands. Computers & Operations Research, 36 (4), pp. 1002-1011, 2009. DOI: 10.1016/j.cor.2007.12.008
[30] Wu, L.Y., Zhang, X.S. and Zhang, J.L., Capacitated facility location problem with general setup cost. Computers & Operations Research, 33 (5), pp. 1226-1241, 2006. DOI: 10.1016/j.cor.2004.09.012
[31] Zhou, J. and Liu, B., New stochastic models for capacitated location-allocation problem. Computers & Industrial Engineering, 45 (1), pp. 111-125, 2003. DOI: 10.1016/S0360-8352(03)00021-4
[32] Soto-de la Vega, D., Vidal-Vieira, J.G. and Vitor-Toso, E.A., Methodology for distribution centers location through multicriteria analysis and optimization. DYNA [Online], 81 (184), pp. 28-35, 2014. Available at: http://dyna.unalmed.edu.co/en/ediciones/184/articulos/v81n184a03/v81n184a03.pdf
[33] Peñuela, C., Granada, M. and Sanches, J.R., Algorithm for probabilistic analysis of distribution systems with distributed generation. DYNA [Online], 78 (169), pp. 79-87, 2011. Available at: http://dyna.unalmed.edu.co/en/ediciones/169/articulos/a09v78n169/a09v78n169.pdf

M. del R. Pérez-Salazar, completed her BSc Eng in Electronic Engineering in 2007 at the Instituto Tecnológico de Puebla and her MSc degree in Industrial Engineering in 2011 at the Instituto Politécnico Nacional, from which she graduated with honors. From 2009 to 2010, she was a member of the Instituto Politécnico Nacional Institutional Research Training Program. Since 2011, she has been a Full Professor at the Industrial Engineering Department, Instituto Tecnológico Superior de Tantoyuca. She is currently the coordinator of the Industrial Engineering Graduate Program, Instituto Tecnológico Superior de Tantoyuca. Her research interests include discrete event simulation, supply chain management, enterprise information systems, and artificial intelligence applied to risk analysis and decision making.

N.F. Mateo-Díaz, completed his BSc Eng in Industrial Engineering in 2009 and his MSc degree in Industrial Engineering in 2013, both at the Instituto Tecnológico Superior de Tantoyuca. He worked on projects regarding financial investment modeling, winning third place at the Second Latin American Financial Modeling contest using Risk Simulator software in 2013. Currently, he is a Full Professor at the Industrial Engineering Department, Instituto Tecnológico Superior de Tantoyuca.

R. García-Rodríguez, completed his BSc Eng in Computer Systems Engineering in 2005 and his MSc degree in Computer Science in 2010, both at the Instituto Tecnológico de Ciudad Madero. Since 2011, he has been a Full Professor at the Computer Systems Department, Instituto Tecnológico Superior de Tantoyuca. His research interests include software engineering, intelligent optimization and mathematical modeling.

C.E. Mar-Orozco, completed his BSc Eng in Industrial Engineering in 2009 at the Instituto Tecnológico de Ciudad Madero and his MSc degree in Industrial Management in 2012 at the Universidad Autónoma de Tamaulipas. His work experience includes both manufacturing and services organizations. He has won several prizes in investment project contests. Currently, he is a Full Professor at the Industrial Engineering Department, Instituto Tecnológico Superior de Tantoyuca.

L. Cruz-Rivero, completed her BSc Eng in Industrial Engineering in 2002 at the Instituto Tecnológico de Ciudad Madero and her MSc degree in Business Administration in 2008 at the Universidad Valle del Bravo Campus Tampico, from which she graduated Magna Cum Laude. She collaborated with the petrochemical sector in Petrocel-Temex as an assistant in the maintenance project to improve productivity. Since 2011, she has been a Full Professor at the Industrial Engineering Department, Instituto Tecnológico Superior de Tantoyuca. Her research area is the design and development of products and services, as a TRIZ practitioner certified by the Altshuller Institute.



Short-term generation planning by primal and dual decomposition techniques

José Antonio Marmolejo-Saucedo a & Román Rodríguez-Aguilar b

a Facultad de Ingeniería, Universidad Anáhuac México Norte, Mexico, D.F., México. jose.marmolejo@anahuac.mx
b Escuela Superior de Economía, Instituto Politécnico Nacional, Mexico, D.F., México. roman.rodriguez@ipn.mx

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 58-62. June, 2015. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online. DOI: http://dx.doi.org/10.15446/dyna.v82n191.51147

Abstract
This paper addresses short-term generation planning (STGP) through thermoelectric units. The mathematical model is presented as a Mixed Integer Non Linear Problem (MINLP). Several works on the state of the art of the problem have revealed that the computational effort for this problem grows exponentially with the number of time periods and the number of thermoelectric units. Therefore, we present two alternatives to solve the STGP, based on Benders' partitioning algorithm and Lagrangian relaxation, in order to reduce the computational effort. The proposal is to apply primal and dual decomposition techniques, which exploit the structure of the problem to reduce solution time by decomposing the STGP into a master problem and a subproblem. For Benders' algorithm, the master problem is a Mixed Integer Problem (MIP) and the subproblem is a Non Linear Problem (NLP). For Lagrangian relaxation, both the master problem and the subproblem are MINLPs. The computational experiments show the performance of both decomposition techniques applied to the STGP. These techniques allow us to save computation time when compared to some high-performance commercial solvers.
Keywords: Benders' algorithm; Lagrangian relaxation; subgradient; decomposition techniques; power generation.

Planeación de la generación a corto plazo mediante técnicas de descomposición primal y dual

Resumen
En este trabajo se aborda la planeación de la generación a corto plazo (STGP) a través de las unidades termoeléctricas. El modelo matemático se presenta como un problema no lineal entero mixto (MINLP). Varios trabajos del estado del arte del problema han revelado que el esfuerzo computacional de este problema crece exponencialmente con el número de períodos de tiempo y número de unidades termoeléctricas. Por lo tanto, presentamos dos alternativas para resolver la planeación de la generación a corto plazo (STGP) basadas en el algoritmo de partición Benders y relajación lagrangiana con la finalidad de reducir el esfuerzo computacional. La propuesta consiste en aplicar técnicas de descomposición primal y dual que explotan la estructura del problema para reducir el tiempo de solución mediante la descomposición de la planeación de la generación a corto plazo (STGP) en un problema maestro y un subproblema. Para el algoritmo de Benders el problema maestro es un problema entero mixto (MIP) y para el subproblema es un problema no lineal (NLP). Para la relajación lagrangiana, el problema maestro y el subproblema son MINLP. Los experimentos computacionales muestran el rendimiento de ambas técnicas de descomposición aplicadas a la planeación de la generación a corto plazo (STGP). Estas técnicas permiten ahorrar tiempo de cálculo en comparación con algunos optimizadores comerciales de alto rendimiento.
Palabras clave: Algoritmo de Benders; relajación lagrangiana; subgradiente; técnicas de descomposición; generación eléctrica.

1. Short-term generation planning

The efficient short-term generation planning of available energy resources for satisfying load demand has become an important task in modern power systems [14,17].

A definition of short-term generation planning (unit commitment) is the operation of generation facilities to produce energy at the lowest cost to reliably serve consumers, recognizing any operational limits of generation facilities. The problem consists of deciding which units must be used over a given planning horizon, usually 24 time periods. In the typical unit commitment [5], the transmission network is not considered; in this case we consider network constraints, and the problem consists of determining the mix of generators and their estimated output level to meet the expected demand of electricity over a given time horizon (a day or a week), while satisfying the load demand, the spinning reserve requirement and the transmission network constraints. An electric network consists of many generation nodes with various generating capacities and cost functions, transmission lines and power demand nodes [7,9,12]. Over the past few years, several studies have been conducted to define appropriate models and algorithms in order to obtain the optimal solution of the STGP. There are many optimization techniques for solving this problem; the major solution approaches are decomposition techniques, branch-and-bound techniques, and metaheuristics [15,16]. Since short-term generation scheduling (STGS) with network constraints is an NP-hard Mixed Integer Non Linear Problem, for large power systems exact methods have proved to be inefficient [10]. Because of this, we present an alternative for generating quality bounds in short computing time, based on primal and dual decomposition.

2. The Model

Parameters:
$A_j$: start-up cost of power plant $j$.
$B_{nm}$: susceptance of line $n$-$m$.
$C_{nm}$: transmission capacity limit of line $n$-$m$.
$D_{nk}$: load demand at node $n$ during period $k$.
$E_j(t_{jk})$: nonlinear function representing the operating cost of power plant $j$ as a function of its power output in period $k$.
$E_{j1}$: linear coefficient of the operating cost of plant $j$.
$E_{j2}$: quadratic coefficient of the operating cost of plant $j$.
$F_j$: fixed cost of power plant $j$.
$V_{jk}$: parameter equal to 1 when plant $j$ is committed in period $k$ after the dual subproblem is solved.
$Y_{jk}$: parameter equal to 1 when plant $j$ is started up at the beginning of period $k$ after the dual subproblem is solved.
$K_{nm}$: conductance of line $n$-$m$.
$R_k$: spinning reserve requirement during period $k$.
$\overline{T}_j$: maximum power output of plant $j$.
$\underline{T}_j$: minimum power output of plant $j$.
$n_r$: reference node with angle zero.

Variables:
$t_{jk}$: power output of plant $j$ in period $k$.
$v_{jk}$: binary variable equal to 1 when plant $j$ is committed in period $k$.
$y_{jk}$: binary variable equal to 1 when plant $j$ is started up at the beginning of period $k$.
$\delta_{nk}$: angle of node $n$ in period $k$.
$\lambda_{nk}$: Lagrangian multiplier associated with the power balance constraint.
$\mu_k$: Lagrangian multiplier associated with the spinning reserve requirement.
$\underline{\pi}_{nk}, \overline{\pi}_{nk}$: Lagrangian multipliers associated with the transmission capacity limits.

Sets:
$J$: set of indices of all plants.
$K$: set of period indices.
$N$: set of indices of all nodes.
$\Theta_n$: set of indices of the power plants $j$ at node $n$.
$\Lambda_n$: set of indices of nodes connected and adjacent to node $n$.
$\Psi$: set of Benders' iterations, indexed by $v$.

The problem STGP is defined as follows:

$\min Z = \sum_{k \in K} \sum_{j \in J} \left[ F_j\, v_{jk} + A_j\, y_{jk} + E_j(t_{jk}) \right]$  (1)

subject to:

$\sum_{j \in \Theta_n} t_{jk} + \sum_{m \in \Lambda_n} B_{nm}\,(\delta_{mk} - \delta_{nk}) - \sum_{m \in \Lambda_n} K_{nm}\left[1 - \cos(\delta_{mk} - \delta_{nk})\right] = D_{nk}, \quad \forall n \in N,\; k \in K$  (2)

$\sum_{j \in J} \overline{T}_j\, v_{jk} \geq \sum_{n \in N} D_{nk} + R_k, \quad \forall k \in K$  (3)

$\underline{T}_j\, v_{jk} \leq t_{jk} \leq \overline{T}_j\, v_{jk}, \quad \forall j \in J,\; k \in K$  (4)

$-C_{nm} \leq B_{nm}\,(\delta_{mk} - \delta_{nk}) \leq C_{nm}, \quad \forall n \in N,\; k \in K,\; m \in \Lambda_n$  (5)

$y_{jk} \geq v_{jk} - v_{j,k-1}, \quad \forall j \in J,\; k \in K$  (6)

$-\pi \leq \delta_{nk} \leq \pi, \quad \forall n \in N \setminus \{n_r\},\; k \in K$  (7)

$\delta_{n_r k} = 0, \quad \forall k \in K$  (8)

$v_{jk},\, y_{jk} \in \{0,1\}, \quad \forall j \in J,\; k \in K$  (9)

The objective, eq. (1), minimizes the start-up cost $A_j y_{jk}$ and the operating cost of each plant. The operating cost of each plant $j$ includes a fixed cost $F_j v_{jk}$ and a variable cost $E_j(t_{jk})$.


There is a power balance constraint, eq. (2), per node and time period. In each period, production has to satisfy the demand and the losses at each node. Power line losses are modeled through a cosine approximation, and it is assumed that the demand for electric energy is known and is discretized into time periods. There are many approximations for modeling power line losses, some of them linear and some non-linear. Further details of the cosine approximation



can be found in [1]. Spinning reserve requirements are modeled in eq. (3): in each period, the running units have to be able to satisfy the demand and the prespecified spinning reserve. In eq. (4), each unit has technical lower and upper bounds on its power production. The transmission capacity limits of the lines, eq. (5), avoid dynamic stability problems in the system. Constraint eq. (6) holds the logic of running, start-up and shut-down of the units: a running unit cannot be started up. Finally, the angle at every bus has a lower and an upper bound, eq. (7).
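To make eqs. (1)-(2) concrete, the following sketch evaluates the objective and the nodal balance residual for a given candidate schedule. It is only an illustration of the formulas with hypothetical data structures (nested lists indexed by plant or node and period), not the authors' GAMS implementation; the quadratic cost $E_j(t) = E_{j1} t + E_{j2} t^2$ follows the coefficient definitions above.

```python
import math

def operating_cost(E1, E2, t):
    """Quadratic operating cost E_j(t) = E_j1 * t + E_j2 * t^2."""
    return E1 * t + E2 * t * t

def total_cost(F, A, E1, E2, v, y, t):
    """Objective (1): fixed + start-up + operating cost over plants and periods."""
    return sum(F[j] * v[j][k] + A[j] * y[j][k] + operating_cost(E1[j], E2[j], t[j][k])
               for j in range(len(F)) for k in range(len(v[0])))

def balance_residual(n, k, t, delta, plants_at, neighbors, B, K_, D):
    """Left side minus right side of the power balance (2) at node n, period k."""
    injection = sum(t[j][k] for j in plants_at[n])
    flow = sum(B[n][m] * (delta[m][k] - delta[n][k]) for m in neighbors[n])
    loss = sum(K_[n][m] * (1 - math.cos(delta[m][k] - delta[n][k])) for m in neighbors[n])
    return injection + flow - loss - D[n][k]
```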

3. Benders' Decomposition

Benders decomposition has been successfully applied to take advantage of underlying problem structures in various optimization problems, such as the planning of power systems. The basic idea of this method is the generation, at each iteration, of an upper bound and a lower bound on the sought solution of the problem. The upper bound results from the primal subproblem, while the lower bound results from the master problem [1].

Benders' master problem (a MIP over the commitment decisions):

$f_{Mast}^{v} = \min_{\alpha,\, v_j(k),\, y_j(k)} \; \alpha + \sum_{k \in K} \sum_{j \in J} \left[ F_j\, v_j(k) + A_j\, y_j(k) \right]$  (10)

subject to:

$\sum_{j \in J} \overline{T}_j\, v_j(k) \geq \sum_{n \in N} D_n(k) + R(k), \quad \forall k \in K$

$y_j(k) \geq v_j(k) - v_j(k-1)$

$\alpha \geq f_{sub}^{(v-1)} + \sum_{k \in K} \sum_{j \in J} \mu_j^{(v-1)}(k) \left[ v_j(k) - V_j^{(v-1)}(k) \right]$

$v_j(k),\, y_j(k) \in \{0,1\}$

Benders' subproblem (an NLP in the variables $t_j(k)$ and $\delta_n(k)$, with the commitment obtained from the master problem fixed):

$f_{sub}^{v} = \min_{t_j(k),\, \delta_n(k)} \; \sum_{k \in K} \sum_{j \in J} E_j(t_j(k))$  (11)

subject to eqs. (4), (5), (7), (8) and (9), the power balance

$\sum_{j \in \Theta_n} t_j(k) + \sum_{m \in \Lambda_n} B_{nm} \left[ \delta_m(k) - \delta_n(k) \right] - \sum_{m \in \Lambda_n} K_{nm} \left[ 1 - \cos(\delta_m(k) - \delta_n(k)) \right] = D_n(k)$

and the fixing constraints

$v_j(k) = V_j^{v}(k) : \mu_j^{(v)}(k), \qquad y_j(k) = Y_j^{v}(k)$

where $\mu_j^{(v)}(k)$ are the dual variables of the fixing constraints, used to build the Benders cut of the next master problem.

4. Lagrangian Relaxation

Lagrangian relaxation [2,8,11] decomposes the STGP into a master problem and subproblems that are easier to solve separately. The subproblems are linked by Lagrange multipliers that are added to the master problem to yield a dual problem. The dual problem has lower dimensions than the primal problem and is easier to solve. The multipliers are updated through different methods, usually a subgradient method. The major difficulty of this method is obtaining solution feasibility, because of the dual nature of the algorithm.

Dual master problem:

$\max_{\lambda(k),\, \mu(k),\, \underline{\pi}(k),\, \overline{\pi}(k)} \; \sum_{k \in K} \sum_{n \in N} L_n^{v}$  (12)

subject to: $\lambda(k),\, \mu(k),\, \underline{\pi}(k),\, \overline{\pi}(k) \geq 0$

Dual subproblem, solved per node $n$:

$L_n(t_{jk}, v_{jk}, y_{jk}, \delta_{nk}, \lambda_k, \mu_k, \underline{\pi}_k, \overline{\pi}_k) = \min \Big\{ \sum_{k \in K} \sum_{j \in \Theta_n} \left[ F_j v_{jk} + A_j y_{jk} + E_j(t_{jk}) \right]$
$+ \sum_{k \in K} \lambda_k \Big[ D_{nk} - \sum_{j \in \Theta_n} t_{jk} - \sum_{m \in \Lambda_n} B_{nm}(\delta_{mk} - \delta_{nk}) + \sum_{m \in \Lambda_n} K_{nm}\big(1 - \cos(\delta_{mk} - \delta_{nk})\big) \Big]$
$+ \sum_{k \in K} \mu_k \Big[ D_{nk} + R_k - \sum_{j \in J} \overline{T}_j v_{jk} \Big]$
$+ \sum_{k \in K} \overline{\pi}_k \Big[ C_{nm} - B_{nm}(\delta_{mk} - \delta_{nk}) \Big] + \sum_{k \in K} \underline{\pi}_k \Big[ -B_{nm}(\delta_{mk} - \delta_{nk}) - C_{nm} \Big] \Big\}$  (13)

subject to eqs. (4), (6), (7), (8) and (9).

5. Computational experimentation

Three test systems are presented to evaluate the performance of the proposed algorithms:
 The IEEE 24-bus test system, with 24 nodes, 24 thermal units and 38 transmission lines [3,6].
 A portion of the bus electric energy system of mainland Spain, with 104 nodes, 62 thermal units and 160 transmission lines [1].
 The IEEE 118-bus test system, with 118 nodes, 54 thermal units and 186 transmission lines [13].

The mathematical model of the STGP was implemented in GAMS [4], using the DICOPT solver for the MINLP problems (dual subproblems), CONOPT for the NLP problems (primal subproblems) and CPLEX for the MIP problems (primal master problem). All the models were solved on an AMD Phenom II N970 Quad-Core with a 2.2 GHz processor and 8 GB of RAM. The NLP and MINLP solvers were tuned to obtain solutions with a tolerance of 10% optimality; therefore, the master and subproblem solutions obtained from the solvers are almost feasible, good solutions. The iterative procedure continues until a stopping criterion is reached; in this case, the stopping criterion was 0.1% optimality of the MINLP solver.
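The overall iteration just described can be sketched in a few lines. The code below is a generic, hypothetical outline of the Benders loop, not the authors' GAMS procedure: `solve_master` and `solve_subproblem` are assumed callbacks wrapping the MIP of eq. (10) and the NLP of eq. (11), and the stopping test uses the relative gap between the bounds (the multipliers of inequality constraints in the Lagrangian variant would additionally be projected onto the nonnegative orthant at each update).

```python
def benders(solve_master, solve_subproblem, tol=1e-3, max_iters=50):
    """Generic Benders loop: the master gives a lower bound, the subproblem an upper bound."""
    cuts, lower, upper = [], float("-inf"), float("inf")
    commitment = None
    for _ in range(max_iters):
        commitment, lower = solve_master(cuts)             # MIP: v, y and bound alpha
        cost, multipliers = solve_subproblem(commitment)   # NLP with v, y fixed
        upper = min(upper, cost)
        if upper - lower <= tol * abs(upper):              # relative optimality gap
            break
        cuts.append((cost, commitment, multipliers))       # data for one Benders cut
    return commitment, upper
```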


Table 1.
Lagrangian relaxation

System      Gap %    vio rel %    CPU time
IEEE-24     4.47     0.66 (2)     8'30''
IEEE-118    13.8     0.88 (2)     17'25''
SIS-104     0.4      1.34 (2)     30'23''
Source: The authors

Table 2.
Benders' decomposition

System      Gap %    vio rel %    CPU time
IEEE-24     9.3      0.03 (2)     36'57''
IEEE-118    4.2      0.02 (2)     38'03''
SIS-104     1.9      0.07 (2)     1 hr 27'
Source: The authors

Figure 1. Lagrangian bound of IEEE-24. Source: The authors

Figure 2. Lagrangian bound of SIS-104. Source: The authors

Figure 3. Lagrangian bound of IEEE-118. Source: The authors

Figure 4. Benders bounds of IEEE-24. Source: The authors

Figure 5. Benders bounds of SIS-104. Source: The authors





Figure 6. Benders bounds of IEEE-118 Source: The authors

Conclusions

We evaluated the performance of primal and dual decomposition techniques in order to compare the quality of the solutions they provide. Three test systems were used to evaluate the performance of the proposed solution methods. Although global optimality is not guaranteed for MINLP problems, the proposed strategies show good convergence properties and provide better results than those obtained by solving the problem with other methods; thus, this application can be extended to large-scale problems. The results show a decrease in computational time with the Lagrangian relaxation of the problem, as well as a lower gap (see Figs. 1, 2 and 3). However, the percentage of relative violation (vio rel) increases, due to the relaxation of system constraints. The solution obtained by Lagrangian relaxation is sometimes infeasible for the original problem; to achieve feasibility, we propose using the lowest-cost thermal plants as part of the solution vector, which would provide enough electricity generation to meet the demand requirements in all periods. By contrast, in Benders' decomposition there is no feasibility loss, because the problem retains the same set of constraints. However, through experimentation we observed a small percentage of relative deviation attributable to the optimizer (see Figs. 4, 5 and 6).


References

[1] Alguacil, N. and Conejo, A., Multiperiod optimal power flow using Benders decomposition. IEEE Transactions on Power Systems, 15 (1), pp. 196-201, 2000. DOI: 10.1109/59.852121
[2] Guignard, M., Lagrangean relaxation. Top, 11 (2), pp. 151-228, 2003. DOI: 10.1007/BF02579036
[3] Charman, P.F., Bhavaraju, M.P., Billington, R. et al., IEEE reliability test system. IEEE Transactions on Power Apparatus and Systems, 98 (6), pp. 2047-2054, 1979.
[4] Brooke, A. and Kendrick, D., GAMS - The solver manuals. GAMS/COIN, GAMS Development Corporation, Washington, 2007.
[5] Wood, A.J. and Wollenberg, B., Power generation, operation and control. New York: John Wiley and Sons, Inc., 1996.
[6] Grigg, C., Whong, P., Albretch, P., Allan, R. et al., The IEEE reliability test system-1996. IEEE Transactions on Power Systems, 14 (3), pp. 1010-1020, 1999. DOI: 10.1109/59.780914
[7] Granada, M., Rider, M., Mantovani, J. and Shahidehpour, M., Multi-areas optimal reactive power flow. In: Transmission and Distribution Conference and Exposition: Latin America, 2008 IEEE/PES, pp. 1-6, 2008.
[8] Ruzic, S. and Rajakovic, N., A new approach for solving extended unit commitment problem. IEEE Transactions on Power Systems, 6 (1), pp. 269-277, 1991.
[9] Guo, S., Guan, X. and Zhai, Q., The necessary and sufficient conditions for determining feasible solutions to unit commitment problems with ramping constraints. In: Power Engineering Society General Meeting, 2005, IEEE, 1, pp. 344-349, 2005.
[10] Sheble, G. and Fahd, G., Unit commitment literature synopsis. IEEE Transactions on Power Systems, 9 (1), pp. 128-135, 1994. DOI: 10.1109/59.317549
[11] Salam, M.S., Nor, K. and Hamdam, A.R., Hydrothermal scheduling based Lagrangian relaxation approach to hydrothermal coordination. IEEE Transactions on Power Systems, 13 (1), pp. 226-235, 1998. DOI: 10.1109/59.651640
[12] Carpentier, J., Optimal power flows. International Journal of Electrical Power & Energy Systems, 1 (1), pp. 3-15, 1979. DOI: 10.1016/0142-0615(79)90026-7
[13] Marmolejo, J.A. and Rodríguez, R., Fat tail model for simulating test systems in multiperiod unit commitment. Mathematical Problems in Engineering, in press, 2015. DOI: 10.1155/2015/738215
[14] Ortiz-Pimiento, N.R. and Diaz-Serna, F.J., Validación de soluciones obtenidas para el problema del despacho hidrotérmico de mínimo costo empleando la programación lineal binaria mixta. DYNA [Online], 75 (156), pp. 43-54, 2008. [date of reference: November 30th of 2014]. Available at: http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=s001273532008000300004&lng=en&nrm=iso
[15] Gallego-Vega, L.E. and Duarte-Velasco, O.G., Modeling of bidding prices in power markets using clustering and fuzzy association rules. DYNA [Online], 78 (166), pp. 108-117, 2011. [date of reference: November 30th of 2014]. Available at: http://www.redalyc.org/articulo.oa?id=49622365014
[16] Medina, M.A., Ramirez, J.M., Coello, C.A. and Das, S., Use of a multi-objective teaching-learning algorithm for reduction of power losses in a power test system. DYNA [Online], 81 (185), pp. 196-203, 2014. [date of reference: November 30th of 2014]. Available at: http://www.redalyc.org/articulo.oa?id=49631031029
[17] Gimenez-Alvarez, J.M., Schweickardt, G. and Gómez-Targarona, J.C., An overview of wind energy, taking into consideration several important issues including an analysis of regulatory requirements for the connection of wind generation into the power system. DYNA [Online], 79 (172), pp. 108-117, 2012. [date of reference: November 30th of 2014]. Available at: http://www.redalyc.org/articulo.oa?id=49623221013

J.A. Marmolejo, is a professor in the Faculty of Engineering at Universidad Anahuac Mexico Norte, Mexico. He has a PhD in Engineering, specializing in Operations Research, from the National Autonomous University of Mexico, and is currently a member of the National System of Researchers of the National Council of Science and Technology (CONACYT) of Mexico. He is on the Board of the Mexican Society of Operations Research. His areas of interest are large-scale optimization and mathematical modeling of power systems and logistical problems in the supply chain.

R. Rodriguez, has a PhD in Economics, specializing in mathematical finance, from the School of Economics at the National Polytechnic Institute, a MSc degree in Engineering and Statistics from the National Autonomous University of Mexico, and a MSc degree in Public Policy from the Instituto Tecnológico y de Estudios Superiores de Monterrey. Dr. Rodriguez is a member of the Mexican Mathematical Society, the National Association of Economists and the Economic Modeling Network EcoMod. His areas of interest are stochastic optimization, mathematical finance and econometrics.



Technical efficiency of thermal power units through a stochastic frontier

José Antonio Marmolejo-Saucedo a, Román Rodríguez-Aguilar b, Miguel Gastón Cedillo-Campos c & María Soledad Salazar-Martínez d

a Facultad de Ingeniería, Universidad Anáhuac México Norte, México D.F., México. jose.marmolejo@anahuac.mx
b Escuela Superior de Economía, Instituto Politécnico Nacional, México D.F., México. roman.rodriguez@ipn.mx
c Instituto Mexicano del Transporte, Querétaro, México. gaston.cedillo@imt.mx
d Universidad Nacional Autónoma de México, México D.F., México. s.salazar@unam.mx

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 63-68. June, 2015. Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online. DOI: http://dx.doi.org/10.15446/dyna.v82n191.51152

Abstract
This work presents a model to obtain the stochastic frontier production function of a Mexican power generation company. The stochastic frontier allows us to evaluate the technical efficiency of an energy producer according to its level of inputs. Electricity generation based on thermal generation is highly expensive due to the operational inefficiency of thermal power plants. At the moment, in Mexico, the technical efficiency of thermal power units has not been studied for the national electricity system. Therefore, in order to know the productivity levels of thermal generation, an empirical application of the stochastic frontier model is presented using panel data on thermoelectric units of the Mexican electricity system for the 2009-2013 period.
Keywords: Stochastic frontier; thermal generation; technical efficiency.

Eficiencia técnica de unidades de generación termoeléctrica mediante una frontera estocástica

Resumen
Este trabajo presenta un modelo para obtener la frontera estocástica de la función de producción de una empresa mexicana de generación eléctrica. La frontera estocástica permite evaluar los niveles de eficiencia técnica del productor de energía respecto de los insumos utilizados. La generación eléctrica obtenida mediante generación térmica es muy costosa debido a la ineficiencia operativa de las centrales termoeléctricas. Hasta el momento, en México, la eficiencia técnica de las centrales termoeléctricas no ha sido estudiada para el Sistema Eléctrico Nacional. Por lo tanto, con la finalidad de conocer los niveles de productividad de la generación térmica, se realizó una aplicación de un modelo de frontera estocástica utilizando datos de panel de las centrales termoeléctricas del Sistema Eléctrico Mexicano para el periodo 2009-2013.
Palabras clave: Frontera estocástica; generación térmica; eficiencia técnica.

1. Power generation: the case of the Mexican electricity system

The generation of electricity using fossil fuels in Mexico is a process with broad participation: in 2011, the plants that only used fossil fuels accounted for 72.6% of the total electricity produced, and this broad participation is expected to continue in order to meet future electricity demand [1].

The use of fossil fuels for power generation has been severely questioned, as the production of carbon dioxide (CO2) contributes to the accumulation of greenhouse gases (GHGs) emitted into the atmosphere. Therefore, there are two prospective scenarios of capacity expansion for the 2013-2027 period. The first is the planned expansion of the public power generation companies, with a 31.9% share of clean technologies in 2027.




This comprises 18.4% of hydro capacity, 4.1% wind power, 1.8% nuclear, and a remaining 2.4% of geothermal, solar and biogas capacity. In the alternative scenario, the expansion program is aligned with the goals set in local laws, which seek to increase generation from non-fossil sources to 35% by 2027 [2]. This means that, although fossil fuel generation will be reduced, its use will stand, in the worst case, at 65% in 2027 [3]. Therefore, the total elimination of fossil fuels for power generation is nearly impossible, because they still represent one of the lower-cost options in Mexico. Figs. 1, 2 and 3 show the behavior of thermal power generation, fuel oil consumption, and useful existence (stock of fuel oil) from 2009 to 2013.

Figure 4. Fuel oil consumption vs. power generation. Source: The authors

Fig. 4 shows that the fuel oil consumption of the power plants over time is directly related to the behavior of generation; i.e., power generation is a function of consumption. Both series follow the same trend, but they do not match at the same points throughout the timeline. There are differences between consumption and generation: in some years the points almost coincide and the graphical difference is very low, while in other years this difference increases and the points move apart from each other. It could be concluded that when the points of consumption and generation are far apart, we have a period of technical inefficiency. In 2013, the generation per unit volume of fuel (m³) increased, showing the lowest difference observed: a total of 4.9 MWh/m³ was generated in 2013, while in 2010 it was 4.3 MWh/m³. Ideally, the points should coincide throughout the timeline, i.e., 100% technical efficiency. Regarding the level of the fuel oil inventory (useful existence), it responds to a policy for reducing the risk of not meeting the demand for electricity. Indeed, the public power generation company must ensure at all times that the fuel oil supply to the thermal plants is guaranteed for any contingency.

Figure 1. Power generation per year with fuel oil. Source: The authors

Figure 2. Fuel oil consumption per year. Source: The authors

2. Technical efficiency

The efficiency analysis is based on microeconomic analysis, through maximizing a given production function using a set of inputs and technology. There are two approaches to the optimization of a production function: the maximization of production, or cost minimization given a certain level of production. In [4], two conceptual visions of economic efficiency were proposed:
 Technical efficiency: the ability of a production unit to obtain the highest level of product, given a level of inputs and technology.
 Allocative efficiency: the ability of a production unit to use the inputs in optimal proportions, given a price level and a technological level.

Figure 3. Useful existence (stock) per year. Source: The authors




The usefulness of technical efficiency lies in generating information to improve the management capacity of the production units, and in relating the inputs used to the production obtained in order to define optimal improvement strategies. There are several methods for estimating technical efficiency: parametric, non-parametric, deterministic, and stochastic.
 Parametric: assume a functional form of the production function.
 Non-parametric: do not assume any functional form of the production function.
 Deterministic: assume that the entire distance between the production frontier and the observed production value of a productive unit corresponds to technical inefficiency.
 Stochastic: production has a random component, so only part of the deviation of the production units from the optimum represents technical inefficiency.
Techniques for estimating technical efficiency can also be classified into primal and dual approaches. In the primal approach, technical efficiency is estimated based on maximizing production or minimizing the cost function; in the dual approach, technical efficiency is estimated for both functions [16-18]. Models can also be categorized in terms of temporality, as cross-section or panel data models. The non-parametric models generally used are estimates based on Data Envelopment Analysis (DEA) methodologies, which rely on mathematical programming. The great advantage of using DEA methodologies is that a specific functional form of the production function is not required. On the other hand, their main disadvantage is that they are deterministic models that can be affected by the number of inputs used as well as by the presence of outliers [14,15].

Figure 5. Differences between deterministic and stochastic efficiency. Source: The authors

2.1. Stochastic frontier

In 1977, [5,6] simultaneously proposed the stochastic frontier. The initial specification was a cross-sectional model with an error term composed of two factors: one that measures the random effect and another that measures the technical inefficiency. [7-10] present developments of stochastic frontier models in various applications. In [11], a model was proposed for the inefficiency effects of the stochastic frontier production function; this model is applied in the analysis of data on electricity generation during different time periods. The most important point regarding the methodology of estimation of stochastic frontiers is that the treatment of the error terms does not assume that all errors are attributable to a random factor: a segment of them is attributable to technical inefficiency. Fig. 5 shows the comparison between deterministic and stochastic methods for estimating technical inefficiency for a production function.

A number of empirical studies have estimated stochastic frontiers and predicted firm-level efficiencies using these estimated functions, and then regressed the predicted efficiencies upon firm-specific variables in an attempt to identify some of the reasons for differences in predicted efficiencies between firms in an industry. This two-stage estimation procedure is inconsistent in its assumptions regarding the independence of the inefficiency effects in the two estimation stages, and it is unlikely to provide estimates that are as efficient as those that could be obtained using a single-stage estimation procedure. This issue was addressed in [11,12], where the authors proposed stochastic frontier models in which the inefficiency effects are expressed as an explicit function of a vector of firm-specific variables and a random error. [13,14] proposed a model which is equivalent to the [12] specification, with the exceptions that allocative efficiency is imposed, the first-order profit maximizing conditions are removed, and panel data is permitted. The model specification may be expressed as:

$Y_{it} = x_{it}\beta + (V_{it} - U_{it}), \quad i = 1, \ldots, N; \; t = 1, \ldots, T$  (1)

Where:
$Y_{it}$ is the logarithm of the production of the i-th firm in the t-th time period;
$x_{it}$ is a $k \times 1$ vector of transformations of the input quantities of the i-th firm in the t-th time period;
$\beta$ is a vector of unknown parameters;
$V_{it}$ are random variables assumed to be iid $N(0, \sigma_V^2)$;
$U_{it}$ are non-negative random variables, assumed to account for technical inefficiency in production and to be independently distributed as truncations at zero of $N(m_{it}, \sigma_U^2)$, with $m_{it} = z_{it}\delta$;
$z_{it}$ is a $p \times 1$ vector of variables which influence the efficiency of a firm, and $\delta$ is a $1 \times p$ vector of parameters to be estimated.

2.2. Efficiency predictions

The measures of technical efficiency relative to the production frontier are defined as:



$EFF_{it} = E(Y_{it}^{*} \mid U_{it}, x_{it}) \,/\, E(Y_{it}^{*} \mid U_{it} = 0, x_{it})$  (2)

where $Y_{it}^{*}$ is the production of the i-th firm, which will be equal to $Y_{it}$ when the dependent variable is in original units and equal to $\exp(Y_{it})$ when the dependent variable is in logarithms. In the case of a production frontier, $EFF_{it}$ takes a value between zero and one, while it takes a value between one and infinity in the cost function case.

3. Stochastic frontier model to measure efficiency in a Mexican power generation company

The study includes data for a group of 21 thermoelectric units. Annual average gross electricity generation, level of consumption, and useful existence of fuel oil are analyzed. A balanced panel for the 21 thermoelectric units was built for the 2009-2013 period. The useful existence and consumption of fuel oil are expressed as annual averages in cubic meters, and the annual average gross generation of electrical energy in MWh. The objective of measuring technical efficiency in the production of electrical energy is to assess whether the levels of consumption and useful existence correspond with the level of production of each thermoelectric unit, assuming a given technology. The goal is to evaluate the interaction between the inputs used to generate electric power and the congruence of the results. Two specifications of the production function were tested: translogarithmic and Cobb-Douglas. The Cobb-Douglas production function was chosen because it generates better estimates, so all the variables are expressed in logarithmic fashion. The advantage of having panel data is that it allows the evaluation of the technical efficiency of the thermoelectric units over time, capturing the dynamic performance of each production unit. The functional form of the model is presented in equation (3):

$\ln GEN_{it} = \beta_0 + \beta_1 \ln CONS_{it} + \beta_2 \ln STOCK_{it} + (V_{it} - U_{it}), \quad i = 1, \ldots, 21; \; t = 2009, \ldots, 2013$  (3)

A fixed-effects model was estimated by the maximum likelihood method, considering fuel oil consumption and useful existence as independent variables and power generation as the dependent variable. Technical efficiency was considered variable over time. The results obtained for the proposed model are presented in Table 1.

Table 1.
Results of the panel data model

Variable          Coefficient    Standard error    P-value
Consumption       0.4304761      0.0476022         0.0000
Stock             0.4792700      0.0800474         0.0000
Constant          1.4141250      0.3824081         0.0000
Mu                0.2714063      0.2098633         0.1960
Eta              -0.1630130      0.0318923         0.0609
Log likelihood    56.822957
Prob. > Chi2      0.000000
Source: The authors, estimated in STATA.

The model results show that the general model and the inputs considered are statistically significant. The values of the estimated parameters of the inputs considered are consistent with expectations; i.e., both inputs are positively involved in the generation of electricity. The first test (Mu) states that the mean of the truncated normal is zero; it can be seen that at the 5% significance level the null hypothesis is not rejected. The null hypothesis of the second test (Eta) is that inefficiency is time invariant. The values of the null hypothesis are located at the limit at 95% significance; the hypothesis that inefficiency is time invariant is rejected at 90% significance, and inefficiency shall be deemed dynamic over time.

Based on the estimated model parameters, we calculated the level of efficiency according to equation (2). The estimated level of technical efficiency lies between 0 and 1, where 1 means that the production unit is efficient and values below 1 indicate that the unit is inefficient; the greater the distance from 1, the greater the inefficiency of the unit. Fig. 6 shows the efficiency estimates for 2013 for all the thermoelectric units considered in the analysis. It is noted that Rio Bravo, Tula Steam and Manzanillo are the most efficient thermoelectric units; by contrast, Lerma, Lerdo and Valladolid Vapor are the worst performers in terms of technology. Of the 21 thermoelectric units analyzed, only four have an efficiency of around 90%, ten have an efficiency of around 80%, and seven show an efficiency level below 70%. It is noteworthy that only four thermoelectric units show efficiency levels of around 90%; strictly speaking, this implies that 81% of the thermoelectric units operate inefficiently.

An efficiency index was calculated based on the technical efficiency estimates for each thermoelectric unit in the period of analysis, using 2009 as the base year. The index shows the evolution of efficiency for each thermoelectric unit in time: values greater than 100 indicate that efficiency has improved, and values below 100 indicate that efficiency has worsened. Fig. 7 shows the evolution of the efficiency index of each unit analyzed up to 2013. It is noted that, in general, the level of efficiency of all units has decreased since 2009; in particular, for the three thermoelectric units with the worst performance, efficiency has fallen on average by 3% in the period of analysis.
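To make the estimation concrete, the sketch below fits a simpler variant than the Battese-Coelli panel specification used above: a cross-sectional Cobb-Douglas frontier with half-normal inefficiency, estimated by maximum likelihood on simulated data. All data and parameter values are hypothetical; it only illustrates the mechanics behind eqs. (1)-(2).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 5, size=(n, 2))                    # log inputs (consumption, stock)
v = rng.normal(0, 0.2, n)                             # symmetric noise
u = np.abs(rng.normal(0, 0.3, n))                     # half-normal inefficiency
y = 1.4 + 0.43 * x[:, 0] + 0.48 * x[:, 1] + v - u     # log output

def neg_loglik(theta):
    """Half-normal stochastic frontier log-likelihood (Aigner et al., 1977)."""
    b0, b1, b2, ln_sv, ln_su = theta
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - (b0 + b1 * x[:, 0] + b2 * x[:, 1])
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[1.0, 0.5, 0.5, -1.0, -1.0], method="BFGS")
b0, b1, b2, ln_sv, ln_su = res.x
sv, su = np.exp(ln_sv), np.exp(ln_su)

# Jondrow et al. point estimate of u_i, then efficiency EFF_i = exp(-u_hat).
eps = y - (b0 + b1 * x[:, 0] + b2 * x[:, 1])
s2 = sv**2 + su**2
mu_star = -eps * su**2 / s2
s_star = sv * su / np.sqrt(s2)
u_hat = s_star * (norm.pdf(mu_star / s_star) / norm.cdf(mu_star / s_star)) + mu_star
print("betas:", b0, b1, b2, " mean efficiency:", np.exp(-u_hat).mean())
```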

Figure 6. Technical efficiency, 2013. Source: The authors.






Figure 7. Technical efficiency index, 2013. Source: The authors.

The results show that 81% of the thermoelectric units are inefficient when a threshold efficiency of at least 90% is considered and, observing the evolution of efficiency in time, all thermoelectric units have decreasing levels of efficiency. The estimated level of efficiency is an indicator for analyzing in more detail the operation of the thermoelectric units with the lowest efficiency levels, and it is also useful for evaluating the overall performance of the electric power sector. These results show the need to assess in more detail the operation of 81% of the thermoelectric plants in Mexico.

Figure 8. Total Cost of Maintenance vs. Energy Production. Source: The authors

3.1. Cause analysis of inefficiency

There are several factors that affect the productivity of the systems; for example, technical efficiency, which can be incorporated in the stochastic frontier. One of the reasons for technical efficiency showing a downward trend in all thermoelectric plants could be the wear and tear of the generating units, given that in many cases the right kind of maintenance, determined by the number of operation hours (inspection, minor, intermediate and major; see Table 2), is not given, mainly due to limited budget or other political factors.

Table 2.
Type of maintenance that should be given to a steam turbine

Maintenance type    Period (OH)
Inspection          4,000
Minor               8,000
Intermediate        16,000
Major               32,000
Source: The authors

In Figs. 8 and 9 we see examples of the relationship between the total annual cost of maintenance and energy production, and between the variable annual maintenance cost (CVM) and the hours of operation (HO), respectively. Thus, considering the age of most power plants, maintenance represents a considerable budget, which often cannot be covered in a timely manner, reducing technical efficiency.

The diagnosis of the operation of an energy system consists in discovering and interpreting the signs of malfunction of the equipment that composes it, and in quantifying their effects in terms of additional consumption of resources; i.e., where, how, and by how much the overall consumption of resources can be reduced while holding constant the quantity and product specifications of the system. For thermoelectric plants, a malfunction of certain equipment, such as boilers, will have a major economic impact, even for small deviations in performance with respect to what is expected by design. A good diagnosis of the operation must be preceded by a conceptual development that explains the origin of the increase in consumption.
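As a small illustration of the maintenance policy in Table 2, the function below returns the maintenance type due at a given number of operating hours, under the hypothetical assumption that each type is due at integer multiples of its period and that the largest applicable type takes precedence:

```python
# Maintenance periods in operating hours (OH), from Table 2.
PERIODS = [("Major", 32_000), ("Intermediate", 16_000),
           ("Minor", 8_000), ("Inspection", 4_000)]

def maintenance_due(operating_hours):
    """Return the highest-ranked maintenance due at this OH milestone, if any."""
    for name, period in PERIODS:  # checked from largest to smallest period
        if operating_hours % period == 0:
            return name
    return None

assert maintenance_due(8_000) == "Minor"
assert maintenance_due(32_000) == "Major"
assert maintenance_due(5_000) is None
```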

Figure 9. Variable Annual Maintenance Cost vs. Hours of Operation. Source: The authors

4. Conclusions

The stochastic frontier production function shown in this work allows an efficiency study of different thermal generation plants. Additionally, the analysis of panel data allows the evaluation of the variation in time of the technical inefficiency of electricity production. In general, 81% of the thermoelectric units are technically inefficient in their operation; one reason for this technical inefficiency is related to maintenance levels. The evaluation of technical efficiency is a useful indicator for monitoring the operation of thermoelectric units and evaluating their performance, in order to identify which units require particular attention to achieve maximum performance in their operation. Therefore, it is important to conduct a proper diagnosis with existing methods (conventional, energy simulation and thermoeconomics) for the proper functioning of the energy system.




Acknowledgements

The authors thank Flora Hammer for comments and suggestions that improved this paper. At the same time, we acknowledge all the support provided by the National Council of Science and Technology of Mexico (CONACYT) through the research program “Redes Temáticas de Investigación,” as well as by the Mexican Logistics and Supply Chain Association (AML) and the Mexican Institute of Transportation (IMT).

References

[1] Salazar, M.S., Una estrategia para mejorar la administración de los inventarios de diesel en las centrales termoeléctricas: un estudio de caso. MSc. Thesis, Engineering Systems Department, National Autonomous University of Mexico, Mexico, 100 P, 2015.
[2] Secretaría de Energía, Prospectiva del sector eléctrico 2013-2027, Mexico, 2013. [Online], [date of reference: December 19th of 2014]. Available at: http://sener.gob.mx/res/PE_y_DT/pub/2013/Prospectiva_del_Sector_Electrico_2013-2027.pdf
[3] González-Santaló, J.M., La generación eléctrica a partir de combustibles fósiles. Boletín IIE, Octubre-diciembre 2009. [Online], [date of reference: December 19th of 2014]. Available at: http://www.iie.org.mx/boletin042009/divulga.pdf
[4] Farrell, M., The measurement of productive efficiency. Journal of the Royal Statistical Society, 120 (3), pp. 253-290, 1957. DOI: 10.2307/2343100
[5] Meeusen, W. and Van den Broeck, J., Efficiency estimation from Cobb-Douglas production functions with composed error. International Economic Review, 18, pp. 435-444, 1977. DOI: 10.2307/2525757
[6] Aigner, D., Lovell, C. and Schmidt, P., Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6, pp. 21-37, 1977. DOI: 10.1016/0304-4076(77)90052-5
[7] Forsund, F., Lovell, C. and Schmidt, P., A survey of frontier production functions and of their relationship to efficiency measurement. Journal of Econometrics, 13, pp. 5-25, 1980. DOI: 10.1016/0304-4076(80)90040-8
[8] Schmidt, P., Frontier production functions. Econometric Reviews, 4, pp. 289-328, 1986. DOI: 10.1080/07474938608800089
[9] Bauer, P., Recent developments in the econometric estimation of frontiers. Journal of Econometrics, 46, pp. 36-56, 1990. DOI: 10.1016/0304-4076(90)90046-V
[10] Greene, W., The econometric approach to efficiency analysis. In: Fried, H., Lovell, C. and Schmidt, S. (Eds.), The Measurement of Productive Efficiency. New York: Oxford University Press, 1993.
[11] Reifschneider, D. and Stevenson, R., Systematic departures from the frontier: A framework for the analysis of firm inefficiency. International Economic Review, 32 (3), pp. 715-723, 1991. DOI: 10.2307/2527115
[12] Battese, G. and Coelli, T., A stochastic frontier production function incorporating a model for technical inefficiency effects. Working Papers in Econometrics and Applied Statistics, Department of Econometrics, University of New England, 69, pp. 1-22, 1995.
[13] Battese, G. and Coelli, T., A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empirical Economics, 20, pp. 325-332, 1995. DOI: 10.1007/BF01205442
[14] Berndt, E.R. and Wood, D.O., Technology, prices and the derived demand for energy. The Review of Economics and Statistics, 57, pp. 259-268, 1975. DOI: 10.2307/1923910
[15] Díaz-Serna, F.J., Optimización de la operación y evaluación de la eficiencia técnica de una empresa de generación hidroeléctrica en mercados de corto plazo. PhD Thesis, Universidad Nacional de Colombia, Medellín, 186 P, 2011. [Online], [date of reference: December 19th of 2014]. Available at: http://www.bdigital.unal.edu.co/3683/#sthash.KcLJiF0R.dpuf
[16] Garzón, P. and Pellicer, E., Organizational efficiency of consulting engineering firms: Proposal of a performance indicator. DYNA, 76 (160), pp. 17-26, 2010. [Online], [date of reference: December 19th of 2014]. Available at: http://www.revistas.unal.edu.co/index.php/dyna/article/view/13463/14365
[17] Calderero-Gutierrez, A., Fernandez-Macho, J., Kuittinen, H. et al., Innovación en las regiones europeas. Una alternativa metodológica y actualizada del RIS. DYNA, 84 (6), pp. 501-516, 2009.
[18] Diaz, J., La eficiencia técnica como un nuevo criterio de optimización para la generación hidroeléctrica a corto plazo. DYNA, 76 (157), pp. 91-100, 2009. [Online], [date of reference: December 19th of 2014]. Available at: http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S001273532009000100009&lng=es&nrm=iso

J.A. Marmolejo-Saucedo, is a professor in the Faculty of Engineering at Universidad Anahuac Mexico Norte, Mexico. He has a PhD in Engineering with a specialization in Operations Research from the National Autonomous University of Mexico (UNAM), Mexico, and is currently a member of the National System of Researchers of the National Council of Science and Technology (CONACYT) of Mexico. He is on the Board of the Mexican Society of Operations Research, and a member of the Scientific Committee of the International Congress on Logistics and Supply Chain (CiLOG) organized by the Mexican Logistics and Supply Chain Association (AML). His areas of interest are large-scale optimization and mathematical modeling of power systems and logistical problems in the supply chain.

R. Rodriguez-Aguilar, has a PhD in Economics with a specialization in mathematical finance from the School of Economics at the National Polytechnic Institute, Mexico. He has a MSc degree in Engineering and Statistics granted by the National Autonomous University of Mexico, and a MSc degree in Public Policy from the Instituto Tecnológico y de Estudios Superiores de Monterrey, Mexico. Dr. Rodriguez is a member of the Mexican Mathematical Society, the National Association of Economists and the Economic Modeling Network EcoMod. His areas of interest are stochastic optimization, mathematical finance and econometrics.

M.G. Cedillo-Campos, is a professor in Logistics Systems Dynamics and Founding Chairman of the Mexican Logistics and Supply Chain Association (AML), Mexico. Dr. Cedillo is a National Researcher (National Council of Science and Technology of Mexico - CONACYT), Innovation Award 2012 (UANL-FIME) and National Logistics Award 2012. In 2004, he received with honors a PhD in Logistics Systems Dynamics from the University of Paris, France. Recently, he collaborated as a Visiting Researcher at the Zaragoza Logistics Center as well as at Georgia Tech Panama. He works in logistics systems analysis and modeling, risk analysis, and supply chain management, which are the subjects he teaches and researches in different prestigious universities in Mexico and abroad. Dr. Cedillo is the Scientific Chairman of the International Congress on Logistics and Supply Chain (CiLOG) organized by the Mexican Logistics and Supply Chain Association (AML), and coordinator of the National Logistics Research Network in Mexico supported by CONACYT.

M.S. Salazar-Martínez, has a degree in Electrical Electronic Engineering from the National Autonomous University of Mexico. Currently, she works at the Federal Electricity Commission in Mexico. Her functions include planning the supply of diesel and fuel oil to the thermoelectric power plants of the National Electric Network using forecasting methods. Her areas of interest are the modeling of planning problems using advanced techniques.



Optimization of the distribution of steel pipes using a mathematical model

Miguel Mata-Pérez a & Jania Astrid Saucedo-Martínez b

a Posgrado en Logística y Cadena de Suministro, Universidad Autónoma de Nuevo León, México. miguel.matapr@uanl.edu.mx
b Posgrado en Logística y Cadena de Suministro, Universidad Autónoma de Nuevo León, México. jania.saucedomrt@uanl.edu.mx

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract
Distribution is one of the most important processes in a supply chain, given that it represents up to two thirds of company logistics costs and up to 20% of the total cost of products. For that reason, it is essential to optimize distribution costs. A steel producer located in Monterrey distributes its products to different parts of Mexico. Currently, the distribution is carried out through empirical knowledge, underusing resources and generating unnecessary costs; the aim is therefore to undertake the distribution process more efficiently. This paper presents an optimization model based on the vehicle routing problem (VRP) for the distribution of heavy pipes, taking into account the company's own characteristics, such as: a rented heterogeneous fleet, shipments of multiple products, split deliveries and open cycles (meaning that the routes need not end at the depot).
Keywords: Vehicle Routing Problem (VRP); combinatorial optimization; distribution of heavy pipes.

Optimización de la distribución de tubería de acero mediante un modelo matemático

Resumen
El proceso de distribución suele representar más de dos terceras partes de los costos logísticos de la compañía y más del 20% del costo total de los bienes transportados. Es por ello que es vital para la empresa optimizar dicho costo. Una empresa productora de aceros ubicada en Monterrey debe distribuir sus productos a múltiples clientes localizados a lo largo del país. En la actualidad dicha actividad es realizada mediante conocimientos empíricos, ocasionando que se subutilicen los recursos y generando altos costos de distribución. La empresa requiere realizar dicho proceso en forma más eficiente. Este artículo presenta un modelo de optimización basado en el problema de ruteo de vehículos (VRP) para la distribución de tubería pesada, tomando en cuenta las características propias de la empresa, tales como: flota heterogénea y rentada, embarque de múltiples productos, entregas divididas y ciclos abiertos (las rutas no necesariamente terminan en el depósito).
Palabras clave: Problema de ruteo de vehículos (VRP); optimización combinatoria; distribución de tubería pesada.

1. Introduction

Transport operations and product distribution represent up to two-thirds of company logistics costs [2] and up to 20% of the total cost of transported products [13]. In many cases, transport networks are too rigid and unable to absorb variations in market demand [2]. This has prompted outsourcing to transportation and distribution businesses that offer greater flexibility and more efficient delivery management at competitive prices.

Given a set of products and a set of customers with well-defined demands, the distribution process consists of the company deciding whether deliveries can be complete or need to be split, the best times for the deliveries, the kinds of vehicles to be used, and the complete supply path, among other possible decisions. The distribution process of every company usually entails special characteristics, in accordance with the individual features of each company.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 69-75. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51153



Figure 1. Kinds of pipe. Source: The company.
Figure 2. Mexico divided by areas. Source: The authors.

It is important to note that the optimal configuration for distribution is affected by the variety of destinations, the diversity of products offered, and demand variability. Hence, it is common in practice to find problems such as underused transport, inefficient routes, and extra costs for loading and unloading. Route planning is therefore one of the main problems in the optimization of transport logistics operations; its main objective is to find an appropriate balance between the cost of this activity and its contribution to the level of service specified by the company. In this work, we present a combinatorial optimization model based on the VRP for a real distribution company that supplies its products all over the country.

2. Description of the problem

A steel producer located in Monterrey—which distributes its products nationally—welds pipes from hot- or cold-rolled steel sheet, with different thicknesses and hardnesses, to provide products such as mechanical tubing, conduction, conduit, oil-field and thin-wall pipe, etc. (see Fig. 1). The company concentrates its operations in a single plant. In this paper, we focus on the distribution of the steel pipe all over the country, so all the vehicles used must start their routes at this plant.

By company policy, Mexico is divided into thirteen areas, each composed of one or more states (see Fig. 2). According to these areas, the route of each vehicle must not go beyond a single zone.

The company does not have a fleet for distribution; instead, it rents the necessary vehicles from several freight companies. This presupposes that the availability of vehicles is unlimited. Since the vehicles are not company property, there is no obligation for them to return to the plant once they finish their routes. Another important feature of the fleet is that the vehicles have different load capacities (9 vehicle types, with capacities of 3.5, 6, 15, 22, 27, 28, 30, 32 and 36 tons, respectively). It must also be taken into account that some vehicles cannot visit certain areas, due to transport authority regulations regarding the kind of road, or due to physical restrictions a customer may present to receiving a certain kind of vehicle on their premises.

One of the features of our problem is that the company does not stick to delivery schedules, which simplifies the solution by discarding penalties for partial or late deliveries. This also allows each customer order to be carried on more than one vehicle, regardless of its characteristics; this is known as a "split delivery." The fixed cost per trip includes three deliveries; however, the same vehicle can make more than three deliveries, incurring an additional cost for each one. Since the pipes are very heavy, it is important to maximize the load capacity of each vehicle in terms of weight, whereas the volume of the material is not a restriction for the arrangement inside the vehicle.
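As a small illustration of this cost rule, the following sketch charges a base rental fee covering up to three deliveries plus a surcharge per extra delivery; the function name and fee values are ours, for illustration only, not the company's actual rates.

# Trip-cost rule described above: the rental fee covers up to three
# deliveries; each additional delivery incurs an extra cost (illustrative fees).
def trip_cost(base_fee: float, extra_fee: float, deliveries: int) -> float:
    return base_fee + extra_fee * max(0, deliveries - 3)

print(trip_cost(1000.0, 150.0, 5))  # two extra deliveries -> 1300.0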

3. Background

Transportation problems are a diverse set of cases that some authors have attempted to group according to their most important characteristics; this allows the formulation of mathematical models that facilitate decision-making in companies that use some kind of transport. Furthermore, by adopting such models, their solutions usually have a significant impact on cost and on the level of customer service. Transportation problems consist basically of assigning routes to vehicles in order to deliver or pick up products. The well-known Vehicle Routing Problem (VRP) generalizes a large set of problems concerning the distribution of products or services to a set of clients located at specific points. The VRP has been studied extensively in the literature; for a deeper review, see [9,10]. It is important to mention that the VRP is NP-Hard [9]. There are many exact methods for solving the VRP [11] and an increasing number of heuristic methods for approximating the optimal solution [8,12].



All VRPs are determined by the functional constraints that must be satisfied by the vehicles and by the conditions and operating standards of the enterprises. Depending on their characteristics, [4,6] introduce the following VRP types:
 Size of fleet: a single vehicle, or a limited or unlimited number of vehicles.
 Demand: stochastic, deterministic, dynamic, partially satisfied, fixed for all clients or variable depending on the client.
 Products: multiple products or a single product type.
 Schedule: unrestricted, or with time windows (beginning only, end only, beginning and end, flexible, multiple).
 Fleet type: homogeneous or heterogeneous.
 Depots: single, multiple, or with intermediate replenishment.
 End of route: return to the depot (closed cycle) or not (open cycle).
 Communication network: whether there is a direct path between two clients, or whether these should be considered different routes.
 Costs: fixed or variable.
 Capacity of vehicles: limited and identical, limited and different, or unlimited.
 Number of routes per vehicle: limited to a single route, limited to a certain number, or unlimited.
 Objective: minimize costs, minimize the number of vehicles, minimize distance traveled, or minimize time.
Thus, the combination of different characteristics determines an appropriate model for any possible specific situation. The study of the basic models has allowed the development of techniques applicable to increasingly complex cases. For their historical significance, a summary of some of the most important routing problems follows. One of the first studies to treat this kind of problem dates back to 1959 and dealt with the dispatching of trucks for fuel delivery to service stations [5]. One of the best known is the Traveling Salesman Problem (TSP), in which a salesman must visit a particular number of customers once each, and then return to where he started his trip [1]. Later a variant appeared, known as the m-TSP (Multiple Traveling Salesmen Problem), in which there are m salesmen who must attend a certain number of clients, each of whom can be visited only once, and each salesman must return to the starting point when his or her trip is finished [3]. See an illustration of the m-TSP in Fig. 3.
The Vehicle Routing Problem (VRP) is a generalization of the m-TSP, where a demand is associated with each customer and a capacity is defined for each vehicle. In the simplest VRP, a fleet of identical vehicles makes deliveries to customers from a single depot (see Fig. 4). The following list presents some of the best-known VRP variants; for a more comprehensive list, see [14].
 Asymmetric Vehicle Routing Problem (AVRP): The duration of the trip or the distance between two points depends on the direction of the path.
 Capacitated Vehicle Routing Problem (CVRP): The vehicle has a carrying capacity that must not be exceeded.

Figure 3. Multiple Traveling Salesmen Problem (m-TSP). Source: Bektas, 2006.

Figure 4. Vehicle Routing Problem. Source: The authors.

 Fleet Size and Mix Vehicle Routing Problem (FSMVRP): Fixed costs depend on the type of vehicle; variable costs are the same for all vehicles. The problem does not impose restrictions on the number of vehicles.
 Vehicle Routing Problem with Heterogeneous Fleet (VRPHE): Fixed and variable costs depend on the vehicle type. The problem does not impose restrictions on the number of vehicles.
 Pickup and Delivery Problem (PDP): The same vehicle must pick up and carry the goods from one place in the network to another.
 Min-max Vehicle Routing Problem (Min-max VRP): Tries to minimize the length of the longest route.
 Vehicle Routing Problem with Precedence Constraints (VRPPC): Before visiting a particular client, the vehicle must visit a previous set of them.
 Multiple Depot Vehicle Routing Problem (MDVRP): There are several depots, from which the vehicles assigned to them depart and return.
 Open Vehicle Routing Problem (OVRP): When transportation is outsourced, the vehicles have no reason to return to the plant.
 Dynamic Vehicle Routing Problem (DVRP): A set of problems where some parameters depend on the time variable.
 Stochastic Vehicle Routing Problem (SVRP): A set of problems where some parameters have some uncertainty.
 Vehicle Routing Problem with Multiple Use of Vehicles (VRPM): Each vehicle can take more than one route over a period of time.
 Vehicle Routing Problem with Split Delivery (VRPSDV): A client's demand can be covered by several vehicles.
 Vehicle Routing Problem with Time Windows (VRPTW): Every customer has a distribution or delivery schedule; schedules may also apply at the plants.

Concerning our problem, we found similarities in the literature with other VRPs that have already been formulated; however, none of them includes all of the company's characteristics, which is why a custom mathematical model is required. Our problem presents a heterogeneous fleet, because the company rents different kinds of vehicles according to its needs. Deliveries can be split, since it is not uncommon for demands to exceed vehicle capacity. The company offers multiple products, and because the fleet is hired, it is not necessary for the vehicles to return to the depot (open cycles).

4. Mathematical model proposed

Imitating the terminology of the classical models (see, for example, [7]), we call our proposed model the Open Vehicle Routing Problem with Heterogeneous Fleet, Split Deliveries and Multiple Products (OVRPHFSDMP). It will be assumed that all data are non-negative integers. The mathematical model for the OVRPHFSDMP is as follows.

4.1. Parameters

$n$: Number of clients with positive demand. Customers are indexed $j$ from 1 to $n$, and the index 0 represents the depot (starting point).
$m$: Number of vehicles available.
$p$: Number of types of packages that must be carried to the customers.
$Q_v$: Weight load capacity (in tons) of vehicle $v$.
$w_t$: Weight (in tons) of each package of type $t$.
$d_{jt}$: Number of packages of type $t$ required by client $j$.
$a_{jv}$: Accessibility of customer $j$ for vehicle $v$ (takes the value 1 if it is possible to visit customer $j$ with vehicle $v$, and 0 otherwise).
$M_j$: Number large enough to ensure that the vehicles assigned to customer $j$ will satisfy his demand.
$c_{jv}$: Cost for vehicle $v$ to travel from the depot to client $j$.
$S$: Any subset of clients; used to eliminate subtours.

4.2. Variables

$x_{ijv}$: Binary; 1 if vehicle $v$ travels from $i$ to $j$, 0 otherwise.
$y_{jtv}$: Positive integer; number of packages of type $t$ carried to customer $j$ by vehicle $v$.
$e_v$: Positive integer; number of deliveries made by vehicle $v$.
$g_v$: Positive integer; number of extra deliveries made by vehicle $v$.
$C_v$: Positive; total cost of vehicle $v$, calculated as the sum of all costs and all extra costs.

4.3. Model

$$\min \sum_{v=1}^{m} C_v \qquad (1)$$

s.t.:

$$C_v \geq c_{jv} \sum_{i=0}^{n} x_{ijv} + 0.1\, c_{jv}\, g_v \quad \forall j, v \qquad (2)$$

$$e_v = \sum_{i=0}^{n} \sum_{j=1}^{n} x_{ijv} \quad \forall v \qquad (3)$$

$$g_v \geq e_v - 3 \quad \forall v \qquad (4)$$

$$\sum_{j=1}^{n} x_{0jv} \leq 1, \quad \sum_{i=0}^{n} x_{ijv} \leq 1 \quad \forall j, v \qquad (5)$$

$$\sum_{j=0}^{n} x_{kjv} \leq \sum_{i=0}^{n} x_{ikv} \quad \forall k \neq 0, v \qquad (6)$$

$$y_{jtv} \leq M_j \sum_{i=0}^{n} x_{ijv} \quad \forall j, t, v \qquad (7)$$

$$\sum_{v=1}^{m} y_{jtv} = d_{jt} \quad \forall j, t \qquad (8)$$

$$\sum_{j=1}^{n} \sum_{t=1}^{p} w_t\, y_{jtv} \leq Q_v \quad \forall v \qquad (9)$$

$$\sum_{i=0}^{n} x_{ijv} \leq a_{jv} \quad \forall j, v \qquad (10)$$

$$\sum_{i \in S} \sum_{j \in S} x_{ijv} \leq |S| - 1 \quad \forall v, \; S \subseteq \{1, \dots, n\}, \; |S| \geq 2 \qquad (11)$$

$$x_{ijv} \in \{0,1\}, \quad y_{jtv}, e_v, g_v \in \mathbb{Z}_{\geq 0}, \quad C_v \geq 0 \qquad (12)$$

Eq. (1) represents the objective function, which minimizes the sum of the total costs of all the vehicles used. The constraint in eq. (2) calculates the price the company must pay to use vehicle $v$, from the plant to the farthest destination, taking into account the fixed costs and the extra costs. Eq. (3) counts the number of customers that vehicle $v$ must visit, and eq. (4) calculates the number of extra deliveries when the vehicle visits more than three clients. Eq. (5) ensures that every vehicle used starts its route at the plant but does not need to return to it, as well as ensuring that each customer is visited at most once by the same vehicle. Eq. (6) allows the entry and exit of vehicles at a point $k$, on the condition that a trip is not made from a place that has not previously been visited. Eq. (7) assigns packages to the vehicles that will be used. Eq. (8) ensures that customer demand is satisfied. Eq. (9) ensures that the weight carried by a vehicle does not exceed its capacity. Eq. (10) states that vehicles are assigned only to customers they can access (i.e., due to infrastructure or transport authority regulations). Eq. (11) ensures that vehicles do not perform cycles. Eq. (12) defines the domains of the variables.
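To make the structure of the model concrete, the following is a minimal sketch of its core constraints in Python with the open-source PuLP library. The data, the variable names and the simplified per-arc cost (used here instead of eq. (2)'s fixed-plus-extra rule) are ours, for illustration only; a sketch under these assumptions, not the authors' implementation (the paper's cases were solved with GAMS).

# Minimal sketch of the OVRPHFSDMP constraints in PuLP. The data and the
# simplified per-arc cost (instead of eq. (2)'s fixed-plus-extra rule) are
# illustrative only, not the company's.
from itertools import combinations
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger, value

n, V, T = 3, 2, 1                       # clients 1..n, vehicles, package types
N0, N = range(n + 1), range(1, n + 1)   # node 0 is the depot
Q = {0: 10.0, 1: 6.0}                   # heterogeneous capacities Q_v (tons)
w = {0: 1.0}                            # weight w_t of one package of type t
d = {(1, 0): 4, (2, 0): 5, (3, 0): 3}   # demand d_jt (packages of type t for client j)
c = {(i, j): 1 + abs(i - j) for i in N0 for j in N0 if i != j}  # travel costs

prob = LpProblem("OVRPHFSDMP_sketch", LpMinimize)
x = {(i, j, v): LpVariable(f"x_{i}_{j}_{v}", cat=LpBinary)
     for i in N0 for j in N0 if i != j for v in range(V)}
y = {(j, t, v): LpVariable(f"y_{j}_{t}_{v}", lowBound=0, cat=LpInteger)
     for j in N for t in range(T) for v in range(V)}

prob += lpSum(c[i, j] * x[i, j, v] for (i, j, v) in x)  # open routes: returning is never forced
for v in range(V):
    prob += lpSum(x[0, j, v] for j in N) <= 1           # eq. (5): leave the depot at most once
    for k in N:
        prob += lpSum(x[i, k, v] for i in N0 if i != k) <= 1          # visit k at most once
        prob += (lpSum(x[k, j, v] for j in N0 if j != k)
                 <= lpSum(x[i, k, v] for i in N0 if i != k))          # eq. (6): leave k only if entered
    prob += lpSum(w[t] * y[j, t, v] for j in N for t in range(T)) <= Q[v]  # eq. (9): capacity
    for j in N:
        for t in range(T):               # eq. (7): load only on vehicles that visit j
            prob += y[j, t, v] <= d[j, t] * lpSum(x[i, j, v] for i in N0 if i != j)
    for r in range(2, n + 1):            # eq. (11): subtour elimination (enumerable for tiny n)
        for S in combinations(N, r):
            prob += lpSum(x[i, j, v] for i in S for j in S if i != j) <= len(S) - 1
for j in N:
    for t in range(T):                   # eq. (8): split deliveries cover the full demand
        prob += lpSum(y[j, t, v] for v in range(V)) == d[j, t]

prob.solve()
print("status:", prob.status, "total cost:", value(prob.objective))

Subtour constraints are enumerated explicitly here, which is only tractable for very small instances; larger cases require lazy separation of violated subsets.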

Table 1. Results for the real instances.
Case | Cost | Ton | C | P | V | ACU
1 | 8,138.00 | 48.79 | 23 | 24 | 13 | 15%
2 | 24,658.50 | 840.07 | 24 | 79 | 29 | 34%
3 | 14,961.00 | 560.66 | 37 | 125 | 28 | 43%
4 | 14,882.00 | 324.24 | 25 | 32 | 18 | 34%
5 | 4,440.50 | 27.85 | 17 | 20 | 8 | 14%
6 | 7,027.00 | 198.72 | 30 | 39 | 11 | 54%
7 | 3,433.00 | 143.34 | 10 | 20 | 9 | 37%
8 | 5,172.20 | 146.65 | 16 | 26 | 9 | 44%
9 | 6,700.00 | 221.95 | 16 | 21 | 13 | 26%
10 | 17,154.50 | 364.61 | 22 | 35 | 18 | 36%

Source: The authors.

5. Computational experimentation

To test the model, 60 cases were considered: 10 taken from the company's history and 50 generated probabilistically, taking into account the behavior of the real cases. To solve all the cases, we used GAMS (version 23.7.3) running on a Dell Precision R5400 rack workstation with two Quad-Core Xeon E5420 processors at 2.50 GHz and 4 GB of RAM. It is important to remember that the company has divided the country into thirteen areas to facilitate distribution; therefore, if all the areas must be served on a given day, we split the problem into 13 independent problems and then integrate the results. For reasons of confidentiality, we were not able to use the actual prices of the vehicles; instead, we assigned rates that behave similarly (preserving the variation between vehicle costs).
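A rough sketch of this decomposition by commercial area is shown below; the order data are invented, and solve_zone is a hypothetical placeholder standing in for building and solving the MILP of Section 4.

# Sketch of the decomposition by commercial area: orders are grouped by zone
# and each zone is routed independently; solve_zone is a placeholder only.
from collections import defaultdict

orders = [("client_a", 1, 12.0), ("client_b", 1, 7.5), ("client_c", 5, 20.0)]  # (client, zone, tons) - invented

by_zone = defaultdict(list)
for client, zone, tons in orders:
    by_zone[zone].append((client, tons))

def solve_zone(zone_orders):
    # Placeholder: a real implementation would build and solve the MILP here.
    return {"clients": [c for c, _ in zone_orders],
            "tons": sum(t for _, t in zone_orders)}

solutions = {zone: solve_zone(zone_orders) for zone, zone_orders in by_zone.items()}
print(solutions)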

In Table 1 we present the results for the real cases. The first column gives the case number and the second the total cost obtained; the third column presents the total tonnage to be sent; the fourth, fifth and sixth columns give the number of customers, packages and vehicles involved in the case, respectively; and the seventh column shows the average capacity used (ACU) of the vehicles in the solution.

Usually, when the company receives an order, it sends a vehicle to satisfy the demand regardless of the size of the order or the average vehicle capacity. Although for these cases we do not have the company's own routing at our disposal, we know that the ACU in the historical data is around 30%, meaning that the routing the model offers represents an improvement of around 4% in ACU. Based on these results, we proposed that the company consolidate its orders, given that the ACU was low. This is possible because customers do not penalize the company for deliveries delayed by up to one week. In the simulated cases we suppose that orders can be consolidated, which is why the ACU improves significantly.

We created a random generator of cases, based on data from the real cases, to determine the demands by means of probability functions. For each client and each package type, a random real number $r \in [0,1]$ is generated. The demand is then assigned as follows:

$$d = \begin{cases} 0 & \text{if } r \in [0, 0.5) \\ U[1, 9] & \text{if } r \in [0.5, 0.95) \\ U[10, 15] & \text{if } r \in [0.95, 0.98) \\ U[16, 30] & \text{if } r \in [0.98, 1] \end{cases}$$

where $U[a, b]$ is a random integer between $a$ and $b$. The quantities for the demands and the thresholds for the partitioning have been selected imitating the frequencies found in practice.
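A minimal sketch of this demand rule in Python follows; the thresholds are those of the paper, while the function name and seeding are ours.

# Demand rule used by the case generator; U[a, b] is implemented with randint.
import random

def generate_demand(rng: random.Random) -> int:
    """Draw one (client, package type) demand following the piecewise rule."""
    r = rng.random()                # r ~ U[0, 1)
    if r < 0.5:
        return 0                    # half of the pairs order nothing
    if r < 0.95:
        return rng.randint(1, 9)    # most positive demands are small
    if r < 0.98:
        return rng.randint(10, 15)
    return rng.randint(16, 30)      # rare, large orders

rng = random.Random(42)
print([generate_demand(rng) for _ in range(20)])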

We proposed five types of cases, gradually varying the number of clients, packages and vehicles (see Table 2). Table 3 shows twenty representative cases (four of each kind). The first column gives the case number; the second column shows the size of the case (Clients/Packages/Vehicles); the third column indicates the total weight to be sent; the fourth column shows the average capacity used in the solution; finally, the fifth column presents the computation time, in seconds, needed to solve each case.


Table 2. Characteristics of the generated cases.
Instance type | Clients | Packages | Vehicles
1 | 5 | 5 | 6
2 | 5 | 8 | 6
3 | 8 | 8 | 9
4 | 10 | 10 | 12
5 | 12 | 15 | 15
Source: The authors.

Table 3. Results for the generated instances.
Case | C/P/V | Ton | ACU | Secs
1 | 5/5/6 | 60.29 | 97% | 0.84
4 | 5/5/6 | 11.43 | 38% | 0.19
5 | 5/5/6 | 22.47 | 75% | 0.14
6 | 5/5/6 | 61.58 | 99% | 0.67
12 | 5/8/6 | 96.10 | 98% | 28.53
13 | 5/8/6 | 43.51 | 73% | 0.20
15 | 5/8/6 | 39.19 | 65% | 0.22
16 | 5/8/6 | 65.26 | 99% | 11.63
21 | 8/8/6 | 65.30 | 99% | 975.79
22 | 8/8/6 | 51.33 | 86% | 644.22
27 | 8/8/6 | 61.09 | 99% | 1,000.66
30 | 8/8/6 | 103.26 | 84% | 1,000.61
31 | 10/10/12 | 158.19 | 99% | 23,759.56
35 | 10/10/12 | 155.91 | 99% | 17,087.36
36 | 10/10/12 | 102.24 | 85% | 19,548.61
38 | 10/10/12 | 175.02 | 95% | 24,679.61
41 | 12/15/15 | 312.90 | 99% | 49,886.19
48 | 12/15/15 | 262.00 | 94% | 15,479.63
49 | 12/15/15 | 227.37 | 92% | 270,071.09
50 | 12/15/15 | 194.54 | 91% | 35,217.33
Source: The authors.

Note that all the solutions found for the 50 generated cases (Table 3) are optimal; however, as can also be seen, the computation time increases exponentially with the size of the case. We expected the time to increase with the number of clients, kinds of packages and vehicles used, but with respect to the ACU we were not able to determine any pattern.

6. Conclusions

The problem studied in this work has great practical significance, because the distribution process is one of the principal components of any supply chain and is directly related to costs, productivity and business performance. The main contribution of this work is a mathematical model that helps the decision maker adapt to different scenarios, investing less time and producing better solutions. The tool developed is a mixed-integer linear programming model for solving vehicle routing problems with a heterogeneous fleet not belonging to the company, split deliveries and multiple products (OVRPHFSDMP, for Open Vehicle Routing Problem with Heterogeneous Fleet, Split Deliveries and Multiple Products). The model has been tested on real cases from an important company that produces and distributes steel pipes across the country, demonstrating that it adequately simulates reality, offering effective solutions and a plausible gain in the distribution costs for the company.

References

[1] Applegate, D.L., Bixby, R.M., Chvátal, V. and Cook, W.J., The traveling salesman problem. Journal of Computer and System Sciences, 72 (4), pp. 509-546, 2006.
[2] Ballou, R., Business logistics management: Supply chain management. Planning, organizing, and controlling the supply chain. Weatherhead, United Kingdom: Prentice-Hall International, 2004.
[3] Bektas, T., The multiple traveling salesman problem: An overview of formulations and solution procedures. Omega, 34, pp. 209-219, 2006. DOI: 10.1016/j.omega.2004.10.004
[4] Bodin, L. and Golden, B., Routing and scheduling: Classification in vehicle routing and scheduling. Networks, 11, pp. 97-108, 1981. DOI: 10.1002/net.3230110204
[5] Dantzig, G.B. and Ramser, J.H., The truck dispatching problem. Management Science, 6 (1), pp. 80-91, 1959. DOI: 10.1287/mnsc.6.1.80
[6] Desrochers, M., Lenstra, J.K. and Savelsbergh, M.W.P., A classification scheme for vehicle routing and scheduling problems. European Journal of Operational Research, 47, pp. 75-85, 1990. DOI: 10.1016/0377-2217(90)90007-x
[7] Dror, M. and Trudeau, P., Split delivery routing. Naval Research Logistics, 37, pp. 383-402, 1990. DOI: 10.1002/nav.3800370304
[8] Galván, S., Arias, J. and Lamos, H., Optimización por simulación basado en EPSO para el problema de ruteo de vehículos con demandas estocásticas. DYNA, 80 (179), pp. 60-69, 2013.
[9] Laporte, G., Fifty years of vehicle routing. Journal of Transportation Science, 43 (4), pp. 408-416, 2009. DOI: 10.1287/trsc.1090.0301
[10] Laporte, G., The vehicle routing problem: An overview of exact and approximate algorithms. European Journal of Operational Research, 59, pp. 345-358, 1992. DOI: 10.1016/0377-2217(92)90192-C
[11] Laporte, G. and Nobert, Y., Exact algorithms for the vehicle routing problem. Annals of Discrete Mathematics, 31, pp. 147-184, 1987. DOI: 10.1016/s0304-0208(08)73235-3
[12] Sepúlveda, J., Escobar, J.W. and Adarme-Jaimes, W., An algorithm for the routing problem with split deliveries and time windows (SDVRPTW) applied on retail SME distribution activities. DYNA, 81 (187), pp. 223-231, 2014. DOI: 10.15446/dyna.v81n187.46104
[13] Toth, P. and Vigo, D., Models, relaxations and exact approaches for the capacitated vehicle routing. Bologna: Ediciones SIAM, 2002.
[14] Yepes, V., Las redes de distribución como elementos de ventaja competitiva. Qualitas Hodie, 76, pp. 30-33, 2002.

M. Mata-Pérez, completed a BSc. in Applied Mathematics at UAQ, Mexico in 2002, and a MSc. degree and PhD. in Engineering Systems at UANL, Mexico. He is an expert in mathematical modeling and optimization of large-scale systems. He is currently a full professor in the Logistics and Supply Chain Program at UANL, Mexico.

J.A. Saucedo-Martínez, completed her BSc. in Mathematics at UANL, Mexico in 2005, a MSc. degree in Engineering Systems at UANL, Mexico, and a PhD in Applied Mathematics at UNESP, Brazil, in 2012. She is an expert in mathematical modeling and optimization of large-scale systems. She is a full professor in the Logistics and Supply Chain Program at UANL, Mexico.




Effects of management commitment and organization of work teams on the benefits of Kaizen: Planning stage

Midiala Oropesa-Vento a, Jorge Luis García-Alcaraz b, Leonardo Rivera c & Diego F. Manotas d

a Instituto de Ingeniería y Tecnología, Universidad Autónoma de Ciudad Juárez, México. al132804@alumnos.uacj.mx
b Instituto de Ingeniería y Tecnología, Universidad Autónoma de Ciudad Juárez, México. jorge.garcia@uacj.mx
c,d Escuela de Ingeniería Industrial, Universidad del Valle, Colombia. leonardo.rivera.c@correounivalle.edu.co; diego.manotas@correounivalle.edu.co

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract
This paper presents an analysis of the effects of management commitment and the organization of work teams on the benefits of implementing Kaizen in industrial enterprises during the planning stage. To gather information, 200 questionnaires were administered in 68 companies distributed across the states of Tabasco, Sinaloa and Chihuahua in Mexico and in the province of Camagüey, Cuba. We used the partial least squares methodology with the WarpPLS 4.0 software to develop a structural equation model that explains these effects. The results show that a high level of managerial commitment has a positive impact on the profits and competitiveness of companies. We also found that the organization of work teams has a positive impact on competitive benefits and these, in turn, on economic benefits. As a result of this study, we present the impact of certain critical success factors of Kaizen on the benefits of its implementation, which is a key factor in its sustainability over time.
Keywords: kaizen, management commitment, coordination of teams, partial least squares (PLS).

Efectos del compromiso gerencial y organización de equipos de trabajo en los beneficios del Kaizen: Etapa de planeación

Resumen
Este trabajo presenta un análisis de los efectos que tienen el compromiso gerencial y la organización de equipos de trabajo en los beneficios de la implementación del kaizen en las empresas industriales durante su etapa de planeación. Para buscar la información se aplicaron 200 cuestionarios a 68 empresas distribuidas en los estados de Tabasco, Sinaloa y Chihuahua en México y también en la provincia de Camagüey, Cuba. Para obtener un modelo de ecuaciones estructurales explicativo de los efectos, se utilizó la metodología de mínimos cuadrados parciales usando WarpPLS 4.0. Los resultados obtenidos muestran que cuando existe un alto compromiso gerencial se tienen impactos positivos en los beneficios económicos y competitivos de las empresas. Asimismo, la organización de equipos de trabajo tiene impactos positivos sobre los beneficios competitivos y estos a su vez sobre los beneficios económicos obtenidos. Como resultado de este estudio se muestra el impacto que tienen determinados factores críticos de éxito del Kaizen en los beneficios de implementación del mismo, elemento primordial para su sostenibilidad en el tiempo.
Palabras clave: kaizen, compromiso gerencial, organización de equipos de trabajo, mínimos cuadrados parciales (PLS).

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 76-84. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51157

1. Introduction

Over the years, Western industries have managed their businesses pursuing short-term goals. This practice prevents them from seeing beyond their immediate needs and holds them in short-term planning processes. This is short-sighted and limits the levels of quality and profitability they can reach. According to the management teams of Japanese companies, the secret of the most successful companies in the world lies in having high quality standards for their products, processes, services and employees; therefore, quality is a philosophy that should be applied at all levels of an organization, and this requires a continuous improvement process that should have no end [1-3]. This process enables companies to view a wider horizon, to maintain a permanent pursuit of excellence and innovation, to increase their competitiveness and to reduce costs, guiding their efforts toward meeting the needs and expectations of customers, both internal and external.

Furthermore, this process of continuous improvement requires the manager to behave like a true leader in the organization, ensuring the participation of all employees and getting involved in all the processes of the supply chain. To do this, he must commit deeply to this work, since he is responsible for implementing the process and is the most important driving force of the company. To carry out this process of continuous improvement, whether in a particular department or across the whole company, it should be taken into consideration that the process must be economic (it should require less effort than the benefits it brings) and cumulative, in that each improvement realized opens the possibility of successive improvements while taking full advantage of the new level of performance.

This paper analyzes the impact that managerial commitment and teamwork have on the benefits of continuous improvement systems from the perspective of Kaizen. We also measure the perception of people at managerial levels around the issues of continuous improvement, and with this we validate the hypotheses enunciated in the paper. The paper is organized into five sections. After the introduction, Section 2 presents a review of the literature associated with Kaizen, Section 3 presents the design of this research, Section 4 shows the results, and finally Section 5 presents conclusions, limitations and future research.

2. Literature Review

2.1. Kaizen in industry

Companies can obtain significant competitive advantages by successfully implementing Kaizen. The elements that made Kaizen successful in the Toyota Production System are still valid; we could even argue that these elements make Kaizen even more relevant today than in the 70s and 80s, in a competitive environment where speed and efficiency are crucial [1,2]. Kaizen is reported to lead to higher quality and productivity, and it also helps to improve accountability and employee commitment [2,3]. These results have kept Kaizen a popular topic in companies around the world [3] and a staple in the scientific literature [3-11].

In today's agitated and uncertain economic environment, in which we are still feeling the aftermath of the financial crisis of 2008 and 2009, many Mexican industrial companies have initiated or increased their efforts to improve their operations. These companies have noticed that merely reducing their headcount is not enough, providing fertile ground for the application of philosophies such as Lean Manufacturing and Kaizen [12,13].

The success of Kaizen in companies is due to the fact that it involves every employee in the continuous improvement effort, taking advantage of their contributions to achieve small and gradual changes [14,15]. In this manner, Kaizen centers on the identification of problems, their root causes, the solutions that must be implemented, and the changes in standards and operational methods required to ensure that the problem does not occur again [16].

There are reports in the literature stating that, in established Kaizen programs, each employee submits between 25 and 30 suggestions every year, and that around 90% of them are implemented. Toyota is recognized as a leading company in the application of Kaizen: in 1999, one of its manufacturing facilities located in the United States reported that 7,000 employees submitted over 75,000 suggestions, of which about 99% were implemented [15].

In measuring the sustainability of Kaizen, research has tended to emphasize the critical success factors in order to measure their impact on the economic and competitive benefits of a company. Because the sustainability of Kaizen is measured through attributes or parameters that generate information, it requires all participants to be involved and committed, focused on common customer-satisfaction goals, in order to be effective, allowing the company to become more competitive. Therefore, it is essential to have clear and standardized attributes that let the organization know which factors have a greater impact on certain benefits; some of these are discussed below.

2.2. Management commitment and organization of work teams in the implementation of Kaizen

People in management positions show no apparent concern for the development of relationships between the management level and the members of the organization, through which the manager may have some positive influence on their behavior [17]. The strategic process, including the development, implementation and monitoring of strategies in the business, reflects the characteristics of the style of leadership in the company. Managers need to implement organizational changes to face new challenges, to ensure that the company adapts to and copes with new circumstances. However, the company and its individuals resist such changes in many ways [18].

Most managers tend to consider factors such as company profile, top management orientation, goals and objectives, and internal and external variables, which appear to be key to determining the success or failure of strategies in the organization. However, issues related to human factors are not considered, such as the development of management skills aimed at creating relationships between managers and people at lower levels in order to persuade and motivate the members of the organization, whose participation is essential for the successful development of the strategy [19].

Consequently, one of the main problems in large and medium industrial enterprises is the lack of leadership in those who have the responsibility to manage the company. Every industrialist has shown a certain degree of leadership, having created a company in the market conditions of the time [20]. Unfortunately, these people do not view leadership as a quality through which they can influence the human factor towards achieving the goals of the company [21].

The prevailing approach to leadership in senior management is to apply the policies and procedures that were useful for the company in the past and to assume that they will be useful in the near future. This approach is guided by the idea that the manager can act as a leader by exerting the power of his managerial position [22]. Other employees do not present their points of view because they perceive this as a high-risk attitude that they are not willing to take [23].

In the current study, activities related to management commitment and work teams, and their impact on company profits when implementing Kaizen, are determined, since the commitment of the members of the organization to the goals of a particular strategy is affected by the lack of motivation to achieve them. Management does not pay attention to the personal interests of individuals [3,5,20,22].

The type of managerial thinking that is based on rigid patterns considers economic, financial, production, technological and other factors as crucial, relegating to the background the relations that should exist between the different levels of the organization. This issue is the subject of lengthy debates, but limited or no action is taken if it affects the social and economic world of managers and company owners [24,25].

The change in the mindset of managers in these organizations should start with a new approach to leadership that leads to a change in their philosophical outlook. Thus, the manager will move from a traditional conception of leadership to a new vision that generates a wider and more accurate perspective of the environment in which the company works.

This new approach to leadership also requires the active and effective participation of the human factor, making the position of the group relevant for the company. This is an important variable because of the effect it has on the process of generating and executing strategies [19,21,23].

The adoption of new managerial behavior, with a new and broader perspective of the organization towards the future, on one hand, and a greater shift towards a sustained relationship with the people who make up the company, on the other, is presented as a management tool: greater closeness and trust between people at the management level and the other hierarchical levels will increase the level of commitment of the members of the company towards the goals of a particular strategy [26,27,28].

The increase in the level of commitment will be attained when it is understood that people are part of the organization, improving their sense of belonging. Employees will show more interest and effort not by compulsion but out of a sincere desire to contribute to achieving the company objectives [29-31].

There are many critical success factors of Kaizen presented in the literature. An in-depth literature search found 235 articles whose authors identified 51 critical success factors pertaining to the implementation of Kaizen and 41 types of benefits related to its implementation. On this basis, we set out to investigate how these critical success factors impact the profits of the company, and thus to provide solutions that can ensure greater sustainability of Kaizen over time.

To identify the impact of the critical success factors of Kaizen on benefits in its planning stage, we first took into account two variables for measurement, because of the complexity of the model as a whole. The selected variables are management commitment and organization of teams. On the other hand, it is also very important to identify the benefits, and in this sense the study provides insight into the impact of the two selected variables on company profits.

3. Research Design

3.1. Design of the questionnaire

As a basis for this research, we built a questionnaire taking into account the literature related to the critical success factors for the implementation of Kaizen and to the benefits for both customers and businesses. The search was conducted in electronic databases such as Elsevier, Scirus, JSTOR, ScienceDirect, Web of Science, Ebscohost, Ingenta, Springer and Google Scholar, as well as in academic textbooks. The questionnaire was validated by a panel of experts, two academics and three engineers in the area of continuous improvement and Lean Manufacturing, who independently assessed the relevance, consistency, adequacy, clarity, content, knowledge and structure of the written items. Subsequently, a pilot test was conducted by applying the questionnaire to 30 engineers working in manufacturing industries in Ciudad Juarez and Los Mochis, in areas of continuous improvement and lean manufacturing.

The questionnaire was divided into three sections, with a total of 51 questions related to critical success factors in the three stages of Kaizen: thirteen items for the Planning Stage, 22 items for the Execution Stage, and 16 items for the Control Stage. There are 41 questions related to benefits: 14 questions for economic benefits, 12 for competitive benefits and 13 for human resources benefits. Five questions related to demographic aspects were also included.

The questionnaire is answered on a Likert scale with values between 1 and 5. Table 1 presents the scale used in the questionnaire, answering the general question "Degree of implementation of the following activities" [32-40].

Table 1. Scale for questionnaire answers.
1 | 2 | 3 | 4 | 5
Never | Seldom | Sometimes | Frequently | Always
Source: Adapted from [41].

We chose four variables for this study: two related to management commitment and organization of work teams, and two more related to the economic and competitive benefits for the company. These variables or dimensions have 37 items in total, which are shown in the following list:

Dimension 1: Management Commitment (CompGer)
1. Management plans the acquisition of the resources required for improvement programs (financial resources, physical spaces, time).
2. Policies, objectives and structure of Kaizen events are established.
3. The opinions of company customers are taken into account to make changes at work.
4. A culture of continuous improvement is developed.
5. A structure is developed to determine faults.

Dimension 2: Organization of Work Teams (OrgEqT)
1. Groups are organized to propose suggestions for the improvement of products or processes, or to solve problems: quality circles, group suggestion programs, etc.
2. Commitment and motivation in the team are generated.
3. Support teams for running Kaizen are organized.
4. Heterogeneity of improvement teams.

Dimension 3: Economic Benefits (BenefEc)
1. Reduction of the percentage of defective products.
2. Decreases in unit manufacturing costs.
3. Reduction in the time elapsed between the reception of the order and the delivery to the customer, as much as possible.
4. Increased productivity.
5. The company meets deadlines and quantities as promised.
6. Reduction of material handling distance.
7. Reduction of waste in areas such as inventories, waiting times, transport and movement of workers.
8. Reduction of steps in the production process.
9. Profit maximization.
10. Decrease in failures of equipment and tools.
11. High productivity increases.
12. Reductions in design and operational cycles.
13. Improved cash flows.
14. Better economic balance.

Dimension 4: Competitive Benefits (BenefCo)
1. The company responds to customer needs.
2. An increased rate of introduction of new products is perceived.
3. Improved product quality.
4. The company responds to customer needs.
5. Employee skills are improved.
6. Reduction in machine setup times.
7. A systemic view of the organization is provided.
8. Process-oriented thinking is encouraged.
9. Improved product design is perceived.
10. Increased ability to compete in globalized markets and to continuously adapt to sudden market changes.
11. Strategic advantage over competitors.
12. Accumulated knowledge and experiences that are applicable to organizational processes.
13. Internal barriers are easily knocked down, thereby allowing powerful and authentic teamwork.
14. Capacity to adapt continuously to sudden changes in the market (related to social, cultural, economic, and political factors).

3.2. Data collection process

The final questionnaire was administered in print and electronically via the tool Survey Monkey. We sent it to 68 industrial enterprises in the regions of Villahermosa in Tabasco, Los Mochis in Sinaloa, and Ciudad Juarez in Chihuahua, and also in Camagüey, Cuba. The aim of the survey was explained and the enterprises were invited to participate in the research. Suitable candidates to answer the questionnaire were directors, managers, supervisors, engineers and technicians.

3.3. Information analysis and validation

A database was designed using the SPSS 21.0® software for descriptive analysis of the information. First, for validation purposes, rational validity was considered, according to [42]. Subsequently, tests for detecting missing values were performed and, given that the data came from an ordinal (Likert) scale, missing values were replaced by the median. Similarly, a test was performed to identify outliers by standardizing the data, considering in the analysis standardized absolute values below 3.3. In addition, a statistical validation of the dimensions was performed by calculating the Cronbach's alpha index (IAC) to determine the internal consistency of the items; a dimension is considered reliable when the IAC is greater than or equal to 0.70 [44-50].

In this validation process we also used the average variance extracted (AVE), which assesses the discriminant and convergent validity between items, and we analyzed the combined cross-factorial loadings to assess the discriminant validity of each dimension. At this stage, the acceptable cut-off point for the factor loadings of the items was set at 0.50, also evaluating the significance of the P-value of each item in its dimension. To evaluate the presence of collinearity between latent variables, we considered the variance inflation factor (VIF), using values less than 10 as the cut-off point, or a coefficient of correlation between the dimensions (r-value) of less than 0.90. Because the questionnaire items are on an ordinal scale, we used the Q² ratio as a nonparametric measure of predictive validity; Q² must be greater than zero for the model's prediction to be considered acceptable.

3.4. Structural equation model

Structural equation modeling (SEM) provides a general framework for statistical analysis of the relationships between several variables. Techniques such as factor analysis and multiple regression can be considered specific categories within the application of structural equation models. The measured variables, also called observed variables or indicators, can be observed and measured directly. Latent variables cannot be observed directly and must be inferred from their effects on the observed variables; they are also called constructs, factors (in factor analysis) or unobserved variables. One of the most interesting features of structural equation models is that they allow the estimation of the indirect and total effects one variable can have on another, not only the direct effect as in linear regression. There are three types of effects [51].
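As a concrete illustration of these three kinds of effects, the following sketch enumerates the paths of a small model and decomposes total effects into direct and indirect parts. For concreteness, the coefficients are the rounded betas that the final model of Section 4 reports (Fig. 3 and hypotheses H1-H6); products of rounded numbers only approximate the sums later reported in Tables 6 and 7, and the code itself is ours, not part of WarpPLS.

# Direct, indirect and total effects as sums of products of path coefficients.
# Betas are rounded values from the final model (Fig. 3); small discrepancies
# with Tables 6-7 come from rounding.
arrows = {"CompG":   {"OrgEqT": 0.82, "BenefCo": 0.55, "BenefEc": 0.35},
          "OrgEqT":  {"BenefCo": 0.20},
          "BenefCo": {"BenefEc": 0.54}}

def effects(src, dst, acc=1.0, hops=0):
    """Yield (product of betas, path length) for every directed path src -> dst."""
    for nxt, beta in arrows.get(src, {}).items():
        if nxt == dst:
            yield acc * beta, hops + 1
        else:
            yield from effects(nxt, dst, acc * beta, hops + 1)

for dst in ("OrgEqT", "BenefCo", "BenefEc"):
    paths = list(effects("CompG", dst))
    direct = sum(b for b, h in paths if h == 1)
    indirect = sum(b for b, h in paths if h > 1)
    print(f"CompG -> {dst}: direct={direct:.3f}, indirect={indirect:.3f}, total={direct + indirect:.3f}")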



The proposed model and the hypotheses that relate the dimensions described above are shown in Fig. 1. For their modeling and validation we used the WarpPLS 4.0® software, whose algorithms for calculating the estimators of the relationships between variables are based on partial least squares (PLS). We chose this technique because modeling with PLS requires less demanding assumptions about sample size and data distribution. The WarpPLS 4.0 algorithm uses a resampling method (bootstrapping) to reduce the effects of convergence.

The model in Fig. 1 proposes direct effects of CompG (management commitment) on OrgEqT (organization of teams), BenefEc (economic benefits) and BenefCo (competitive benefits), as well as direct effects of OrgEqT (organization of teams) on BenefEc (economic benefits) and BenefCo (competitive benefits). These relationships are proposed on the basis of experience and of the literature, which says that what any company ultimately pursues is economic benefit, so the relationships between its variables are aimed in that direction. We consider a relationship valid when the segment or line joining two latent variables shows a beta value with a P-value of less than 0.05. To evaluate the fit of the model, the hypotheses must be validated considering the P-value of the estimated value of each intended effect. The hypotheses are:
H1 The management commitment will have a direct and positive impact on the economic benefits of the company.
H2 The management commitment will have a direct and positive effect on the competitive benefits of the company.
H3 The management commitment will have a direct and positive impact on the organization of work teams in the company.
H4 The organization of work teams will have a direct and positive impact on the economic benefits of the company.
H5 The organization of work teams will have a direct and positive effect on the competitive benefits of the company.
H6 The competitive benefits will have a direct and positive effect on the economic benefits of the company.

Figure 1. Proposed model and hypotheses. Source: The authors

4. Results

4.1. Description of the sample

A total of 200 completed questionnaires were collected: 57 through the electronic tool Survey Monkey and 143 on paper. Of the collected surveys, 134 were answered by men and 66 by women. Table 2 presents a descriptive analysis of the sample, by industrial sector and position held by the respondents. We can observe that 36 of the respondents are engineers focused on continuous improvement processes, 53 are technical assistants, 28 are operators with extensive experience in the implementation of Kaizen, 31 are supervisors and 17 are managers, among others. Furthermore, 98 respondents work in the automotive sector, while 42 belong to the mechanical industry.

4.2. Validation of the instrument

The statistical validation of the reliability of the instrument was performed by calculating the Cronbach's alpha index, which measures the consistency and correlation among items, as shown in Table 3. This analysis initially considered a total of 10 items related to the study variables, management commitment and organization of teams. Calculating Cronbach's alpha gives a score of 0.729, and we found that we could increase the reliability by eliminating one item, number 8, "Goals are set in improvement programs", leaving a total of 9 items and a Cronbach's alpha of 0.882. It is also observed in Table 3 that all values are greater than 0.7, indicating that the instrument has good consistency [52].

In Table 4 we can see the estimation of the parameters for the validation. The average variance extracted (AVE) values are greater than 0.5, which indicates that the questionnaire has convergent and discriminant validity. Considering the R² of the latent variables, we found acceptable values. When evaluating the VIF index, we observed values below 3.3, indicating independence between the latent variables.

Table 2. Position held / Industrial sector
Sector | Operator | Admin. | Engineer | Manager | Supervisor | Technician | In charge | Total
Textile | 1 | 0 | 0 | 2 | 1 | 1 | 0 | 5
Automotive | 23 | 3 | 22 | 7 | 14 | 21 | 8 | 98
Power | 0 | 1 | 0 | 0 | 2 | 4 | 1 | 8
Plastics | 1 | 0 | 0 | 0 | 0 | 2 | 0 | 3
Mechanics | 1 | 2 | 6 | 0 | 8 | 20 | 5 | 42
Others | 2 | 6 | 8 | 8 | 6 | 5 | 9 | 44
Total | 28 | 12 | 36 | 17 | 31 | 53 | 23 | 200
Source: The authors


Table 3. Iterations to achieve the Cronbach's alpha
Item | Iter 1 | Iter 2
1. Management plans the acquisition of the resources required for improvement programs (financial resources, physical spaces, time). | .690 | .880
2. Policies, objectives and structure of Kaizen events are established. | .705 | .863
3. The opinions of the customers of the company are taken into account to make changes at work. | .689 | .868
4. A culture of continuous improvement is developed. | .691 | .864
5. A structure is developed to determine faults. | .688 | .862
6. Groups are organized to propose suggestions for improvement of products, processes or to solve problems: quality circles, suggestion programs in groups, etc. | .691 | .864
7. Commitment and motivation in the team are generated. | .704 | .877
8. Goals are set in improvement programs. | .882 | —
9. Support teams for running Kaizen are organized. | .682 | .866
10. Policies, objectives and structure of Kaizen events are established. | .696 | .874
Number of elements | 10 | 9
Cronbach's alpha | 0.729 | 0.882
Source: The authors

Table 4. Parameter estimates for the validation
 | CompG | OrgEqT | BenefEc | BenfCo
R-squared coefficients | | 0.668 | 0.68 | 0.512
Adjusted R-squared coefficients | | 0.666 | 0.675 | 0.506
Composite reliability coefficients | 0.871 | 0.835 | 0.939 | 0.927
Cronbach's alpha coefficients | 0.813 | 0.736 | 0.928 | 0.915
Average variances extracted | 0.576 | 0.56 | 0.532 | 0.477
Full collinearity VIFs | 3.645 | 2.909 | 3 | 2.907
Q-squared coefficients | | 0.667 | 0.686 | 0.517
Source: The authors

Regarding Q², we observed values greater than zero, indicating a good predictive model. Table 5 shows the combined and crossed factorial loadings of the items, used to evaluate the saturation of each dimension, considering that items must load above 0.50 on their own variable and lower on the crossed factorial loadings (the other variables). Note that the P-value is significant (<0.05) for all items, confirming the convergent validity of the questionnaire [53].

Table 5. Combined loadings and cross-loadings
Item | CompGer | OrgEqT | BenefEc | BenefCo | P value
CompG1 | 0.712 | 0.428 | 0.196 | -0.38 | <0.001
CompG2 | 0.639 | -0.525 | 0.124 | 0.024 | <0.001
CompG3 | 0.845 | -0.099 | 0.077 | 0.068 | <0.001
CompG4 | 0.767 | -0.027 | -0.156 | 0.151 | <0.001
CompG5 | 0.814 | 0.167 | -0.201 | 0.1 | <0.001
OrgEqT1 | 0.674 | 0.739 | 0.008 | -0.042 | <0.001
OrgEqT2 | -0.162 | 0.738 | -0.402 | 0.249 | <0.001
OrgEqT4 | -0.241 | 0.828 | -0.01 | 0.114 | <0.001
OrgEqT5 | -0.262 | 0.682 | 0.438 | -0.362 | <0.001
BenefEc1 | -0.199 | 0.117 | 0.702 | 0.094 | <0.001
BenefEc2 | -0.474 | 0.319 | 0.709 | -0.156 | <0.001
BenefEc3 | -0.24 | 0.228 | 0.704 | -0.423 | <0.001
BenefEc4 | -0.147 | 0.208 | 0.705 | 0.244 | <0.001
BenefEc5 | -0.25 | 0.406 | 0.664 | -0.044 | <0.001
BenefEc6 | 0.218 | -0.01 | 0.766 | -0.095 | <0.001
BenefEc7 | 0.11 | -0.074 | 0.788 | -0.032 | <0.001
BenefEc8 | 0.164 | 0.118 | 0.805 | -0.22 | <0.001
BenefEc9 | 0.121 | -0.01 | 0.724 | -0.049 | <0.001
BenefEc10 | 0.097 | -0.277 | 0.795 | 0.07 | <0.001
BenefEc11 | 0.133 | -0.425 | 0.785 | 0.18 | <0.001
BenefEc12 | 0.241 | -0.206 | 0.814 | -0.172 | <0.001
BenefEc13 | 0.166 | -0.313 | 0.744 | 0.191 | <0.001
BenefEc14 | -0.302 | 0.19 | 0.776 | 0.16 | <0.001
BenefCo1 | -0.155 | -0.229 | -0.002 | 0.579 | <0.001
BenefCo2 | -0.256 | 0.314 | 0.27 | 0.604 | <0.001
BenefCo3 | -0.148 | 0.2 | 0.347 | 0.738 | <0.001
BenefCo4 | -0.204 | 0.019 | 0.121 | 0.575 | <0.001
BenefCo5 | 0.089 | 0.136 | 0.11 | 0.716 | <0.001
BenefCo6 | 0.002 | 0.093 | 0.093 | 0.704 | <0.001
BenefCo7 | 0.21 | -0.116 | 0.102 | 0.754 | <0.001
BenefCo8 | 0.03 | -0.099 | -0.096 | 0.721 | <0.001
BenefCo9 | -0.018 | 0.207 | 0.018 | 0.711 | <0.001
BenefCo10 | 0.243 | -0.047 | -0.204 | 0.726 | <0.001
BenefCo11 | 0.038 | -0.22 | -0.15 | 0.784 | <0.001
BenefCo12 | -0.185 | -0.272 | -0.064 | 0.672 | <0.001
BenefCo13 | 0.321 | -0.379 | -0.192 | 0.702 | <0.001
BenefCo14 | -0.107 | 0.449 | -0.323 | 0.644 | <0.001
Source: The authors
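For reference, the following is a small sketch of the computation behind Table 3: Cronbach's alpha and the "alpha if item deleted" screening used to drop item 8. The Likert scores below are simulated, not the survey data, and the function is ours rather than the SPSS routine.

# Cronbach's alpha and alpha-if-item-deleted, as used to screen item 8 in
# Table 3 (the Likert scores are simulated for illustration).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(0.0, 1.0, (200, 1))                      # shared trait across items
scores = np.clip(np.rint(3 + latent + rng.normal(0.0, 1.0, (200, 10))), 1, 5)

print(f"overall alpha = {cronbach_alpha(scores):.3f}")
for j in range(scores.shape[1]):
    reduced = np.delete(scores, j, axis=1)                   # drop item j
    print(f"alpha if item {j + 1} deleted = {cronbach_alpha(reduced):.3f}")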

4.3. Structural equation model

The initially proposed model was assessed, based on the hypotheses displayed in Fig. 2. The test presented an R² value of 0.68 for the economic benefits variable and an R² of 0.51 for the competitive benefits. In this model we observe significant relationships, with P-values of less than 0.05, with one exception: the relationship between OrgEqT (organization of work teams) and BenefEc (economic benefits) presents a P-value greater than 0.05, which generates the need to modify the model as initially proposed.

In the modeling undertaken using WarpPLS 4.0®, three effects were analyzed: i) the direct effects, used to test the hypotheses; ii) the indirect effects; and iii) the total effects.

4.3.1. Direct effects

The direct effects of the model are presented in Fig. 2; for each relationship between dimensions, we present a measure of dependence expressed by β and, in brackets, the P-value of the corresponding hypothesis test.


Figure 2. Initial Model. Source: Authors

Table 6. Sum of indirect effects
           CompG    OrgEqT
BenefEc    0.420    0.105
BenefCo    0.161      -
Source: The authors

Table 7. Sum of total effects
           CompG    OrgEqT   BenefCo
OrgEqT     0.817      -        -
BenefEc    0.732    0.156    0.535
BenefCo    0.708    0.197      -
Source: Authors

4.3.2. Indirect effects

Table 6 shows the sum of indirect effects. It is important to note that all effects between dimensions are significant, since their P-values were less than 0.05. For example, although there is no direct relationship between OrgEqT and BenefEc, there is an indirect effect of 0.105, indicating that when the standard deviation of the first dimension is increased by one unit, the standard deviation of the second dimension increases by 0.105 units. The highest indirect effect is that between CompG and BenefEc, with a value of 0.420.
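For readers retracing these figures: in a path model, an indirect effect is the product of the path coefficients along the connecting route, and a total effect is the sum of the direct and indirect effects. A worked check with the rounded values reported in Tables 6 and 7 and in the hypothesis conclusions (small differences are due to rounding):

    \beta_{\mathrm{OrgEqT}\to\mathrm{BenefCo}} \times \beta_{\mathrm{BenefCo}\to\mathrm{BenefEc}} = 0.197 \times 0.535 \approx 0.105

    \mathrm{total}_{\mathrm{OrgEqT}\to\mathrm{BenefEc}} = \mathrm{direct} + \mathrm{indirect} \approx 0.05 + 0.105 \approx 0.156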

Once the model was developed, we iteratively removed the nonsignificant relationships, i.e., those with P-values above 0.05. We removed the relationship between CompG (management commitment) and BenefEc (economic benefits) because it has a P-value of 0.21 and a beta close to zero. The model without this relationship is presented in Fig. 3. For this new model, we can observe that all relationships are significant, with P-values smaller than 0.05. For example, the relationship linking CompG (management commitment) to BenefCo (competitive benefits) has a β = 0.55 and P < 0.01, which means that when the standard deviation of the first dimension increases by one unit, the second dimension increases its standard deviation by 0.55 units. This also represents its measure of dependence. The same interpretation is applicable to the other relationships. It is important to notice that the highest dependence (according to the value of β) is observed between CompG (management commitment) and OrgEqT (organization of work teams), with β = 0.82.
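The pruning procedure just described reduces to a short loop. The sketch below is generic logic, not WarpPLS functionality; fit_pls stands for a hypothetical routine that re-estimates the model and returns each path's β and P-value:

    def prune_nonsignificant(paths, fit_pls, alpha=0.05):
        """Iteratively drop the least significant path until all P-values < alpha."""
        while True:
            results = fit_pls(paths)              # hypothetical: {path: (beta, p)}
            worst = max(results, key=lambda path: results[path][1])
            if results[worst][1] < alpha:
                return paths, results             # every remaining path is significant
            paths = [p for p in paths if p != worst]

Applied to the initial model, the only path removed would be CompG -> BenefEc, whose P-value of 0.21 exceeds 0.05.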

4.3.3. Total effects

Table 7 presents the sum of the direct effects and the sum of the indirect effects that comprise the total effects. The total effect between competitive benefits and economic benefits is 0.535, i.e., when the standard deviation of the first variable increases by one unit, the standard deviation of the second one goes up by 0.535 units.

5. Conclusions

5.1. Conclusions on the hypotheses proposed

Based on the hypotheses we proposed, presented in Fig. 1, we have the following conclusions:
H1. There is enough statistical evidence to say that management commitment has a positive direct effect on the economic benefits of the company, because when the first latent variable increases by one standard deviation, the standard deviation of the second variable increases by 0.35 units.
H2. There is enough statistical evidence to say that management commitment has a direct and positive effect on the competitive benefits of the company, because when the first latent variable increases by one standard deviation, the second one increases by 0.55 units.
H3. There is enough statistical evidence to say that management commitment has a positive direct effect on the organization of work teams, because when the first latent variable increases by one standard deviation, the second one increases by 0.82 units.

Figure 3. Final Model. Source: Authors


H4. There is enough statistical evidence to say that the organization of work teams does not have a significant direct positive impact on the economic benefits of the company, because when the first latent variable increases by one standard deviation, the second one increases by only 0.05 units. However, there is evidence that the organization of work teams has a positive indirect effect of 0.105 on the economic benefits of the company.
H5. There is enough statistical evidence to say that the organization of work teams has a positive direct effect on the competitive benefits of the company, because when the first latent variable increases by one standard deviation, the second one increases by 0.20 units.
H6. There is enough statistical evidence to say that competitive benefits have a direct and positive impact on the economic benefits of the company, because when the first latent variable increases by one standard deviation, the second one increases by 0.54 units.

5.2. Industrial implications of the results

The results contribute to the identification of the critical success factors and benefits of Kaizen and of the features that are most strongly related to the sustainability and results of Kaizen. To the authors' knowledge, this is the first study that tests the causal relationships between sustainability performance and the critical success factors of Kaizen. Such understanding will increase the likelihood that the results of Kaizen implementations are sustained, eliminating wasted effort and supporting improvement in the organization. All this leads to an economic contribution, given by the reduction in failures of equipment and tools, reduced machinery setup times, increased levels of customer and consumer satisfaction, increased levels of inventory turnover, a significant drop in levels of faults and errors, lower levels of waste and an overall better financial balance.

5.3. Limitations of the study

This work focused on assessing the impacts of management commitment and the organization of work teams on company benefits in the regions of Tabasco, Sinaloa, Juárez and Chihuahua in Mexico, and Camagüey, Cuba. However, extending the study to other regions will enable researchers to find comparative models supporting the validity of the analysis reported here.

5.4. Future research

We recommend that the same questionnaire be applied in other regions, both inside and outside of Mexico, in order to extrapolate and find better answers with respect to the impact of the critical success factors of Kaizen on company benefits. It would also be advisable to consider other types of businesses and industries in order to analyze these effects and compare the results from another perspective.

References

[1] Kumar, S. and Harms, R., Improving business processes for increased operational efficiency: A case study. Journal of Manufacturing Technology Management, 15 (7), pp. 662-674, 2004. DOI: 10.1108/17410380410555907
[2] Ramadani, V. and Gerguri, S., Theoretical framework of innovation and competitiveness and innovation program in Macedonia. European Journal of Social Sciences, 23 (2), pp. 268-276, 2011.
[3] Medina, C. and Espinosa, M., Innovation in modern organizations. Management and Strategy, 5, pp. 85-102, 1994.
[4] Cooney, R. and Sohal, A., Teamwork and total quality management: A durable partnership. Total Quality Management & Business Excellence, 15 (8), pp. 1131-1142, 2004. DOI: 10.1080/1478336042000255442
[5] Gondhalekar, S., Babu, S. and Godrej, N., Towards using Kaizen process dynamics: A case study. International Journal of Quality & Reliability Management, 12 (9), pp. 192-209, 1995. DOI: 10.1108/02656719510101286
[6] Bateman, N., Sustainability: The elusive element of process improvement. International Journal of Operations & Production Management, 25 (3), pp. 261-276, 2005. DOI: 10.1108/01443570510581862
[7] Van de Ven, A., Central problems in the management of innovation. Management Science, 32 (5), pp. 590-607, 1996. DOI: 10.1287/mnsc.32.5.590
[8] Burch, M.K., Lean Kaizen events and longevity determinants of sustainable improvement, Doctoral Dissertation, University of Massachusetts, Amherst, USA, 2008.
[9] Sheridan, J.H., A new attitude. Industry Week, 249 (10), pp. 16, 2000.
[10] Daniel, D., Management information crisis. Harvard Business Review, 39, pp. 110-121, 1961.
[11] Terziovski, M. and Sohal, A.S., The adoption of continuous improvement and innovation strategies in Australian manufacturing firms. Technovation, 20 (10), pp. 539-550, 2000. DOI: 10.1016/S0166-4972(99)00173-X
[12] Juran, J., Juran on leadership for quality. A manual for managers. Madrid, Spain: Díaz Santos, 1990.
[13] Suarez, B. and Miguel, J., In search of an area of sustainability: An empirical study of the implementation of the Continuous Improvement Process in Spanish Town Councils. INNOVAR Journal of Administrative and Social Sciences, 19 (35), pp. 47-64, 2009.
[14] Barraza, M.F., Smith, T. and Dahlgaard-Park, S.M., Lean Kaizen public service: An empirical approach in Spanish local governments. The TQM Journal, 21 (2), pp. 143-167, 2009.
[15] Lefcovich, M., Advantages and benefits of Kaizen. [Online], 2007. [date of reference: September 10th of 2011]. Available at: http://www.tuobra.unam.mx/published/040816180352.html
[16] Vonk, J., Business process improvement through Kaizen permits. Spectrum, 78 (2), pp. 33-34, 2005.
[17] Watson, L., Striving for continuous improvement with fewer resources. Kaizen, Marshall Star, pp. 1-5, 2002.
[18] Wheatley, B., Innovation, ISO registration. CMA, 72 (5), pp. 23-23, 1998.
[19] Tenenhaus, M., Component-based structural equation modelling. Total Quality Management & Business Excellence, 19 (7-8), pp. 871-886, 2008. DOI: 10.1080/14783360802159543
[20] Woodman, R., Sawyer, J. and Griffin, R., Toward a theory of organizational creativity. Academy of Management Review, 18 (2), pp. 293-321, 1993. DOI: 10.2307/258761, 10.5465/AMR.1993.3997517
[21] Bhatnagar, R. and Sohal, A.S., Supply chain competitiveness: Measuring the impact of location factors, uncertainty and manufacturing practices. Technovation, 25 (5), pp. 443-456, 2005. DOI: 10.1016/j.technovation.2003.09.012, 10.1016/S0166-4972(03)00172-X
[22] Romero, R., Noriega, S., Escobar, C. and Avila, D., Critical success factors: A competitiveness strategy. CULCYT, 6 (31), pp. 5-14, 2009.


[23] Nordgaard, A., Ansell, R., Jaeger, L. and Drotz, W., Ordinal scales of conclusions for the value of evidence. Science and Justice, 50 (1), pp. 31-31, 2010. DOI: 10.1016/j.scijus.2009.11.025
[24] Hair, J.F., Ringle, C.M. and Sarstedt, M., PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19 (2), pp. 139-151, 2011. DOI: 10.2753/MTP1069-6679190202
[25] Rigdon, E.E., Rethinking partial least squares path modeling: In praise of simpler methods. Long Range Planning, 45 (5-6), pp. 341-358, 2011.
[26] Yuan, K. and Bentler, P., New developments and techniques in structural equation modeling. New Jersey, USA: Lawrence Erlbaum Associates, Publishers, 2001.
[27] Dale, B., Boaden, R., Willcox, M. and McQuater, R., Sustaining total quality management: What are the key issues? TQM Magazine, 9 (2), pp. 372-380, 1997. DOI: 10.1108/09544789710178668
[28] Suarez, B., Michael, J. and Castillo, I., The implementation of Kaizen in Mexican organizations. An empirical study. GCG, 5 (4), pp. 60-74, 2011.
[29] Han, S.B., The effects of ISO 9000 registration efforts on total quality management practices and business performance. PhD. Thesis, University of Rhode Island, Rhode Island, USA, 151 P., 2000.
[30] Swamidass, P.M., A comparison of the plant location strategies of foreign and domestic manufacturers in the U.S. Journal of International Business Studies, 21 (2), pp. 310-316, 1998.
[31] Martin, K., Kaizen-White collar: Events rapid improvement office, services and technical processes. IIE Annual Conference and Exhibition, Nashville, TN, May 22, 2007.
[32] Ortiz, C., All out Kaizen: A continuous improvement plan delivers change to the production floor and dollars to the bottom line. Industrial Engineer, 38 (4), pp. 30-30, 2006.
[33] Bicheno, J., Kaizen and Kaikaku. Manufacturing operations and supply chain management: The LEAN approach, in: Taylor, D. and Brunt, D., eds., London, UK: Thomson Learning, pp. 175-184, 2001.
[34] Terziovski, M. and Sohal, A.S., The adoption of continuous improvement and innovation strategies in Australian manufacturing firms. Technovation, 20 (10), pp. 539-550, 2000. DOI: 10.1016/S0166-4972(99)00173-X
[35] Schumacker, R.E. and Lomax, R.G., A beginner's guide to structural equation modeling, 3rd ed. New York, USA: Taylor and Francis Group, LLC, 2010, 2 P.
[36] Khurana, A. and Talbot, B., The internationalization process model through the lens of the global color picture tube industry. Journal of Operations Management, 16 (2-3), pp. 215-239, 1998. DOI: 10.1016/S0272-6963(97)00039-9
[37] Lionnet, P., Innovation: The process. ESA Training Workshop, Lisbon, 2003.
[38] Olaya, A., The economics of innovation and technological change: A theoretical approach from the Schumpeterian thought. Strategic Sciences Journal, 16 (20), pp. 237-246, 2008.
[39] Sheridan, J.H., Kaizen Blitz. Industry Week, 246 (16), pp. 18-27, 1997.
[40] Sabatini, J., Turning Japanese. Automotive Manufacturing & Production, 112 (10), pp. 66-69, 2000.
[41] Rigdon, E.E., Rethinking partial least squares path modeling: In praise of simpler methods. Long Range Planning, 45 (5-6), pp. 341-358, 2011.
[42] Correa-E., A. and Gómez-M., R.A., Information technologies in the supply chain. DYNA, 76 (157), pp. 37-48, 2009.
[43] Creswell, J., America's elite factories. Fortune, 144 (4), pp. 206A-206L, 2001.
[44] Tabachnick, B.G. and Fidell, L.S., Using multivariate statistics, 4th ed. Boston, USA: Allyn & Bacon, 2001, P. 77.
[45] Nemoto, M., Total quality control for management. Strategies and techniques from Toyota and Toyoda Gosei. New Jersey, USA: Prentice Hall, Inc., 1987.
[46] Brown, M., Guide to data analysis. Madrid, Spain: McGraw-Hill, 2002.
[47] Garza, E., Kaizen continuous improvement. Redalyc.Org, 8 (3), pp. 330-333, 2005.
[48] Martinez, A.C.L., Self-report reading strategy used among Spanish university students of English. RESLA, 21, pp. 161-179, 2008.

[49] Hernández, R., Fernandez, C. and Baptista, P., Research methodology. Mexico: McGraw-Hill Publishing, 1991.
[50] Manos, A., Lean Kaizen: A simplified approach to process improvement. ASQ Quality Press, pp. 47-49, 2006.
[51] Avelar-Sosa, L., García-Alcaraz, J.L., Cedillo-Campos, M.G. and Adarme-Jaimes, W., Effects of regional infrastructure and offered services in the supply chains performance: Case Ciudad Juarez. DYNA, 81 (186), pp. 208-217, 2014. DOI: 10.15446/dyna.v81n186.39958
[52] Cronbach, L.J., Coefficient alpha and the internal structure of tests. Psychometrika, 16 (3), pp. 297-334, 1951. DOI: 10.1007/BF02310555
[53] García-Rivera, J.L., Iniesta, D.G. and Alvarado, A., Critical success factors for Kaizen implementation in manufacturing industries in Mexico. The International Journal of Advanced Manufacturing Technology, 68 (1-4), pp. 537-545, 2013. DOI: 10.1007/s00170-013-4750-2

M. Oropesa-Vento, is a student of the Doctoral program in Engineering at Universidad Autónoma de Ciudad Juárez, México. She is a professor of engineering in the area of quality systems at Universidad Autónoma Indígena of Mexico. She has worked on projects in the area of Lean and Continuous Improvement (Kaizen) in manufacturing companies and has published several works in scientific journals and at international conferences. She is currently a researcher recognized by the National Council for Science and Technology of Mexico (CONACYT). She received her MBA from the University of Camagüey (Cuba). Her main research areas are related to total quality management, continuous improvement, decision-making and modeling applied to production processes.

J.L. García-Alcaraz, is a Research Professor in the Department of Industrial Engineering at the Universidad Autónoma de Ciudad Juárez in México. He is a founding member of the Mexican Society of Operations Research and an active member of the Mexican Academy of Engineering. Dr. Garcia is a researcher recognized by the National Council of Science and Technology of Mexico (CONACYT) through the National System of Researchers at level 1. He received an M.S. in Industrial Engineering from the Instituto Tecnológico de Colima (Mexico), a PhD. in Industrial Engineering from the Instituto Tecnológico de Ciudad Juárez (Mexico) and completed a Post-Doctorate at Universidad de La Rioja (Spain). His main research areas are related to multi-criteria decision making and modeling applied to production processes. He is a co-author of over 100 articles published in journals, conferences and congresses.

L. Rivera, is an Assistant Professor at the School of Industrial Engineering, Universidad del Valle, in Cali, Colombia. He holds a B.S. in Industrial Engineering from Universidad del Valle, an M.S.I.E. from the Georgia Institute of Technology and a PhD. in Industrial and Systems Engineering from Virginia Polytechnic Institute and State University. His research interests include Logistics, Operations Research, Lean Manufacturing and the economic impact of operational decisions.

D.F. Manotas, is an Associate Professor at the School of Industrial Engineering, Universidad del Valle, in Cali, Colombia. He holds a B.S. in Industrial Engineering from Universidad del Valle, an M.S. in Finance from Universidad de Chile and a PhD. in Engineering from Universidad del Valle. His research interests include the financial impact of operational and logistics decisions, multi-criteria decision methodologies, and operational and financial risk in supply chains.



A framework to evaluate over-costs in natural resources logistics chains

Gabriel Pérez-Salas a, Rosa G. González-Ramírez b & Miguel Gastón Cedillo-Campos c

a División de Recursos Naturales e Infraestructura, CEPAL - Naciones Unidas, Santiago de Chile, Chile. Gabriel.PEREZ@cepal.org
b Escuela de Ingeniería Industrial, Pontificia Universidad Católica de Valparaíso, Chile. rosa.gonzalez@ucv.cl
c Instituto Mexicano de Transporte, Querétaro, México. gaston.cedillo@mexico-logistico.org

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract
Foreign trade barriers and duties have been significantly reduced by multilateral agreements and integration mechanisms among nations. Logistics costs represent a significant proportion of the final price of goods, especially of commodities. For this reason, minimizing logistics costs is an important challenge for nations in order to enhance the competitiveness of foreign trade and, in particular, to take advantage of natural resource rents. In this article, we propose a holistic framework for estimating logistics costs, focused on determining the over costs that result from inefficient procedures and from a lack of adequate public policies and regulations of the public entities. These issues have not been considered in the literature, and this article fills this gap. As a case study, we present the application of the framework to estimate over costs in some natural resources logistics chains of Bolivia. A discussion of the results found is presented, together with recommendations for further research.
Keywords: Natural Resources; Logistics chains; International trade

Un marco de referencia para evaluar los extra-costos en cadenas logísticas de recursos naturales Resumen Las barreras y aranceles al comercio exterior han disminuido significativamente gracias a los acuerdos multilaterales y los mecanismos de integración entre las naciones. Los costos logísticos representan una proporción significativa del precio final de los artículos, especialmente de los commodities. Por esta razón, minimizar los costos logísticos es un desafío importante para las naciones para fomentar la competitividad del comercio exterior y en particular para aprovechar la renta de los recursos naturales. En este artículo, proponemos un marco de referencia integral para la medición de costos logísticos, enfocado en determinar los extra-costos que resultan por procedimientos ineficientes y la falta de políticas públicas y regulaciones de las entidades públicas. Estos elementos no han sido considerados en la literatura, y este artículo aborda esta brecha. Como caso de estudio, presentamos la aplicación del marco de referencia para estimar los extra-costos en algunas cadenas logísticas de recursos naturales en Bolivia. Se presenta una discusión de los resultados encontrados, así como recomendaciones para trabajo futuro. Palabras clave: Recursos Naturales; Cadenas logísticas; Comercio exterior

1. Introduction

Free trade agreements and the current geographical fragmentation of production worldwide are positioning logistics costs as a key element of the total cost of a product, in particular commodities. Hence, logistics costs present an important challenge for companies and nations in order to

compete in the current markets [1]. Foreign trade competitiveness relies on the efficiency and effectiveness of the export and import logistics chains, where a significant number of private and public parties interact. Efficiency at seaports, airports and borders (the nodes of the global network) is critical. International experience shows that an adequate coordination of the different parties involved

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 85-92. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51158



in the logistics chain generates important economic, social and environmental benefits, reducing operational costs and enhancing national connectivity and regional integration.

Latin American countries face important challenges in terms of logistics performance with respect to OECD countries and other emerging countries. To reduce this gap, it is necessary to promote public policies that enhance infrastructure development, trade facilitation (including customs procedures), adequate market regulation, supply chain security and the efficiency of logistics services [2].

In the literature, it is possible to find several studies that show the importance of transport costs with respect to trade costs, as well as the importance of a good level of infrastructure. Avelar-Sosa et al. [3] present a study whose results show that a good degree of regional infrastructure has a positive impact on logistics services and, as a consequence, on costs. In [4,5], an analysis of the determinants of transport costs and their impact on foreign trade in Spain is presented. In [6-8], the impact of transport costs on foreign trade is analyzed for intra-regional and containerized trade in Latin America. The impact of port efficiency as a determinant of maritime transport costs is analyzed in [9], and the importance of connectivity measures for maritime transport is discussed in [10].

Integral and sustainable logistics policies have been promoted by the Unit of Infrastructure Services at ECLAC (Economic Commission for Latin America and the Caribbean), with the aim of integrating the different stakeholders that participate in national and international logistics chains [11,12]. This article presents a holistic framework for measuring logistics costs in natural resources logistics chains, incorporating ECLAC's integral vision recommendations. This incorporates the role of the State, taking into account those public entities that directly participate in different links of the global logistics chains (e.g. Customs, and Agricultural and Livestock services, among others).

In addition, the proposed framework considers not only the costs incurred under normal operations, but also the over costs that are generated as a result of the lack of coordination and inefficiencies in the export and import processes, and the lack of adequate State public policies and regulations. As will be further described, the methodologies that have been proposed in the literature do not incorporate this vision, which is the main contribution of this work.

The manuscript is structured as follows. Section 2 presents a brief literature review where the main contributions on the subject are described. Section 3 presents the methodological proposal. Section 4 presents the case study of Bolivia, and Section 5 presents some conclusions and recommendations for further research.

2. Literature review

Logistics costs are defined as those resources required to perform the activities related to moving, storing and distributing goods from their origin to the point of consumption. Hence, logistics costs involve a spatial dimension, associated with transport and warehouse activities, as well as a temporal dimension, associated with the processes involved, where inventory costs, dead times for loading and unloading operations, stock-out costs, and the variability of lead times are the main variables that make up total logistics costs.

There are two main approaches in the literature to measure logistics costs: the macroeconomic and the microeconomic perspectives, whose contributions to the literature are presented in sections 2.1 and 2.2, respectively. Section 2.3 presents some case studies that employ a combination of macro and micro approaches. A more detailed review can be found in [13].

2.1. Macro methodologies for logistics cost estimation

Macroeconomic methodologies aim to determine the logistics performance of a country, based on the global estimation of its logistics systems and their relative importance with respect to the productivity of the country and its competitiveness. Measuring the efficiency of a nation's logistics system is important for the private and public sectors. For the public sector, it supports the design of public policies and investment decisions. For the private sector, it provides information that may be an input for strategic decision making, such as investments, among others.

Macroeconomic methodologies are mainly based on descriptive tools and econometric methods, where the variables do not necessarily comprise total logistics costs but represent estimations of the country's logistics costs. For this, primary and secondary information sources are employed.

Particularly for the Latin American case, in [14] a study to determine logistics costs is presented, and evidence of the impacts on the competitiveness, development and poverty indexes is provided for the economies of the region. The main trade barriers that were identified are infrastructure, logistics and trade facilitation. A key finding with respect to infrastructure barriers is that simultaneous improvements in road and rail infrastructure imply bigger impacts than investments in road infrastructure alone. This highlighted the need for integral public policies for transport infrastructure that enhance sustainability and efficiency, promoting complementarity between transport modes (co-modality).

These types of estimations turned out to be critical to the establishment of public policies or projects related to improving the logistics performance of a country. In global terms, macroeconomic methodologies can be classified according to four main approaches:
i. The first approach refers to the measurement of global indicators such as the GDP or the CIF value of the cargo. This type of indicator is frequently used in the annual reports related to logistics in the United States.
ii. The second approach is related to the effect of time and its implications for foreign trade, specifically in terms of the reliability of the supply chain, measured as the time variance of the in-transit inventories. This type of methodology has been used by [15-17], which consider transport, warehousing, management and information systems costs. In [18], an analysis of the relationship between import and export times, logistics services and foreign trade is presented. The incidence of the time



factor and the probability of a foreign trade transaction between two nations occurring is also analyzed.
iii. The third approach relies on the causes and effects of logistics costs, where the main estimator of logistics costs is the inventory level. In [19-21], these topics are addressed, with a special focus on the time variable, considering the impact of lead time variability and its reliability. This approach is very commonly employed by the private sector. Tongzon [22] proposes a set of determinants for logistics competitiveness, among which operational effectiveness, adaptability to customer demand, and cooperation and the fostering of alliances are highlighted.
iv. Finally, the last approach corresponds to the indexes computed by the World Bank, such as the Logistics Performance Index (LPI) and Doing Business, which are useful tools for benchmarking analysis, given that they are periodic evaluations including a significant sample of countries.

One of the main advantages that the macroeconomic method offers is the availability of indicators that support decision making in the preliminary stages of public policy planning. It also supports the evaluation of the different stakeholders and institutions, both public and private, that participate in the logistics chain. Furthermore, these types of indicators also enhance the implementation of the measures and policies required to improve the competitiveness of a nation.

The macroeconomic approaches previously mentioned deal with aggregated data and, in general, the variable time is not a focus. The first approach does not consider this variable and the second approach only partially analyzes it. The third approach estimates logistics costs based on the levels of inventory, without considering other costs such as transport and management, or the interaction with variables associated with process facilitation and the performance of the public organisms involved in the logistics chain. In the case of global indicators such as the LPI, these are based on perceptions of logistics operations by the people working in a country and do not measure the performance and total cost components of logistics activities. Furthermore, the logistics performance index is estimated based on the average values of Likert-type variables, and hence special statistical treatment of the data is required [23].

An important gap of this type of methodology is that the results are not product specific and they deal with aggregated data, which can lead to important distortions with respect to the real operation costs and times, due to the lack of differentiation between products and their own requirements (bulk products, perishable items, dangerous cargo, etc.). For this reason, additional information related to logistics costs and their deployment is needed, and this is provided by the microeconomic approaches.

2.2. Micro methodologies for logistics cost estimation

Microeconomic approaches support decision making for enterprises based on logistics costs as a criterion. According to [24], logistics costs are a compound of the costs involved in four activities: transport, inventory management, warehousing and order processing. Some other authors present different classifications, considering warehousing and inventory management as a single component [25], whereas others incorporate other administrative costs. None of these approaches explicitly considers the costs involved in the inspections and activities of public organisms (i.e. Customs, Agricultural and Livestock, etc.).

We can find different approaches in the literature regarding those decisions in which logistics cost is the main objective to optimize: inventory policies, transport mode selection (multimodal), location policies and supply chain policies, among others. Three main methodologies are identified and further described:
i. The first methodology relies on the estimation of the private logistics costs incurred, with a special focus on lead times and the variability in inventory levels. It is important to point out that the variable "time" does not only depend on the transport mode configuration of the supply chain, but also on the characteristics of the demand for the product. In this regard, Zinn et al. [26] study the effect of demand patterns over time, and how these affect safety inventories and, hence, inventory costs. On the other hand, Haartveit et al. [27] propose a method for computing logistics costs based on a time study, and Vernimmen et al. [28] address the effects of variability in the service times of maritime shipping companies with respect to the inland logistics chain. Everaert [29] analyzes the particular case of logistics costs for a wholesale distributor, where costs are handled as a function of production amounts, using ABC costing to determine the resource consumption of each activity in the logistics chain that the wholesaler administrates. Haartveit et al. [30] emphasize the variable time to measure the logistics costs of a multi-item chain of a particular enterprise. Lamban et al. [31] propose a logistic index to determine the storage costs of a product for a supply chain.
ii. A second approach is related to the relationship between logistics costs and decision making. Under this perspective, logistics costs determine inventory policies, transport mode selection, location policies and supply chain policies. For inventory policies, there exist three types of decisions that are taken such that logistics costs are minimized: the order size, the number of orders to place and the frequency of the orders. Such decisions are modeled either theoretically or for specific case studies (the classic EOQ model sketched at the end of this section is one textbook instance).
iii. The last approach is related to more integral supply chain management policies. Roorda et al. [32] propose an integrated model where logistics costs are combined with other variables such as the outsourcing of logistics services, the development of other industrial sectors, and the impact on new logistics chains in the market.

Microeconomic methods provide more details regarding logistics costs, whether at the level of a product, a group of products, a firm, an industry or a cluster. These methods provide a more integral vision of logistics costs, incorporating the costs associated with the time dimension (safety stock and stock-outs) as well as the traditional transport, warehousing and management costs. Conceptually, the literature focuses on the private vision, which is the traditional perspective for logistics costs



estimation. It refers to those costs incurred by a private company to transport goods from the point of origin to the point of consumption, without considering those costs in which the public sector is involved. Such costs include those of process facilitation, inspections and regulation. The methodology proposed herein fills this gap by including these costs in the analysis, with a holistic or integral vision of the global logistics chains.
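As one textbook instance of the cost-minimizing inventory decisions mentioned under approach ii. above (an illustration added here, not a formula proposed by the authors), the economic order quantity model chooses the order size that minimizes the sum of ordering and holding costs:

    Q^{*} = \sqrt{\frac{2DS}{H}}

where D is the demand per period, S the fixed cost per order and H the holding cost per unit per period; the number of orders and their frequency then follow as D / Q^{*} per period.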

2.3. Combined macro and micro case studies

In order to illustrate two practical applications, UNESCAP [33] presents two case studies: the United States and Korea. These two countries were chosen as they produce periodic national logistics costs reports. In the United States, the company Cass Logistic Systems Inc. publishes national logistics costs statistics every year, considering three key components: transportation, inventory carrying and administrative costs. The Korea Transport Institute, on the other hand, developed a methodology for estimating logistics costs in order to evaluate the efficiency of the national logistics system. Several logistics cost factors were considered, such as transportation costs, inventory holding costs, packing costs, stevedoring costs, information costs and administration costs. In both cases, public sector activities are not considered directly, and only the costs generated by the operations of the private stakeholders are considered, without including the externalities and over costs caused by the public sector and the inefficiencies that occur at different echelons of the global logistics chain. This is the gap that the present document addresses.

Figure 1. Integral vision of logistics costs (diagram): traditional logistics costs (export/import, efficiency in service provision, track & trace, carbon footprint, operational efficiency, externalities reduction) plus externalities (market inefficiencies: infrastructure monopoly and economic rent appropriation; transport externalities) and the role of the public sector (regulation and generation of incentives for the efficient provision of infrastructure; mechanisms to reduce externalities). Source: The authors.

3. Framework for logistics costs estimation

As pointed out in the previous section, the traditional approaches for logistics costs assessment found in the literature present important gaps. From a macroeconomic perspective, particular information on the logistics chains is not considered in order to obtain the global performance indicators that are employed for national benchmarking purposes, while from a microeconomic perspective, causal analysis or a more detailed computation of logistics costs is not prioritized. Furthermore, the costs related to public entities and state institutions are not considered.

The following subsections present the integral vision of logistics costs with respect to the traditional vision. Then, the main determinants of logistics costs are analyzed and, finally, the proposed reference model is described.

3.1. Traditional and integral vision of logistics costs

Traditional approaches for logistics costs assessment do not consider those costs related to inefficiencies brought about by the inadequate provision of infrastructure services by the State (e.g. inspections, trade facilitation), as well as those related to the externalities that are generated for the community and the environment. For this reason, ECLAC promotes an integral framework for the analysis of each of the trade processes, identifying the potential inefficiencies in terms of the time and costs involved, independently of the transport mode and the stage of the supply chain, and considering both public and private stakeholders. Fig. 1 presents a diagram of the integral vision of logistics costs assessment. The diagram considers both traditional logistics costs, which are those incurred by the private company to move goods from one point to another, as well as those externalities and over costs that result from inefficiencies in the provision of logistics services involving the State and public entities, mainly related to transport and infrastructure. Logistics over costs could be incurred, for instance, due to the lack of adequate regulation of monopolies and economic rent appropriation. Furthermore, over costs can be generated due to the lack of efficient service provision at the different echelons of the logistics chain. This can generate additional storage days or fees due to the non-fulfillment of a time window that is required by a certain facility (e.g. a late arrival fee imposed by the port terminal for the reception of cargo). Another example is truck delays and waiting times at port terminals due to the lack of efficient coordination of landside operations. Externalities are those impacts that society has to assume as a result of logistics activities, such as congestion, pollution and road accidents, among others. The aim of identifying and incorporating them when computing logistics costs is to enhance more sustainable logistics. The role of the public sector is crucial to identify and correct those market failures through adequate regulation and facilitation, both in the transport and the logistics sectors. In terms of infrastructure provision, the State should enhance mechanisms that prevent monopolistic markets but also

ensure an adequate economic rent appropriation by the private operators.

3.2. Determinants of total logistics costs

Identifying the relative importance of each component of total logistics costs is crucial. The components of logistics costs are not necessarily the same, nor do they have the same impact, across products. The literature provides a set of factors that may determine logistics costs, such as: infrastructure, human resources competences, technology, legal and regulatory aspects, the facilitation of administrative procedures, as well as those elements related to industrial organization or market structure. Another important factor is related to institutional and regulatory stability and maturity.

Measuring each of the components of logistics costs and establishing causal relationships among the components and determinants is important for estimating logistics costs. For this, the variables that affect total logistics costs can be classified as endogenous and exogenous. Endogenous variables correspond to those logistics activities, such as transport, loading/unloading, management and order processing, warehousing, etc., that are performed by the private operator at the domestic and international stages of the total logistics chain. Given that these types of costs have a financial and a time dimension, those variables associated with time and its variability should be considered, as well as those related to the costs of service and the cost of inventories incurred by the inspections and regulatory activities of Customs and other governmental agencies (i.e. Agricultural and Livestock Services, Health and Aquiculture services, among others), or those procedures related to health and safety that include administrative and order processing costs, costs for transporting the cargo to be inspected, loading and unloading, packing/unpacking and inventory costs. Such costs are commonly not considered in the approaches found in the literature.

Exogenous variables are those that indirectly affect logistics costs or the efficiency of the logistics chain as a whole. This category includes national institutions, infrastructure, human resources involved in public services, the information systems and technology that support public services, legal aspects and the institutionalism of the public sector that can impact any of the endogenous variables, both private and public, as well as those costs related to externalities and those resulting from market failures. According to [34], the exogenous factors that drive logistics costs have not been addressed as extensively as the endogenous factors in the literature. Hence, they use a path analysis to examine six institutional constructs on logistics performance and find that all of them have a direct or indirect impact. The methodology proposed herein aims to incorporate both exogenous and endogenous factors.

3.3. Proposed framework for logistics costs estimation

The framework is structured in three steps that are further described.

3.3.1. Step 1: Definition of the scope of the study and selection of the logistics chains

a) Mapping the most relevant logistics chains. This activity consists in the analysis of the export and import transactions of the country in question in order to determine a set of representative logistics chains to be included in the study. This considers the identification of the logistics chains and the modeling of the main processes and stakeholders involved, as well as the main transport corridors used to transport cargo. The basis of the analysis is the historical data of trade volumes and the coverage of the transport modes most frequently used for each logistics chain. For export transactions, the analysis should consider the logistics costs from the origin of the product (i.e. at the farm) to the point of consumption in the country of destination. For import transactions, the analysis should consider the costs from the moment the product arrives at the point of origin until it reaches the warehouse of the importer.

b) Selection of logistics chains. This consists in the identification and selection of the main logistics chains based on a multicriteria analysis (see the sketch after Table 1). For this, a preselection of the most relevant logistics chains is performed as a first step. Then, the criteria to select a set of logistics chains are defined. Criteria to be considered could be the relative importance of each productive sector with respect to trade volumes, the value of the cargo, and innovation opportunities, among other criteria that may be important to consider. Finally, based on the relative importance of each criterion, the pre-selected logistics chains are evaluated and those with a higher score are selected.

Table 1. Cost categories by transport mode and trade.
Pre-shipment: Costs related to the activities performed for cargo handling prior to its shipment to its final destination, such as packing, labeling, consolidation of cargo and storage of products, and transport of cargo among facilities. This also includes the cost of those activities related to cargo inspection and the certifications required by any public organism.
Shipment to the port of origin: Costs of the land transport from the warehouse of the exporter to the port terminal where it will be transferred.
Port / Airport / Border entry: Costs for cargo handling at the port/airport terminal where the cargo will be loaded to a ship. It also includes all the related costs for delays at the gate and within the terminal.
Customs and control agencies: Costs incurred for Customs and other control agencies for the inspection, control and clearance procedures. It also includes those costs incurred for certifications and inspections required by the customer or the country of destination. For the case of road transport, it also includes those costs incurred at the borders.
Shipment to destination: Costs related to the freight shipment, which could be either by road, air or maritime mode. It also considers any other handling costs as well as insurance.
Inventory and finance: Costs such as in-transit inventory costs, as well as those incurred due to waiting times. Also, additional costs that result from delays or variability in the lead times are considered in this category.
Source: The authors.
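The multicriteria selection in step 1 b) can be sketched as a simple weighted scoring. The following Python sketch is ours; the chains, criteria and weights are hypothetical placeholders, not data from the study:

    # Weighted multicriteria scoring for the pre-selected logistics chains.
    weights = {"trade_volume": 0.5, "cargo_value": 0.3, "innovation": 0.2}
    scores = {  # normalized criterion scores per candidate chain (hypothetical)
        "zinc_concentrate": {"trade_volume": 0.9, "cargo_value": 0.7, "innovation": 0.4},
        "soybeans":         {"trade_volume": 0.8, "cargo_value": 0.5, "innovation": 0.6},
    }
    ranked = sorted(scores,
                    key=lambda c: sum(weights[k] * scores[c][k] for k in weights),
                    reverse=True)
    print(ranked)   # chains with the highest weighted score are selected first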


c) Definition of cost categories. Costs have to be categorized in order to be measured. Table 1 presents a general categorization of costs that may be used, and it should be adapted to the specific considerations of each case study.

d) Identification of the sources of information. This consists in the identification of the main sources of information, both primary and secondary, for data gathering. Primary sources of information consider those relevant stakeholders involved in the export or import process of each logistics chain. An agenda should be planned, with the dates for the interviews and focus groups.

3.3.2. Step 2: Costs, over costs and processes analysis

a) Development of logistics processes mapping. This activity consists in the analysis of the export and import processes of each logistics chain under study, with the aim of determining the potential impacts that these factors and their regulation have with respect to the efficiency of the logistics chains. For this, in-depth interviews should be performed with those stakeholders previously selected from each logistics chain. With the information gathered during the interviews and the secondary information obtained, a logistics processes map should be designed, such that the main physical and information flows are identified with their corresponding costs. Other techniques to model the processes, such as Business Process Analysis (BPA), could also be used.

b) Costs and over costs analysis. A diagnostic of the current situation of each logistics chain may be elaborated, based on the logistics process mapping. The costs and over costs incurred at each stage of each logistics chain must be identified. For this, a base scenario is determined first, whereby the costs of the logistics chain are identified under "normal" operation. Then, the over costs are determined considering other scenarios that can occur (based on in-field observations and interviews) in which inefficient operations result in additional costs (i.e. a late arrival of export cargo for the stacking period at a port that results in an additional fee for the exporter), as well as costs associated with waiting times or delays at each of the stages of the logistics chain (i.e. waiting times at the gate of the port or at an empty container park). This estimation may be computed according to average observations or occurrence probabilities (see the expected-value illustration at the end of this section).

c) Logistics inefficiencies analysis, their sectorial impact and national competitiveness. This activity aims to provide a general analysis of the most relevant inefficiencies observed for the logistics chains under study, and of the current gaps with respect to other economies and best practices. The impacts of those gaps on the competitiveness of foreign trade at a macro level, and the potential benefits obtained by the reduction or elimination of the over costs observed, should be estimated. Recommendations to reduce the gaps identified should be provided and analyzed with the stakeholders involved in the processes, in order to prioritize them and generate a roadmap of the specific actions required to significantly reduce those over costs per product and corridor. This process is not proportional to the scale economies of the potential solutions recommended, and is actually a complex process in which political and social variables may also be considered.

3.3.3. Step 3: Recommendations and proposals for public policies

a) Analysis and recommendations. A summary of the recommendations to improve the current export and import processes, as well as public policy recommendations, should be reported based on the results obtained with the logistics costs analysis, as well as the priorities defined for the specific projects to be implemented and their impacts on national productivity and competitiveness.
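Step 2 b) prices each inefficiency scenario by how often it occurs. In expected-value form, with p_s the observed occurrence probability of scenario s and c_s its additional cost (notation introduced here for illustration, not the authors'):

    E[\text{over cost per shipment}] = \sum_{s} p_s \, c_s

For instance, a late-arrival fee of US$200 incurred on 15% of shipments plus an extra storage day of US$80 incurred on 30% of shipments would contribute 0.15 x 200 + 0.30 x 80 = US$54 of expected over cost per shipment (hypothetical figures).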

4. The Bolivian case study of logistics costs estimation

The Natural Resources and Infrastructure Division of ECLAC has recently analyzed the challenges of the transport systems in landlocked South American countries. In this section, we present the results of the analysis related to estimating the logistics costs of a number of representative mineral natural resources logistics chains in Bolivia [35].

In the logistics chain of zinc concentrate via rail and sea transport, in the pre-shipment stage the concentrate is first sent by road from the mine to the plant (10 km) and from there to the Avaroa railway station (15 km). In the event that the plant has a branch rail line, the road segment runs only from the mine to the plant. Then the concentrate is transported by rail via Ollagüe (on the Bolivian border) to the ports of Mejillones or Antofagasta (Chile), a journey of 650 km. Sea transport then follows from one of these Chilean ports to a port in Japan or the Republic of Korea, lasting between 30 and 35 days.

The inefficiencies in shipping operations in this corridor account for 5.3 per cent of the value. Inefficiencies in customs processing are the most significant contributing factor here, accounting for 2.8 per cent of the total identified; these inefficiencies are the result of delays at border crossings and the significant cost of the certificate of quality issued by the National Service for the Registration and Control of the Sale of Minerals and Metals (SENARECOM), which is equal to 0.5 per cent of the gross value. The second factor, accounting for 1.5 per cent, is associated with the pre-shipment phase, essentially as a result of the poor condition of secondary roads. The third factor, accounting for 0.6 per cent, involves rail transport and is due to the change of locomotives at the border. The collection process shows cost overruns equal to 0.4 per cent as a result of delays in the settlement of payments (bank transfers) attributable to the time required for the exchange of documents between the seller and the buyer.

In the case of logistics chains that use road transport in combination with sea transport, there is a road segment from the mine to the plant in Potosí and from there to the port of departure (Arica, Chile), covering a distance of 806 km, which takes between 2 and 2.5 days. Sea transport then follows from Arica to a port in Japan or the Republic of



Korea, which is a trip lasting between 30 and 35 days. In this case, the inefficiencies identified account for 19.1 per cent. Road transport accounts for 13.4 per cent, which is due to delays resulting from the poor road conditions from the plant to the asphalt road. This segment covers a distance of approximately 50 km. There are further delays at the port of Arica because trucks have to wait in line to offload their cargo, due to the lack of proper planning and scheduling of operations at the port, as well as coordination mechanisms. The rest of the links in the chain contribute in a way that is similar to rail transport.
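As a quick consistency check of the reported shares: for the rail-and-sea chain, the itemized over costs add up to the stated total,

    2.8 + 1.5 + 0.6 + 0.4 = 5.3 \text{ per cent of gross value,}

while for the road-and-sea chain, road transport alone explains 13.4 of the 19.1 per cent identified, leaving 5.7 percentage points for the remaining links.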


5. Discussion and recommendations


Implementing systemic approaches to efficient supply chain management has become one of the main challenges in taking advantage of the business opportunities presented by globalization. In this regard, logistics costs represent the main challenge and play a key role, specifically for developing economies, and especially when areas are remotely located, given that distance has historically been one of the main limitations preventing countries from being integrated into global markets. Logistics costs currently account for a bigger percentage of the total cost of a product than duties and tariffs. This is influenced by the successful implementation of free trade agreements among countries and more open economic policies.

This document presents a framework for logistics costs assessment based on an integral vision that differs from the traditional approaches found in the literature in that State participation, the costs resulting from an inadequate provision of services, and the public entities involved are also taken into consideration. The proposed framework considers a sequential analysis of the foreign trade processes to identify inefficiencies and over costs. Furthermore, a temporal dimension is also incorporated into the analysis. A systematization of the proposed framework can, in the long term, be the basis for econometric studies that may be further implemented to validate the results.

As further research, we propose the application of the proposed methodology to several case studies and the undertaking of benchmark analyses for different countries in order to provide recommendations that may enhance the competitiveness of foreign trade. For instance, it is possible to consider those supply chains that have been characterized in detail in the literature, such as the "Cocoa supply chain" in Colombia [36] or the "Apple supply chain" in Chile [37], extending the preliminary analysis to incorporate the estimation of logistics costs. Another research avenue is the analysis of logistics costs with respect to trade barriers and duties, in order to demonstrate the need to improve current foreign trade processes and take greater advantage of the multilateral and bilateral agreements signed between two or more nations.


Acknowledgments


The authors would like to thank Erik Leal for his valuable contributions to this work. As part of the National Research Network "Sistemas de Transporte y Logística," the authors acknowledge all the support provided by the National Council of Science and Technology of Mexico (CONACYT) through the research program "Redes Temáticas de Investigación." At the same time, we acknowledge the determination and effort of the Mexican Logistics and Supply Chain Association (AML) and the Mexican Institute of Transportation (IMT) for providing us with an internationally recognized collaboration platform, the International Congress on Logistics and Supply Chain (CiLOG).

References

[1] ECLAC, IDB, World Bank, Cómo reducir las brechas de integración. Infraestructura física y costos en el comercio intrarregional. Tercera Reunión de Ministros de Hacienda de América y el Caribe, Lima, Perú, 2010.
[2] NU. CEPAL, OCDE, Perspectivas económicas de América Latina. Logística y competitividad para el desarrollo. [Online], OCDE-CEPAL, Chile, 2013. [date of reference: August 20th of 2014]. Available at: http://repositorio.cepal.org/bitstream/handle/11362/1504/LCG2575_es.pdf?sequence=1
[3] Avelar-Sosa, L., García-Alcaraz, J.L., Cedillo-Campos, M.G. and Adarme-Jaimes, W., Effects of regional infrastructure and offered services in the supply chains performance: Case Ciudad Juarez. DYNA, 81 (186), pp. 208-217, 2014.
[4] Márquez-Ramos, L., Martínez-Zarzoso, I., Pérez-García, E. and Wilmsmeier, G., Determinantes de los costes de transporte marítimos. El caso de las exportaciones españolas. ICE Comercio Internacional y Costes de Transporte, No. 834, España, 2007.
[5] Márquez-Ramos, L., Martínez-Zarzoso, I., Pérez-García, E. and Wilmsmeier, G., Transporte marítimo: Costes de transporte y conectividad en el comercio exterior español, in: González-Laxe, Sánchez, Lecciones de Economía Marítima. Spain: Netbiblo, 2007, pp. 105-144.
[6] Martínez-Zarzoso, I. and Wilmsmeier, G., Trade responses to freight rates: The case of intra Latin-American maritime trade, in: Fosgerau, de Palma, Marcucci, Niskanen and Verhoef, Special Issue: Transport and Urban Economics, of the 4th Kuhmo-Nectar Conference, Copenhagen, 48, pp. 24-46, 2011.
[7] Martínez-Zarzoso, I. and Wilmsmeier, G., International transport costs and the margins of intra-Latin American maritime trade. Aussenwirtschaft, 65 (1), pp. 49-72, 2010.
[8] Wilmsmeier, G. and Martínez-Zarzoso, I., Determinants of maritime transport costs – A panel data analysis for Latin American containerised trade. Transportation Planning and Technology, 33 (1), pp. 105-121, 2010. DOI: 10.1080/03081060903429447
[9] Sánchez, R.J., Hoffmann, J., Micco, A., Pizzolotti, G., Sgut, M. and Wilmsmeier, G., Port efficiency and international trade: Port efficiency as a determinant of maritime transport cost. Maritime Economics and Logistics, 5 (2), pp. 199-218, 2003. DOI: 10.1057/palgrave.mel.9100073
[10] Márquez-Ramos, L., Martínez-Zarzoso, I., Pérez-García, E. and Wilmsmeier, G., Maritime transport costs: Importance of connectivity measures. Ingeniería y Desarrollo, 2006.
[11] Cipoletta-Tomassian, G., Pérez-Salas, G. and Sánchez, R.J., Políticas integradas de infraestructura, transporte y logística: Experiencias internacionales y propuestas iniciales. Serie Recursos Naturales e Infraestructura, 50, CEPAL, 2010.
[12] Lupano, J., Developing integrated and sustainable policies on infrastructure, logistics and mobility in Mesoamerica. Boletín FAL, 319 (3), 2013.
[13] Leal, E., Costos logísticos: Revisión de literatura y propuesta metodológica para su cálculo en cadenas exportadoras. Working Paper.
[14] González, J.A., Guasch, J.L. and Srebrisky, T., Improving logistics costs for transportation and trade facilitation. Policy and Research Working Paper 4558, Washington D.C., The World Bank, Latin America and Caribbean Region, Sustainable Development Department, 2008.


[15] Bowersox, D.J. and Calantone, R.J., Global logistics. Journal of International Marketing, 6 (4), pp. 83-93, 1998.
[16] Bowersox, D.J., Calantone, R.J. and Rodrigues, A.M., Estimation of global logistics expenditures using neural networks. Journal of Business Logistics, 24 (2), pp. 21-36, 2003. DOI: 10.1002/j.2158-1592.2003.tb00044.x
[17] Rodrigues, A.M., Bowersox, D.J. and Calantone, R.J., Estimation of global and national logistics expenditures: 2002 data update. Journal of Business Logistics, 26 (2), pp. 1-15, 2005. DOI: 10.1002/j.2158-1592.2005.tb00202.x
[18] Nordås, H.K., Pinali, E. and Geloso-Grosso, M., Logistics and time as a trade barrier. OECD Trade Policy Working Papers, OECD Publishing, [Online], No. 35, 2006. [date of reference: November 12th of 2014]. Available at: http://www.oecd-ilibrary.org/trade/logistics-and-time-as-a-trade-barrier_664220308873. DOI: 10.1787/664220308873
[19] Hausman, W.H., Lee, H.L. and Subramanian, U., Global logistics indicators, supply chain metrics and bilateral trade patterns. Fifth draft, The World Bank, 2005. DOI: 10.1596/1813-9450-3773
[20] Hummels, D., Time as a trade barrier. Thesis, Purdue University, USA, 2001.
[21] Djankov, S., Freund, C. and Pham, C., Trading on time. The World Bank, 2006. DOI: 10.1596/1813-9450-3909
[22] Tongzon, J., Determinants of competitiveness in logistics: Implications for the ASEAN Region. Maritime Economics & Logistics, 9, pp. 67-83, 2007. DOI: 10.1057/palgrave.mel.9100172
[23] Jöreskog, K. and Moustaki, I., Factor analysis of ordinal variables with full information maximum likelihood, [Online], 2006. [date of reference: November 12th of 2014]. Available at: http://www.ssicentral.com/lisrel/techdocs/orfiml.pdf
[24] Heskett, J.L., Business logistics, physical distribution and materials management, 2nd ed. New York: Ronald Press Co., 1973.
[25] Wilson, R., 22nd Annual state of logistics report. Cass Information Systems, Inc., 2011.
[26] Zinn, W., Marmorstein, H. and Charnes, J., The effect of autocorrelated demand on customer service. Journal of Business Logistics, 13 (1), pp. 173-192, 1992.
[27] Haartveit, E.Y., Kjøstelsen, L. and Jacobsen, B.S., Time is money – Quantifying logistics cost by measuring time. The Norwegian Forest and Landscape Institute, Norway, 2007.
[28] Vernimmen, B., Dullaert, W. and Engelen, S., Schedule unreliability in liner shipping: Origins and consequences for the hinterland supply chain. Maritime Economics & Logistics, 9, pp. 193-213, 2007. DOI: 10.1057/palgrave.mel.9100182
[29] Everaert, P., Bruggeman, W., Sarens, G., Anderson, S.R. and Levant, Y., Cost modeling in logistics using time-driven ABC: Experiences from a wholesaler. International Journal of Physical Distribution & Logistics Management, 38 (3), pp. 172-191, 2008. DOI: 10.1108/09600030810866977
[30] Roorda, M.J., Cavalcante, R., McCabe, S. and Kwan, H., A conceptual framework for agent-based modelling of logistics services. Transportation Research Part E: Logistics and Transportation Review, 46 (1), pp. 18-31, 2010. DOI: 10.1016/j.tre.2009.06.002
[31] Haartveit, E.Y., Kjøstelsen, L. and Jacobsen, B.S., Time is money – Quantifying logistics cost by measuring time. The Norwegian Forest and Landscape Institute, 2007.
[32] Lambán, M.P., Royo, J., Valencia, J., Berges, L. and Galar, D., Modelo para el cálculo del costo de almacenamiento de un producto: Caso de estudio en un entorno logístico. DYNA, 80 (179), pp. 23-32, 2013.
[33] UN-ESCAP, Study on commercial development of regional ports as logistics centres, 2003.
[34] Lee, S.-H. and van Wyk, J., National institutions and logistic performance: A path analysis. Service Business, Article in Press. DOI: 10.1007/s11628-014-0254-x
[35] Pérez-Salas, G., Sánchez, R.J. and Wilmsmeier, G., Status of implementation of the Almaty Programme of Action in South America. ECLAC, Natural Resources and Infrastructure Series, [Online], No. 167, 2014. [date of reference: November 12th of 2014]. Available at: http://www.cepal.org/publicaciones/xml/0/53920/StatusofImplementation.pdf

[36] García-Cáceres, R.G., Perdomo, A., Ortiz, O., Beltrán, P. and López, K., Characterization of the supply and value chains of Colombian cocoa. DYNA, 81 (187), pp. 30-40, 2014. DOI: 10.15446/dyna.v81n187.39555
[37] Lopez-Campos, M., Bearzotti, L., González-Ramírez, R.G. and Cannella, S., Modelado y análisis de la cadena logística de exportación de manzanas chilenas. Proceedings of the XVII ALIO-SMIO Conference on Operations Research, 2014.

G. Pérez-Salas, is an Economic Affairs Officer in the Natural Resources and Infrastructure Division at the United Nations Economic Commission for Latin America and the Caribbean (ECLAC). He is an Informatics Civil Engineer with a MSc. degree in Maritime and Port Management, and he has over 14 years of experience in the United Nations System. Since 2008, he has been working for the Infrastructure Services Unit at UN-ECLAC on strengthening the technical and institutional capacity of Latin American and Caribbean countries to foster the sustainable management of infrastructure services, in particular in the areas of logistics, surface transport, urban mobility, road safety and port management. He has published 49 studies, including 3 book chapters, 8 papers in refereed proceedings, 6 institutional working papers, 28 articles in specialized magazines and 4 occasional papers, all of them related to infrastructure services issues. Ten of these publications are in English and the rest in Spanish, and they were published in countries such as Argentina, Chile, Colombia, Ecuador, Mexico, the United States of America, Spain and the United Kingdom. He has participated in more than 50 technical cooperation assistance activities, multi-agency and intergovernmental meetings, and technical seminars.

R.G. González-Ramírez, is a Professor-Researcher in the Industrial Engineering School of the Pontificia Universidad Católica de Valparaíso, Chile. She holds a BSc. degree in Industrial Engineering from the Technological Institute of Morelia, Mexico, a MSc. degree in Industrial Engineering from Arizona State University, USA, and a MSc. degree in Quality and Productivity Systems and a PhD in Engineering Sciences from Monterrey Tech, Mexico. She is currently part of the National Research System in Mexico. Her research areas are related to supply chain management and port logistics, including optimization models and algorithms, as well as trade facilitation, competitiveness studies and sustainability issues, including corporate social responsibility and port-city relationships. Other research areas include supply chain network design, districting and territory design, lot sizing, metaheuristics and mathematical programming. For the past 5 years, she has been working on applied research projects with several grants from the Chilean government. She is the author of several scientific publications in indexed journals, specialized magazines and book chapters, and has participated in several seminars and international conferences.

M.G. Cedillo-Campos, is a Professor-Researcher in Logistics Systems Dynamics at the Mexican Institute of Transportation (IMT), Mexico. He is the Founder and President of the Mexican Logistics and Supply Chain Association (AML). Dr. Cedillo-Campos is acknowledged as a National Researcher level 1 by the National Council of Science and Technology in Mexico (CONACYT). In 2004, he received a PhD in Logistics Systems Dynamics from the University of Paris, France. In 2012, he was recognized by the Autonomous University of Nuevo Leon (UANL), Mexico, with the Innovation Award, and he collaborated as a speaker at Georgia Tech Panama. Dr. Cedillo is a member of the Mexican Academy of Systems Sciences and the Scientific Chairman of the International Congress on Logistics and Supply Chain (CiLOG) organized by the AML. He is the author of several scientific publications in top journals such as Transportation Research Part E: Logistics and Transportation Review, Simulation, Computers and Industrial Engineering, Applied Energy, Computers in Industry, and Journal of Applied Research and Technology, among others.



The role of sourcing service agents in the competitiveness of Mexico as an international sourcing region

María del Pilar Ester Arroyo-López a & José Antonio Ramos-Rangel b

a Departamento de Ingeniería Industrial, Escuela de Ingeniería, Instituto Tecnológico de Monterrey, Campus Toluca, Mexico. pilar.arroyo@itesm.mx
b Posgrado en Ingeniería Industrial, Escuela de Ingeniería, Instituto Tecnológico de Monterrey, Campus Toluca, Mexico. joseantonio.ramos@itesm.mx

Received: January 28th, 2015. Received in revised form: March 26th, 2015. Accepted: April 30th, 2015.

Abstract

The purpose of this work was to explore and define the sourcing services of Mexican third parties in order to provide a better understanding of how they contribute to the attractiveness of the country as a low-cost production region. Given the exploratory nature of this research, the case study was the research method selected to collect relevant information. Two Mexican companies associated with global supply chains of different types (product-driven and buyer-driven) were selected as representative cases. Primary information was collected through in-depth personal interviews, site visits and secondary documents. The analysis of the two cases allowed the determination of the supplier governance structure and the assessment of the third parties' contribution to the integration of local suppliers into global supply chains (GSC). In addition, the analysis establishes the value that these outsourcing services represent for international buyers.

Keywords: Global supply chains; international sourcing; third parties; supplier governance.

El papel de los proveedores de servicios de abasto para la competitividad de México como una región de abastecimiento internacional Resumen El objetivo de este trabajo fue el explorar y definir los servicios de abastecimiento de terceras partes mexicanas para proporcionar un mejor entendimiento en relación a cómo estos agentes contribuyen al atractivo del país como una región de producción de bajo-costo. Dada la naturaleza exploratoria del estudio, el estudio de caso fue el método cualitativo de investigación elegido. Dos compañías mexicanas asociadas a cadenas de suministro global de diferente tipo –impulsada por el producto e impulsada por el comprador- fueron seleccionadas como casos representativos. La información primaria fue recolectada a través de entrevistas personales a fondo, visitas en sitio y documentos secundarios. A partir del análisis de los dos casos se determina la estructura de gobierno en la cadena, la contribución que realizan las terceras partes a la integración de los proveedores locales a cadenas de suministro globales (CSG) así como el valor que sus servicios de tercerización del abasto representa para los compradores internacionales. Palabras clave: Cadenas de suministro global; abastecimiento internacional; terceras partes; gobernabilidad de proveedor.

1. Introduction

The North American Free Trade Agreement (NAFTA) contributed to increasing export activities in Mexico and facilitated the entrance of global firms seeking opportunities to decrease their production costs.

NAFTA and the geographical location of Mexico improved the attractiveness of the country as a low-cost sourcing region in comparison with other alternatives, such as China and countries located in East Asia or Eastern Europe. However, in many cases, the lower logistics and labour costs may not compensate for the transaction costs of selecting, monitoring and managing relations with local suppliers, governmental agencies and unions.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 93-102. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.51160



To overcome this difficulty, third parties that are able to coordinate the activities of multiple local suppliers have emerged to facilitate sourcing from Mexico. International traders serving clothing and apparel firms in East Asia and Eastern Europe, as well as contract manufacturers that produce whole products or components on behalf of Original Equipment Manufacturers (OEM) in the electronics sector, are examples of such third parties [1-5]. The portfolio of services of these advanced sourcing service firms is not limited to the coordination of multiple and highly interdependent manufacturing activities. Additional activities performed by these agents include: the co-design of new products, the transference of designs to manufacturing, the quality assessment of production, the assistance provided to suppliers to satisfy the performance criteria defined by the buyer, the delivery of orders to the buyer and the distribution of final products through different channels. Gereffi [6] categorized global supply chains (GSC) as buyer-driven or product-driven, depending on the profile of the focal or leading firm, i.e., the organization with a substantial influence over the other participants in the chain. For buyer-driven GSC, the focal firms are large retailers, distributors and brand owners. The core capabilities of these leading firms are not manufacturing but design, merchandising and distribution; therefore, an increasing number of them relinquish control over manufacturing and logistics to third parties. The leading company focuses on its core competences while the third parties manage production. These entities are also recognized as focal firms because they manage all the supply chain activities required to produce goods that are labour-intensive [7]. Typical buyer-driven GSCs are those of the apparel, shoe and clothing industries. In contrast, the focal firms of product-driven GSCs are OEMs that perform the chain's most valuable activities, namely the design and production of complex goods that are more capital and technology intensive. This second type of chain is characterized by a more hierarchical structure and higher barriers of entry to suppliers, particularly for those sourcing high-tech components. In a product-driven chain, the focal firm usually sustains close relationships with critical suppliers, while relations with manufacturers of parts and basic components are solely transactional. This type of chain is representative of the automotive and electronics industries. The focal firms of both types of GSCs are interested in international sourcing as a means to sustain their competitive advantage through lower costs, high product quality, shorter product development and delivery times and the flexibility to respond to demand changes [8,9]. For Latin American industries, specifically those located in Colombia, the previous elements explain 63% of the variance in their competitiveness [10]. However, international sourcing represents a challenge because of the geographical dispersion and diversity of suppliers. Therefore, focal firms rely on third parties to implement an effective supplier management program able to identify and mitigate the risks associated with disruptions, bankruptcy and the unsatisfactory quality of subcontracted suppliers. Bitran et al. [11] discussed the potential contribution of neutral third parties to the integration of GSC by introducing the concept of "mini-maestro", an entity that controls a particular segment of the complete chain and functions as the unique link between the focal firm and its suppliers.

Depending on the type of GSC and the required control over suppliers, sourcing agents may select distinct forms of supplier governance, as well as different strategies to integrate processes and to improve productivity and the relationships with participating suppliers; information and communication technologies are critical enablers of this supply chain integration [12]. For the product-driven chain, where tighter controls are required, insourcing is the preferred procurement strategy. This is illustrated by the case of the manufacturing contractor Flextronics, which produces complex goods for companies in the electronics sector; these products are manufactured by Flextronics' suppliers in the industrial parks administered by this third party. A contrasting case is Li & Fung, a Hong Kong-based company that coordinates all the production and logistics activities of the textile and apparel chains of leading brand-owners. This trader does not own production facilities, but supplies the orders of international buyers by capitalizing on its knowledge of the manufacturing capabilities of Asian suppliers and on the relationships established when working as a trading broker. Whatever the supplier governance structure chosen by the sourcing agents, it should effectively identify problems and non-compliance incidents that may affect supply performance and generate disruptions of product and information flows. After analysing which factors contribute more to the efficiency of the sourcing process, a third party must decide whether to focus on improving coordination with suppliers or on increasing the efficiency of the sourcing transactions (identification, selection and control of suppliers) [13]. The objective of this study was to explore in detail how sourcing service companies operating in Mexico create value for buyers, facilitate the integration of local suppliers into a GSC and contribute to the competitiveness of the country as a global sourcing region. To attain this objective, a qualitative research approach was used: two case studies were analysed to obtain in-depth insight into the mechanisms used by sourcing agents to assure a continuous flow of quality products to multinationals sourcing from Mexico. The organization of the article is as follows. The first section discusses the concepts of supply chain integration and the role of third parties in global sourcing. The next section outlines the methodology used to collect information regarding the activities performed by the third parties responsible for sourcing, followed by a description of these activities. Thereafter, research findings from the case analysis are presented as a set of research propositions accompanied by a detailed discussion. The final section of the paper provides general conclusions and uncovers the contribution of sourcing agents to the competitiveness of Mexico as a sourcing region.

2. Theoretical background



Third parties are recognized as important members of the supply chain that conduct multiple operational, administrative and manufacturing activities commonly performed internally by multinationals. Increasingly often, buying firms outsource or delegate complete supply chain processes, such as the administration of central warehouses or the manufacturing of complete products, with the objective of reducing their operational costs, obtaining specialized support or increasing their service level [14]. The demand for more valuable and integrated services has backed the advancement of the outsourcing market. In their study of the contribution of third-party logistics providers (3PLs) to supply chain integration, Fabbe-Costes et al. [15] conclude that the neutrality of 3PLs and their abilities to understand customers' needs and adapt their services to buyers' demands are critical for the coordination of any SC process. These authors note the need to better understand the role of 3PLs in the integration and performance of the supply chain, and the importance of recognizing them as pro-active "actors" in the chain, not just as "tools". According to Masson et al. [4] and Bitran et al. [11], trading agents and contract manufacturers are examples of SC coordinators that support global sourcing and contribute to the operational efficiency, flexibility and responsiveness of global supply chains. In the specific case of traders, their intangible knowledge-based capabilities, in addition to their ability to respond to the hyper-dynamism of the business environment, facilitate the development of a relationship network that could play a significant role in integrating the fragmented activities of the supply chain [1,4,16]. However, the study of these types of third parties has mainly focused on the value they deliver to their customers, with a limited understanding of the traders' contribution to the attractiveness of a sourcing region. Supply chain management (SCM) has been conceptualized as the integration of business processes across independent firms to provide products, information and services that create value for the final customer [4]. The final purpose of supply chain management is the achievement of sustainable competitive advantage through the seamless coordination of business processes (connectivity) and the elimination of process redundancies (simplification) [18]. Integration of the chain may be attained at different levels and entail different actions [19]. Internal or cross-functional integration requires changes in the organizational structure as well as in key performance indicators at the firm level, whereas external integration demands collaboration and the synchronization of processes and decisions among the SC's participants [20]. The concept of supply chain integration (SCI) is broad and multi-dimensional: it requires the coordination of products, processes and information flows, the alignment of individual objectives and the distribution of responsibilities within and across organizational boundaries [21]. Supply chain integration is influenced by several factors related to demand uncertainty and the complexity of the business environment. Simple conditions (low competition, high production volumes of standardized products and make-to-stock) require low integration, whereas complex conditions (strong competition, high product variety, small batches and make-to-order) require high levels of inter-organizational integration [18].

Alternative forms of business activity coordination may be employed to achieve SCI, given the business environment and the capabilities of leading firms. For example, Cao et al. [5] identified three different coordination structures for textile-apparel chains: vertical integration, an efficiency-oriented chain and a third-party (3P) hub chain. In the first two cases, leading or original design manufacturers that own and run manufacturing facilities worldwide figure as coordinators of the chain. In the 3P-hub, however, an external firm assumes the responsibility of integrating and simplifying all supply chain activities to provide finished goods to dominant retailers and brand owners [4,5]. These coordination models match the sourcing methods identified by Mihm [22] in the complex business context of fast fashion. The first method is supported by a fully vertically integrated chain, where the focal firm conducts the core activities of design, manufacturing of products in firm-owned factories, and logistics. The second method, known as house branding, implies control of design and distribution while subcontracting manufacturing. Under this hybrid coordination scheme, a retailer/distributor partnership leads the chain; efficiency is attained because both parties, the retailer and the manufacturer, combine their expertise and capabilities to span all the chain's activities. The third method is full outsourcing, whereby an expert third party controls all activities from design to product delivery to customers and may even manage final distribution. Under this scheme, the third party assumes the responsibility of coordinating all the sourcing processes, including the management of suppliers [13]. The previous coordination structures and associated modes of production subcontracting imply different degrees of decision-making centralization. Vertical integration entails centralized and unidirectional decision making, whereas the other options (house branding focused on efficiency, and the third party) involve synchronized decision making across different lines of organizational authority and responsibility. It has been suggested that this last type of integration may work better than hierarchical structures, provided that the coordinating firm is able to persuade other firms to collaborate via shared benefits and trust [17]. Particularly under full outsourcing, trust acts as a substitute for hierarchical control, facilitating relationships, reducing opportunism and transaction costs, improving the performance of the buyer-supplier dyad and increasing the commitment and loyalty of the suppliers [23,24]. Collaborative supplier-buyer relations are characterized not only by trust but also by different degrees of interdependence [25]: asymmetric, symmetric and no perceived interdependence. Asymmetric interdependence resembles a hierarchical relation whereby the power imbalance leads to unilateral governance, centralized decision making and low investment in the relationship. Conversely, symmetric interdependence implies that both parties are equally dependent on each other; this power balance encourages a fair relationship that contributes to knowledge and resource sharing, increases commitment, and facilitates collaborative integration [16,21,25,26]. No interdependence occurs when neither party perceives that it depends on the other; under this scheme, the relationship may be more competitive than cooperative.



Sourcing service providers may use different types of control, in addition to trust, to complement their supplier governance strategy. The control options are output, behavioural or social control. Output control is based on the evaluation of a supplier's performance; behavioural control implies monitoring a process to ensure it is appropriately performed; and social control refers to the development of shared values, beliefs and goals [26]. Authors such as Wathne and Heide [27] conclude that output control is useful to ascertain certain aspects of the supplier (quality, reliability and financial strength) in order to make selections, whereas behavioural control not only ensures performance but also contributes to knowledge sharing and mutual learning [26]. Although there are advantages to the development of cooperative and close relationships, such relationships also imply higher management costs, which is why authors such as Gadde and Snehota [28] argue that close relationships should not be promoted with all suppliers; it is more appropriate to sustain several types of relationships depending on the expected benefits. This explains why leading firms in the SC choose to establish collaborative relationships only with strategic suppliers (tiers 1 or 2). Furthermore, leading firms may choose to delegate the management of relations with lower-tier suppliers to a third party while maintaining a close relationship with the sourcing service provider. This firm, in turn, should determine what types of relationships to sustain within its network of suppliers.

3. Methodology

This research examines in detail how third parties coordinate the activities of local suppliers to serve multinational firms sourcing from Mexico. Given the exploratory nature of this work, the case study was chosen as the research methodology. The flexibility of this qualitative approach is appropriate to explore in detail how a sourcing service provider contributes to the integration of local suppliers into GSC and to the attractiveness of the country as a sourcing region. The case study method is desirable when "how" or "why" questions are posed regarding a complex issue or object over which the researcher has no control [29]. When investigating events that may have little theoretical background, the researcher may select a few cases representing the phenomenon to generate theoretical propositions with the potential for generalizability. In this research, two cases were selected to elicit the practices and value generated by sourcing service providers operating in two contrasting supply chain frameworks [6]; the role and scope of services of these third parties may vary significantly because of the distinctive characteristics of buyer-driven and product-driven supply chains. For the buyer-driven supply chain, Aztex Trading, a trader with recognized capabilities to provide full-package production (the sourcing of final products, not only the assembly of garments) to leading firms in the apparel industry, was selected. For the product-driven supply chain, American Industries was identified as a third party that supplies secondary raw materials, supplementary products (commodities), and administrative and trading services to international firms in the aerospace, electronics, automotive and medical device sectors.

Executives of both firms were contacted by phone, and a personal interview at the company headquarters was requested. Data were collected through a combination of site visits and semi-structured interviews with key informants. The interviews lasted 2-4 hours and were recorded with the previous authorization of the interviewees. Additional information regarding each company was collected through the observation of procedures and dialogues during on-site visits, as well as through the analysis of secondary sources: web pages, open documents and internal reports provided by the companies. These documents, in addition to transcripts and notes taken during the visits, were relevant to ensure the reliability of the cases. The interview guide was designed to capture: a) the extension and depth of the activities performed by the two sourcing service companies, b) how they manage their supplier base, c) how they create value for suppliers and customers, and d) how they facilitate international sourcing. In accordance with Balabanis [30], two types of activities were considered: transaction and physical-fulfilment services. Transaction services include all activities related to the generation of foreign demand, such as product design, the development of relationships with international buyers, the promotion of regional suppliers and the negotiation of agreements between suppliers and buyers. Physical-fulfilment services refer to activities required to satisfy an order, such as production planning, the elaboration of export/import documents and invoices, export packaging, warehousing and freight transportation. A third type of service, related to manufacturing control, was also included: the identification and selection of suppliers, the sourcing of raw materials, the monitoring of production, quality control and supplier relation management.
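As a purely illustrative aid, this three-way classification can be written down as a simple data structure against which interview-reported activities are coded. The category keys follow the text above; the classify helper and the activity strings are hypothetical coding conventions, not part of the original research instrument.

# Service taxonomy used to organize the interview data (categories from
# the text; the sample activities listed here are illustrative only).
SERVICE_TAXONOMY = {
    "transaction": [
        "product design",
        "development of relationships with international buyers",
        "promotion of regional suppliers",
        "negotiation of supplier-buyer agreements",
    ],
    "physical_fulfilment": [
        "production planning",
        "elaboration of export/import documents and invoices",
        "export packaging",
        "warehousing",
        "freight transportation",
    ],
    "manufacturing_control": [
        "identification and selection of suppliers",
        "sourcing of raw materials",
        "monitoring of production",
        "quality control",
        "supplier relation management",
    ],
}

def classify(activity: str) -> str:
    """Return the service category of a reported activity, or
    'unclassified' if it does not match the taxonomy."""
    for category, activities in SERVICE_TAXONOMY.items():
        if activity in activities:
            return category
    return "unclassified"

print(classify("quality control"))  # -> manufacturing_control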

4. Description of cases

CASE 1. The first third party analysed, Aztex Trading, is defined by its executives as a "service and knowledge company" that provides customers with a full-service package, from in-sourcing to the delivery of final products to the customer's warehouse. This trading company has been in operation for 20 years and has a staff of approximately 40 people working at 16 offices located around the country (Mexico City, North, Central and Southeast Mexico). According to the company founder, Aztex was initially regarded as a "supervisor of production" on behalf of leading brand owners, because the company's services were limited to: 1) monitoring that subcontracted suppliers produced the garments according to the designs provided by the retailer, and 2) preparing the documentation for customs clearance. However, when international customers decided to source from Asia at the beginning of 2000, Aztex radically changed its strategy to serve large and prestigious local retailers and brand owners. These customers were not seeking low prices and large volumes of commodity products, but prime-quality clothes with good "fitness" (well-shaped clothes adjusted to local taste). Eventually, new international buyers also became Aztex's customers. The trader then assumed the responsibility of providing, within strict reliability standards, complete product lines according to the buyer's guidelines.



The first activity performed when a customer decides to use Aztex's services is the definition of the season catalogue that will guide sourcing and production. The fabrics and designs are defined by the buyer with the trader's advice; the trader recommends local fabrics available in its textile directory. All fabrics (domestic and imported) selected by the customer are sourced by the trader. The interaction with customers is intense during this design phase, but once customers have selected the season's catalogue, they rely completely on the trader to complete production. "That's why they contract us, it's our job", stated the trader's executives. Once the catalogue is defined, Aztex selects the suppliers that will source/produce the textiles and produce the garments. The selection is made considering the production capabilities of the suppliers. Then, the contract conditions (delivery time, quality standards and prices) are settled without the participation of the customer. When new designs are to be produced, Aztex's personnel support suppliers throughout the design transference phase to guarantee strict conformity with the product catalogue. Personal communication is sustained with the selected suppliers through multiple visits by Aztex's supervisors, resulting in behavioural control. Given the fragmentation of the textile and garment chains in Mexico, semi-finished garments and individual pieces of an outfit (e.g., trousers and jackets) are produced by a particular supplier and picked up by the trader, which is in charge of the administration of the product and information flows related to a particular order. The trader's services portfolio is described in detail in the first column of Table 1. Based on this description, the value these activities represent to customers and suppliers is inferred and described in the second column of the table.

CASE 2. The second third party analysed, American Industries (AI), is defined by its executives as "a knowledge intensive business services company" that facilitates production in Mexico by "easing the landing of international companies". AI's headquarters are located in Chihuahua City, but the third party has offices in the main industrial zones of Mexico, including Querétaro, Guadalajara, Irapuato, Monterrey, Torreón, Chihuahua, Ciudad Juárez, Delicias and Laredo. Additionally, the company has the resources to open new offices in other zones, depending on the sourcing preferences of its customers. Currently, AI has 47 customers, all of them leading firms of product-driven global supply chains. Among these customers, which operate in the aeronautics, automotive, electronics and medical device sectors, Cessna, Embraer, Electrolux, Dana and Federal Mogul stand out. AI offers "shelter services" to multinational companies that decide to produce in Mexico. AI's portfolio of services is extensive and includes assistance in selecting the best production sites based on current incentives for foreign investment, the competitiveness of the regional supplier base, the infrastructure supporting production, and logistics costs [31]. Additional services include the acquisition of land and the building of industrial facilities; the recruitment of personnel at all levels; business process outsourcing (BPO) services; and the selection of suppliers of supporting services (catering, cleaning, facility maintenance and surveillance) and indirect materials (lubricants, uniforms and safety equipment).

Table 1. Value-added services provided by Aztex Trading.

MARKETING INTELLIGENCE
Portfolio of services: Informal marketing research to identify global fashion trends. Tracking of demands of leading customers. Information about local fashion trends and customers' reactions to new products by playing "mystery shopper".
Benefits to suppliers: Global market information about fashion trends. Promotion of new fabrics with customers.
Benefits to customers: Local market information about product acceptance/rejection. Local availability of fabrics.

SELECTION OF SUPPLIERS
Portfolio of services: Continuous search for qualified manufacturers of fabrics and garments, and "finishers". Judgmental selection of suppliers in terms of their production abilities.
Benefits to suppliers: (Informal) evaluation and feedback that suppliers may use to correct performance.
Benefits to customers: Identification and selection of skilled suppliers.

DESIGN & MATERIALS SOURCING
Portfolio of services: Collaboration with the customer to define the catalogue. Maintenance of a fabrics catalogue. Sourcing of textiles and trimmings. Identification/selection of manufacturers that finish the fabrics. Preparation of prototypes.
Benefits to suppliers: Assistance during the transference from design to production. Inbound sourcing managed by the trader. Recommendation of production techniques to keep pace with fashion trends.
Benefits to customers: Definition of the seasonal catalogue. Sourcing of all raw materials delegated to the trader.

PRODUCTION PLANNING
Portfolio of services: Simultaneous (several seasons) planning of production according to forecasts. Inventory management of fabrics. Assurance of production volumes to fulfil demand through the reservation of suppliers' capacity according to forecasts. Assignment of production loads to suppliers.
Benefits to suppliers: Availability of fabrics. Reservation of capacity guarantees workloads before the beginning of the season.
Benefits to customers: Production planning delegated to the trader.

PRODUCTION
Portfolio of services: Production management. Quality control. Integration of the supply chain (textile, assembly and finishing firms).
Benefits to suppliers: Supervision relevant to support continuous quality improvement.
Benefits to customers: Production control delegated to the trader. Management of product flows. Supplier control.

ORDER MANAGEMENT
Portfolio of services: Verification of final order completeness, quality and composition. Order assembly by consolidation of the production of multiple suppliers. Order delivery to the customer. Preparation of all documents required for exportation.
Benefits to suppliers: Consolidation and delivery of completed orders is the trader's responsibility.
Benefits to customers: Order management delegated to the trader.

LOGISTICS ACTIVITIES
Portfolio of services: Selection and management of transportation firms. Transportation scheduling of in-process and finished products. Supplier and customer relation management.
Benefits to suppliers & customers: The trader is in charge of the transportation of in-process and finished products and the management of product flows.

Source: The authors
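Read as a process, the Aztex engagement summarized in Table 1 is essentially a sequential pipeline in which each stage has a distinct owner and control mechanism. The sketch below restates the case narrative as data; the stage names, owners and control labels paraphrase the text and are not terminology used by the company.

# The Aztex order cycle as reported in Case 1, stage by stage.
# Each tuple: (stage, party that leads it, control mechanism involved).
AZTEX_ORDER_CYCLE = [
    ("Season catalogue definition", "buyer + trader", "collaboration"),
    ("Fabric sourcing (domestic and imported)", "trader", "inbound sourcing"),
    ("Supplier selection", "trader", "judgment on production capabilities"),
    ("Contract settlement (time, quality, price)", "trader", "output control"),
    ("Design transference to suppliers", "trader + suppliers", "on-site support"),
    ("Production with supervision visits", "suppliers", "behavioural control"),
    ("Order consolidation and verification", "trader", "output control"),
    ("Delivery to customer warehouse", "trader", "logistics management"),
]

for i, (stage, owner, control) in enumerate(AZTEX_ORDER_CYCLE, start=1):
    print(f"{i}. {stage:45s} led by {owner:18s} ({control})")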




In sum, AI is able to provide all the administrative transactions required to transfer production operations to Mexico. To deliver this broad portfolio of services, AI relies on its employees but also on external suppliers. AI's policy is: "international companies have the know-how and expertise on production. They are not seeking strategic suppliers in the country, they ask theirs to move to Mexico, but they still need suppliers of many indirect materials and services, and we can find them". Currently, AI has a base of approximately 8,000 suppliers from which to select the most qualified according to the customers' needs. Suppliers in the AI directory are continuously evaluated in terms of product quality, reliability and delivery times; customers provide important feedback to assess each supplier's performance. Some suppliers are asked to be certified in order to be included in the directory; for example, transportation companies are required to hold the Customs-Trade Partnership Against Terrorism (C-TPAT), Business Alliance for Secure Commerce (BASC) and "Nuevo Esquema de Empresas Certificadas" (NEEC) certifications to guarantee the security of the shipments.

None of the suppliers subcontracted by AI are directly involved in manufacturing activities; therefore, according to Mexican laws, they are not considered employees of the international firm. This outsourcing scheme simplifies administrative processes and permits the inclusion of new suppliers and the easy replacement of those with poor performance, resulting in close behavioural control.

5. Analysis and discussion of results

The analysis of the cases is summarized in the form of three theoretical propositions.

Proposition 1. To advance supply chain integration, a sourcing agent needs to combine different modes of coordination: logistics synchronization, organization of production, information management, and incentive alignment. This integration is a requirement for full outsourcing and for the outsourcing of strategic purchases.

This proposition is stated from the evidence of the first case and in contrast with the second case. Aztex's top management indicated that garment assembly (maquila) at low cost is no longer an order winner. International buyers expect to source complete product lines without needing to negotiate with multiple suppliers or to supply raw materials (textiles and accessories). Therefore, the trader needs to coordinate production plans and the movement of semi-finished and finished products across several production units. However, this coordination of product and information flows is performed without simplifying processes or synchronizing decisions [18]. The main role of the sourcing company in the first case is as an integrator of a flexible supply chain organized according to a specific customer order [4]. To attain product flexibility, Aztex follows a strategy based on postponement: "the same textile can be dyed or finished in distinct forms to have a large variety of fabrics, and something similar can be done with a pattern to produce outfits with different designs". To attain volume flexibility, the trader relies on the abilities of the suppliers to accommodate several orders given a reservation schedule negotiated well before the beginning of the season. With respect to the four coordination modes, the first one, logistics synchronization, is centralized by the trader and executed by subcontracted transportation services. Aztex requires moving in-process products nationwide and delivering finished products to customers' warehouses in Mexico; therefore, there is no need to coordinate activities with the producers. With respect to information integration, the trader provides suppliers with selective information regarding market trends and sustains continuous communication with them during the production phase. Information technologies are not used to enhance visibility (e.g., the production status of other assemblers) or to facilitate communication with suppliers [12], because personal interaction is perceived as more cost-beneficial and convenient to promote collaboration.

Table 2. Value-added services provided by American Industries.

FACILITY LOCATION
Portfolio of services: Identification and selection of industrial parks. Acquisition or lease of land. Building of facilities, including production facilities and warehouses.
Benefits to customers: Selection of the best production sites. Management of all (administrative, operative and building) activities required to set up the industry.

HUMAN RESOURCES
Portfolio of services: Recruitment of executive and operative personnel. Management of labour contracts and negotiation of employment benefits. Management of payroll.
Benefits to suppliers: Acquisition of new accounts for legal and accounting services.
Benefits to customers: Lower administrative costs resulting from outsourcing Human Resource management to a third party with experience in Mexican laws.

SELECTION OF SUPPLIERS
Portfolio of services: Search, evaluation and selection of suppliers of BPO services, operative and secondary raw materials. Management of suppliers' contracts.
Benefits to suppliers: Evaluation of performance and requests for certifications contribute to competitiveness. Integration into global supply chains via subcontracting.
Benefits to customers: Management of lower-tier suppliers and providers of commodities, operative materials and supporting services.

OUTSOURCING OF OPERATIVE MATERIALS AND BPO SERVICES
Portfolio of services: Administration of prices and costs of purchasing operative products and secondary raw materials. Product substitution. Inventory management of operative and secondary raw materials.
Benefits to suppliers: New contracts and increased sales. Feedback and some support to fulfil the sourcing requirements of multinationals.
Benefits to customers: Cost reductions as a result of outsourcing the purchasing of non-strategic and operative goods, and BPO services, to an experienced third party. Outsourcing allows multinationals to offload liabilities and focus on the core activities of design and manufacturing.

Source: The authors
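AI's directory logic, as reported in Case 2 (continuous evaluation on quality, reliability and delivery, plus mandatory security certifications for some categories such as transportation), can be summarized as a screening rule of the following kind. This is a minimal sketch under stated assumptions: the score fields, the 0.7 threshold and the sample suppliers are hypothetical, and only the certification names come from the case.

from dataclasses import dataclass, field

REQUIRED_CERTS = {"transportation": {"C-TPAT", "BASC", "NEEC"}}

@dataclass
class Supplier:
    name: str
    category: str
    quality: float      # 0-1 scores from continuous evaluation;
    reliability: float  # per the case, customer feedback feeds these
    delivery: float
    certs: set = field(default_factory=set)

def eligible(s: Supplier, threshold: float = 0.7) -> bool:
    """Directory screening: performance above a threshold and, for
    regulated categories, all mandatory certifications held."""
    performs = min(s.quality, s.reliability, s.delivery) >= threshold
    certified = REQUIRED_CERTS.get(s.category, set()) <= s.certs
    return performs and certified

# Poorly performing or uncertified suppliers are substituted rather than
# developed, per the case description of AI's governance.
pool = [
    Supplier("Carrier A", "transportation", 0.9, 0.8, 0.85, {"C-TPAT", "BASC", "NEEC"}),
    Supplier("Carrier B", "transportation", 0.9, 0.9, 0.9, {"BASC"}),
    Supplier("Uniforms Co", "indirect materials", 0.75, 0.8, 0.7),
]
print([s.name for s in pool if eligible(s)])  # -> ['Carrier A', 'Uniforms Co']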



In terms of incentive alignment, the first point is that the sourcing service agent does not compete with the GSC's leading firms or with the suppliers, because the trader is a service company without brands or manufacturing facilities. With respect to local suppliers, the incentives of those working with a trader such as Aztex are: 1) to remain specialized while the trader links their production processes with those of other manufacturers (for example, textiles are delivered directly to assemblers); and 2) the indirect acquisition of new customers without the need to establish direct contact with them. Finally, the coordination of production is mainly accomplished through direct supervision and support during the design phase. This technical support has resulted in a major benefit for buyers in terms of a significant reduction in the duration of the design transfer and order cycle phases: "One of our major accomplishments is the reduction of the lead time by shrinking the time from design to manufacturing […] we can deliver orders in 8 weeks, while garments produced in East Asia take 3 months". AI does not participate in any critical production activity, nor does it supply critical goods; its role is to facilitate the establishment of a vertical SC in Mexico. It is the foreign manufacturing firm that dominates production and establishes and governs the dyadic relationships with the main suppliers through ownership or the creation of strategic links. Given the structure of the GSC in which AI operates, a product-driven GSC, in-house production or internal supply management of strategic purchases is preferred because of the required tighter control over supplier performance [32]. Logistics synchronization and the management of information are performed by AI when delivering commodities, but these activities do not involve high complexity because suppliers are local and solely provide finished goods. With respect to incentive alignment, the main motivation of both AI and the suppliers in its network is to obtain contracts from the multinationals. The guarantee of a purchasing contract from AI provides the supplier with "the security of financing any investment required for fulfilling the contract" and results in an increased commitment to the third party. However, AI may easily substitute suppliers because of the type of purchases involved, and the long-term relationships with the most qualified suppliers help to lower transaction costs. Therefore, the contribution that the two third parties make to the integration of the supply chain is less relevant in comparison with the coordination structure identified as full outsourcing. This happens because Aztex only participates in the segment of the chain related to local production, and AI's role is to simplify the establishment of a vertical chain in Mexico.

Proposition 2. When an external firm, such as a third party with low power influence over other organizations, assumes the responsibility of managing multiple suppliers, it needs to develop a governance structure mainly based on the supervision of performance (output control). Behavioural and social controls are used only if the relation represents high benefits for the sourcing agent.

For Aztex, relations with suppliers are characterized as symmetric and interdependent. The firm's executives consider that close and cooperative relationships with suppliers are critical to generate trust, commitment and loyalty:

"We value our suppliers […] close personal relationships are important to obtain suppliers' collaboration and to make suggestions to them". AI also takes full responsibility for supplier performance; however, instead of supporting disadvantaged suppliers, AI substitutes them. This is possible due to the competitiveness of the market of suppliers of commodities, BPO services, and supplementary materials and services. Therefore, relations are mostly of the no-interdependence type (except for unique or highly qualified suppliers), but without competition. The combination of output and behavioural control utilized by the two third parties reduces the risk of poor supplier performance and is a substitute for the mandatory control that characterizes vertically integrated chains. Because suppliers are legally independent of the sourcing agents, the third parties do not have the authority to modify their internal processes; therefore, they use "softer" governance mechanisms such as qualification, process monitoring and the substitution of suppliers. The actualization of supplier directories and the evaluation of each supplier's performance are critical to the supplier management systems of the third parties. Collaboration, knowledge interchanges and shared benefits are governance mechanisms used solely with outstanding suppliers; this conclusion is discussed in more detail after our last theoretical proposition.

Proposition 3. The international sourcing facilitated by a third party contributes to the integration of local suppliers into global supply chains because of the promotion and endorsement of the sourcing agent. However, the contribution of the sourcing agent to the integration of local suppliers into GSC is moderated by the supplier's profile.

The third parties analysed contribute to promoting Mexico as a competitive sourcing region either by supporting the establishment of international manufacturers in Mexico or by sourcing complete products to leading firms in the clothing and apparel industry. Because of the outsourcing of strategic and non-strategic purchases, global firms can focus on their core capabilities, reduce the risks of international sourcing and eliminate the transaction costs of managing a local supplier base. This clearly describes the value that the sourcing agents represent to the customer. With respect to suppliers, the benefits are less evident. Additional analysis of the interviews and documents revealed that the sourcing agents need to compete with other "customers" to obtain outstanding suppliers, which are in a position to negotiate their participation in a global supply chain directly. In contrast, less powerful suppliers have less chance of being directly connected with the GSCs' leading firms, as evidenced by the following statement made by AI's CEO: "There are good suppliers of commodities and [supporting] services, we know who they are and we are ready to subcontract with them after the approval [if necessary] of the customer, who relies on our recommendations". Additionally, the supervision of production and services performed by the sourcing agent is a potential benefit to suppliers because it provides access to information regarding the production qualifiers set by international buyers, as well as rules to improve supplier performance and fulfil the specifications of leading customers.



The perception of a symbiotic relation with suppliers, expressed by one of the third parties, is mainly explained by the dependence of this sourcing agent on prime suppliers [33]. Within the segment of the supply chain coordinated by Aztex, different types of relationships coexist. Some are long-term and collaborative, but others, even when durable, are transactional, sporadic and driven by economic and short-term benefits. Given the weak power influence of the third party over the supplier, the latter firm is in a position to decide the degree of cooperation, commitment, trust and dependence in its relation with the third party. This result is in accordance with Gadde and Snehota [28], who state that a buying firm, in this case the sourcing agent, may be highly involved with only a limited number of suppliers because "heavy involvement with a supplier is not always feasible […] [because] the supplier may lack the necessary motivation and interest".

6. Conclusions

There are third parties in Mexico that provide sourcing and logistics services that facilitate the supply of final products and the production activities of multinational firms [11]. For the textile and clothing sector, characterized by buyer-driven chains, international buyers sourcing from Mexico demand the full-package production that many national suppliers, particularly small ones, cannot offer. Therefore, third parties such as Aztex Trading support international sourcing by coordinating all the isolated production activities required to deliver a complete product line to these global customers. In contrast, given the preferred hierarchical structure of product-driven chains, it is the sourcing agent's value-added services that enable the necessary conditions to start foreign manufacturing operations. These services include building facilities, staffing the plants and managing suppliers of operative and secondary raw materials and BPO services. The main findings of this research are summarized in three propositions aimed at understanding how sourcing agents govern their relations with domestic suppliers, what value they add for international buyers and local suppliers, and how they contribute to the competitiveness of Mexico as a global sourcing region. The analysis of two contrasting cases (product-driven and buyer-driven GSC) concluded that the different modes of coordination (logistics synchronization, management of information, incentive alignment and production organization) required to boost supply chain integration are deployed by the Mexican third parties only to the extent needed to satisfy current demands. Currently, these demands are not very high because of the limited participation of both agents in the GSC. Hence, there is an opportunity to increase the value of the outsourcing services, in particular with respect to logistics synchronization and the management of information, by participating in additional supply chain processes, for example distribution [12]. The governance structure of the supplier network coordinated by third parties also varies depending on the type of chain and purchase.

For the clothing supply chain, the sourcing agent seeks close and long-term relationships with its suppliers. However, only those with restricted access to the GSCs because of their limited capabilities are motivated to establish symmetric interdependent relationships with the third party. Large suppliers able to offer full-package production obtain economic and short-term benefits from their relationships with the sourcing agents. Because the sourcing agents studied mainly supply non-strategic products, output control is the main internal governance mechanism, whereas the competitiveness of the supplier market becomes the principal external mechanism. Therefore, third party-supplier relationships are characterized as non-interdependent but fairly symmetrical. Output control is exerted through identification, auditing, selection and substitution, whereas behavioural control is exerted via production monitoring and personal contact. Both forms of control require continuous information regarding customers' needs and suppliers' skills. Thus, effective sourcing agents are characterized as "knowledge" firms with extensive experience regarding the capabilities of the national supplier base within a particular sector. Finally, the contribution of third parties to the enhancement of Mexico as a sourcing region was mainly assessed in this work through evidence of the value-added activities performed on behalf of the customers. International firms benefit from the experience and knowledge that third parties have accumulated regarding local suppliers; this substantially decreases the cost of supplier management and the risk of international sourcing or foreign production. The portfolio of services of the sourcing agent varies according to the demands of the international customers, and it is more advanced when complete product lines are required. However, the Mexican agents analysed do not qualify as "supply chain integrators" [11] because they administer the processes in the middle portion of the chain (the labour-intensive production activities) or provide non-core supporting services. Front-end (sales and design) and back-end (international logistics) activities, as well as relation management with strategic suppliers, are processes that are still controlled by the GSC's leading firms. This implies that the sourcing method preferred by global buyers in Mexico is house branding: design and distribution are controlled by the leading firm, and selective manufacturing processes are subcontracted to local suppliers via third parties. The contribution that sourcing agents make in terms of foreign investment or employment creation was not measured because this would require a quantitative approach and information from a representative sample of sourcing agents. Nevertheless, evidence of the sourcing agents' contribution to supplier development via selection and control was presented, mainly for Aztex. The major weakness of this work is the limited generalizability of its results; additional cases, including information provided by suppliers and customers, are required to validate the theoretical propositions. Also, a quantitative approach is required to specifically measure the contribution of third parties to the development of local suppliers and to the competitiveness of Mexico as an international sourcing region.


Acknowledgments

As part of the National Research Network "Sistemas de Transporte y Logística", the authors acknowledge the support provided by the National Council of Science and Technology of Mexico (CONACYT) through the research program "Redes Temáticas de Investigación". We also acknowledge the commitment and effort of the Mexican Logistics and Supply Chain Association (AML) and the Mexican Institute of Transportation (IMT) in providing an internationally recognized collaboration platform, the International Congress on Logistics and Supply Chain (CiLOG).

References

[1] Popp, A., Swamped in information but starved of data: Information and intermediaries in clothing supply chains. Supply Chain Management, 5 (3), pp. 151-160, 2000. DOI: 10.1108/13598540010338910
[2] Feenstra, R.C. and Hanson, G.H., Intermediaries in entrepôt trade: Hong Kong re-exports of Chinese goods. Journal of Economics & Management Strategy, 13 (1), pp. 3-35, 2004. DOI: 10.1111/j.1430-9134.2004.00002.x
[3] Lam, J.K.C. and Postle, R., Textile and apparel supply chain management in Hong Kong. International Journal of Clothing Science and Technology, 18 (4), pp. 265-277, 2006. DOI: 10.1108/09556220610668491
[4] Masson, R., Iosif, L., MacKerron, G. and Fernie, J., Managing complexity in agile global fashion industry supply chains. The International Journal of Logistics Management, 18 (2), pp. 238-254, 2007. DOI: 10.1108/09574090710816959
[5] Cao, N., Zhang, Z., To, K.M. and Ng, K.P., How are supply chains coordinated? Journal of Fashion Marketing and Management, 12 (3), pp. 384-397, 2008. DOI: 10.1108/13612020810889326
[6] Gereffi, G., Shifting governance structures in global commodity chains, with special reference to the Internet. American Behavioral Scientist, 44 (10), pp. 1616-1637, 2001. DOI: 10.1177/00027640121958087
[7] Bair, J. and Gereffi, G., NAFTA and the apparel commodity chain, in Gereffi, G., Spener, D. and Bair, J. (Eds.), Free trade and uneven development. Philadelphia: Temple University Press, pp. 23-50, 2002.
[8] Trent, R.J. and Monczka, R.M., Achieving excellence in global sourcing. Sloan Management Review, 47 (1), pp. 24-32, 2005.
[9] Cho, J. and Kang, J., Benefits and challenges of global sourcing: Perceptions of US apparel retail firms. International Marketing Review, 18 (5), pp. 542-560, 2001. DOI: 10.1108/EUM0000000006045
[10] Leguízamo-Díaz, T.P. and Moreno-Mantilla, C.E., Effect of competitive priorities on the greening of the supply chain with TQM as a mediator. DYNA, 81 (187), pp. 240-248, 2014. DOI: 10.15446/dyna.v81n187.46106
[11] Bitran, G.R., Gurumurthi, S. and Sam, S.L., Third-party coordination in supply chain governance. MIT Sloan Management Review, 48 (3), pp. 30-37, 2007.
[12] Correa-Espinal, A. and Gómez-Montoya, R.A., Information technologies in supply chain management. DYNA, 76 (157), pp. 37-48, 2008.
[13] Chopra, S. y Meindl, P., Administración de la cadena de suministro. Estrategia, planeación y operación. México: Pearson & Prentice-Hall, 2008.
[14] Monterrey-Meana, M., New trends in process outsourcing. A study of Spanish and European cases. DYNA, 80 (177), pp. 4-12, 2013.
[15] Fabbe-Costes, N., Jahre, M. and Roussat, C., Supply chain integration: The role of logistics service providers. International Journal of Productivity and Performance Management, 58 (1), pp. 71-91, 2009. DOI: 10.1108/17410400910921092

[16] Fung, P.K.O., Chen, I.S.N. and Yip, L.S.C., Relationships and performance of trade intermediaries: An exploratory study. European Journal of Marketing, 41 (1/2), pp. 159-180, 2007. DOI: 10.1108/03090560710718166
[17] Cooper, M.C., Lambert, D.M. and Pagh, J.D., Supply chain management: More than a new name for logistics. The International Journal of Logistics Management, 8 (1), pp. 10-18, 1997. DOI: 10.1108/09574099710805556
[18] Chen, H., Daugherty, P.J. and Roath, A.S., Defining and operationalizing supply chain process integration. Journal of Business Logistics, 30 (1), pp. 63-78, 2009. DOI: 10.1002/j.2158-1592.2009.tb00099.x
[19] Mouritsen, J., Skjott-Larsen, T. and Kotzab, H., Exploring the contours of supply chain management. Journal of Manufacturing Technology Management, 14 (8), pp. 686-696, 2003. DOI: 10.1108/09576060310503483
[20] Rodrigues, A.M., Stank, T.P. and Lynch, D.F., Linking strategy, structure, process and performance in integrated logistics. Journal of Business Logistics, 25 (2), pp. 65-89, 2004. DOI: 10.1002/j.2158-1592.2004.tb00182.x
[21] Richey, R.G. Jr., Roath, A.S., Whipple, J.M. and Fawcett, S.E., Exploring a governance theory of supply chain management: Barriers and facilitators to integration. Journal of Business Logistics, 31 (1), pp. 237-253, 2010. DOI: 10.1002/j.2158-1592.2010.tb00137.x
[22] Mihm, B., Fast fashion in a flat world: Global sourcing strategies. International Business and Economics Research Journal, 9 (6), pp. 55-63, 2010.
[23] Agarwal, A. and Shankar, R., On-line trust building in e-enabled supply chain. Supply Chain Management: An International Journal, 8 (4), pp. 324-334, 2003. DOI: 10.1108/13598540310490080
[24] Knemeyer, A.M. and Murphy, P.R., Evaluating the performance of third-party logistics arrangements: A relationship marketing perspective. Journal of Supply Chain Management, 40 (1), pp. 35-51, 2004. DOI: 10.1111/j.1745-493X.2004.tb00254.x
[25] Jambulingam, T., Kathuria, R. and Nevin, J.R., Fairness-trust-loyalty relationship under varying conditions of supplier-buyer interdependence. Journal of Marketing Theory and Practice, 19 (1), pp. 39-56, 2011.
[26] Hernández-Espallardo, M., Rodríguez-Orejuela, A. and Sánchez-Pérez, M., Inter-organizational governance, learning and performance in supply chains. Supply Chain Management: An International Journal, 15 (2), pp. 101-114, 2010. DOI: 10.1108/13598541011028714
[27] Wathne, K.H. and Heide, J.B., Relationship governance in a supply chain network. Journal of Marketing, 68 (January), pp. 73-89, 2004.
[28] Gadde, L.E. and Snehota, I., Making the most of supplier relationships. Industrial Marketing Management, 29, pp. 305-316, 2000.
[29] Yin, R.K., Case study research. Design and methods, 5th ed. London: Sage Publications, 2014.
[30] Balabanis, G.I., Factors affecting export intermediaries' service offerings: The British example. Journal of International Business Studies, 31 (4), pp. 83-99, 2000.
[31] Avelar-Sosa, L., García-Alcaraz, J.L., Cedillo-Campos, M.G. and Adarme-Jaimes, W., Effects of regional infrastructure and offered services in the supply chains performance: Case Ciudad Juarez. DYNA, 81 (186), pp. 208-217, 2014.
[32] Maltz, A. and Ellram, L., Outsourcing supply management. The Journal of Supply Chain Management, 35 (2), pp. 4-17, 1999. DOI: 10.1111/j.1745-493X.1999.tb00232.x
[33] Camarero-Izquierdo, C. and Gutiérrez-Cillán, J., The interaction of dependence and trust in long-term industrial relationships. European Journal of Marketing, 38 (8), pp. 974-994, 2004. DOI: 10.1108/03090560410539122

P.E. Arroyo-López, is a Professor at the Department of Industrial Engineering at the Tecnológico de Monterrey, campus Toluca, Mexico. She holds a PhD degree in Business Administration from the Tecnológico de Monterrey, Mexico. She is a member of the Mexico National Research System and has published articles on diverse topics such as outsourcing, third-party logistics, and social marketing for health care and environmental protection, in international journals such as Journal of Supply Chain Management: An International Journal, International Journal of Operations & Production Management, Journal of Management and Sustainability and Qualitative Marketing Research, and in Mexican journals such as the Journal of Accounting and Business Administration UNAM.

J.A. Ramos-Rangel, is a PhD student in the Industrial Engineering program at the Tecnológico de Monterrey, campus Toluca, Mexico. He holds an MSc. degree in Decision Sciences from the State of Mexico University. He has extensive professional logistics experience acquired by working for several years in the automotive industry. He is currently working on his PhD thesis, which is related to strategic purchasing, supplier evaluation and supplier development.



Modeling of CO2 vapor-liquid equilibrium in Colombian heavy oil using SARA analysis

Oscar Ramirez a, Carolina Betancur b, Bibian Hoyos c & Carlos Naranjo d

a Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. ojramirezo@unal.edu.co
b Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. cbetancurg@unal.edu.co
c Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. bahoyos@unal.edu.co
d Instituto Colombiano del Petróleo, Ecopetrol S.A., Bogotá, Colombia. carlosed.naranjo@ecopetrol.com.co

Received: February 26th, 2014. Received in revised form: October 30th, 2014. Accepted: November 15th, 2014

Abstract The solubility of CO2 in Colombian heavy oil was calculated using the Peng-Robinson cubic equation of state and the Lee-Kesler correlations. The crude was represented as a mixture of pseudo-components and for each one of them, the thermodynamic and critical properties were estimated. The results obtained in representing the oil with four, five and six pseudo-components show that all these representations produce similar results and therefore the use of four pseudo-components is sufficient and has a lower computational cost. Excellent results were obtained by comparing the experimental and calculated data. For this system, it is enough to have a complete characterization of the SARA analysis and to use four pseudo-components to adequately model the vapor-liquid equilibrium of CO2 -heavy oil. Keywords: Equation of state; pseudo-component; SARA analysis; crude; solvent; phase equilibrium.

Modelado del equilibrio líquido-vapor CO2-crudo pesado colombiano con análisis SARA Resumen La solubilidad del CO2 en un crudo pesado colombiano fue calculada usando la ecuación cúbica de estado de Peng-Robinson y la correlación de Lee-Kesler. El crudo fue representado como una mezcla de pseudo-componentes y para cada uno de ellos se calcularon las propiedades termodinámicas y críticas. Los resultados obtenidos en la representación del crudo con cuatro, cinco y seis pseudocomponentes muestran que todas las representaciones producen resultados similares y por lo tanto el uso de cuatro pseudo-componentes es suficiente y tiene un costo computacional más bajo. Fueron obtenidos excelentes resultados al comparar los datos experimentales y calculados. Para este sistema es suficiente tener una completa caracterización del análisis SARA y usar cuatro pseudo-componentes para modelar adecuadamente el equilibrio líquido-vapor de CO2-crudo pesado. Palabras clave: Ecuación de estado; pseudo-componente; análisis SARA; crudo; solvente; equilibrio de fases.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 103-108. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online. DOI: http://dx.doi.org/10.15446/dyna.v82n191.42308

1. Introduction

Oil is a complex mixture of hydrocarbons produced from sedimentary rocks in gaseous (natural gas), liquid (crude oil), semisolid (bitumen) or solid (wax or asphaltite) form [1]. Most of the world's oil resources correspond to viscous and heavy hydrocarbons, which are difficult and expensive to produce and refine. Heavy oil, extra-heavy oil and bitumen make up 70% of the world's total oil resources, estimated at between 9 and 13 trillion barrels [2].

A significant number of laboratory studies have been undertaken to identify technologies that can be applied as a solution to the phenomena that negatively affect the performance of implemented secondary recovery processes. These methods focus mainly on increasing oil mobility and thus increasing production; the goal is to reduce the oil's viscosity so that it can flow easily. Enhanced Oil Recovery (EOR) can be achieved through technologies such as gas




injection, chemical injection, microbial injection or thermal recovery [3]. The implementation of EOR methods plays a key role as a technology to increase the recovery factor of Colombian fields and to complement other technologies designed to increase well productivity [4]. The main advantage of gas injection is its microscopic sweep. The gases most used for EOR are methane, butane, propane, nitrogen and carbon dioxide. Many of these gases have the advantage of being produced in situ, i.e., large volumes are available in the wellbore for injection. The choice of a particular gas is strongly linked to its availability and to the incremental recovery it generates. Thus, CO2 appears as one of the most promising gases for use in EOR.

It is therefore important to have a thermodynamic model to adequately characterize the phase behavior of CO2 - heavy oil mixtures. Such models can provide elements of judgment for decision-making regarding the production, transportation, refining and upgrading of oil. The experimental data reported in the literature to characterize the phase equilibrium of CO2 - heavy oil systems are generally scarce, which makes it difficult to establish appropriate thermodynamic models. Therefore, tools that allow phase equilibrium calculations for such systems with little experimental data are required.

The aim of this work is to develop a model for the calculation of the vapor-liquid equilibrium of CO2 - Colombian heavy oil (called here UnalMed), using the representation of pseudo-components and estimating the critical and thermodynamic properties of those components. Additionally, we seek to establish the right number of pseudo-components to be used for a good representation of the system, with little experimental information and at low computational cost. This information may be useful for developing appropriate thermodynamic models for the Colombian oil industry. To our knowledge, data on the thermodynamic properties of pseudo-components used to represent Colombian heavy oils have not been previously published.

2. Model

Based on the nature of the oil mixture, there are several ways to express its composition. Some of the most important types of composition [1,5,6] are the PONA analysis (Paraffins, Olefins, Naphthenes and Aromatics), PNA (Paraffins, Naphthenes and Aromatics), PIONA (Paraffins, Isoparaffins, Olefins, Naphthenes and Aromatics), elemental analysis (C, H, S, N, O) and the SARA analysis (Saturates, Aromatics, Resins and Asphaltenes) [7]. PNA and PIONA analyses are useful for petroleum products in a range of low boiling temperatures, such as distillates of atmospheric distillation units for crude. However, the SARA analysis is useful for heavy oil fractions, residues and fossil fuels (e.g., coal liquids) that have a high content of aromatics, resins and asphaltenes.

The above analyses are ways to represent oil as a mixture of pseudo-components (several components represented in a single fraction), given the complexity of its composition. With the SARA analysis, for example, the oil can be represented by four pseudo-components (saturates, aromatics, resins and asphaltenes), each with its own properties such as boiling point, specific gravity, density and molecular weight, among others. In addition, each pseudo-component can be modeled as a pure substance.

The critical properties, molecular weight and acentric factor of each pseudo-component can be calculated with correlations reported in the literature for heavy oil fractions. In this work, the critical temperature, critical pressure and acentric factor of each pseudo-component were calculated using the Lee-Kesler correlations [8], as shown in eqs. (1)-(6); the molecular weight was calculated with the Daubert-Riazi correlation (eq. 7) and the critical volume with the Hall-Yarborough correlation (eq. 8). These correlations were selected considering the good results that have been obtained with respect to experimental data in their applications [9,10]. The input parameters of these correlations are the boiling point (Tb) and the specific gravity (SG).

Tc = 189.8 + 450.6 SG + (0.4244 + 0.1174 SG) Tb + (0.1441 − 1.0069 SG)·10^5/Tb   (1)

ln Pc = 5.689 − 0.0566/SG − (0.43639 + 4.1216/SG + 0.21343/SG^2)·10^-3 Tb + (0.47579 + 1.182/SG + 0.15302/SG^2)·10^-6 Tb^2 − (2.4505 + 9.9099/SG^2)·10^-10 Tb^3   (2)

For Tbr > 0.8,

ω = −7.904 + 0.1352 Kw − 0.007465 Kw^2 + 8.359 Tbr + (1.408 − 0.01063 Kw)/Tbr   (3)

and for Tbr < 0.8, Kesler and Lee [11] proposed the following correlation:

ω = [−ln(Pc/1.01325) − 5.92714 + 6.09648/Tbr + 1.28862 ln(Tbr) − 0.169347 Tbr^6] / [15.2518 − 15.6875/Tbr − 13.4721 ln(Tbr) + 0.43577 Tbr^6]   (4)

where Tc and Tb are in K, Pc is in bar, and Tbr is the reduced boiling point defined as

Tbr = Tb/Tc   (5)

Kw is the Watson characterization factor [1], given by

Kw = (1.8 Tb)^(1/3)/SG   (6)

M = 42.965 exp(2.097·10^-4 Tb − 7.78712 SG + 2.08476·10^-3 Tb·SG) Tb^1.26007 SG^4.98308   (7)

where Tb is in Kelvin and the molecular weight, M, is in g/mol.

Vc = 1.56 M^1.15/SG^0.7935   (8)

where the critical volume, Vc, is in cm^3/mol.

In 1976, Peng and Robinson proposed a modification to the Soave-Redlich-Kwong equation of state (EOS) that has been well received internationally [13]. This equation of state is well accepted in technical publications, research and simulators to predict the behavior of natural hydrocarbon systems. PVT variables are related to predict the thermodynamic behavior of a system [12-14]:

P = RT/(v − b) − aα/[v(v + b) + b(v − b)]   (9)

where P is the system pressure, T is the system temperature, aα is the attraction parameter, b is the repulsion parameter, v is the molar volume and R is the ideal gas constant. In terms of the compressibility factor,

Z^3 − (1 − B)Z^2 + (A − 3B^2 − 2B)Z − (AB − B^2 − B^3) = 0   (10)

where

A = aαP/(RT)^2   (11)

B = bP/(RT)   (12)

The constants a and b are related to the critical pressure, critical temperature and acentric factor by:

a = 0.45724 R^2 Tc^2/Pc   (13)

b = 0.07780 R Tc/Pc   (14)

α = [1 + κ(1 − Tr^0.5)]^2   (15)

Tr = T/Tc   (16)

κ = 0.37464 + 1.54226 ω − 0.26992 ω^2   (17)

For mixtures, the parameters of the liquid and vapor phases are obtained with the classical mixing rules:

(aα)m = Σi Σj xi xj (aα)ij   (18)

bm = Σi xi bi   (19)

(aα)m = Σi Σj yi yj (aα)ij   (20)

bm = Σi yi bi   (21)

(aα)ij = (1 − kij)[(aα)i (aα)j]^0.5   (22)

where kij is the interaction coefficient between components i and j. These parameters have no theoretical basis but are empirical, and their role is to help overcome the deficiencies of the theorem of corresponding states [15]. With the mixture parameters, the dimensionless constants become

A = (aα)m P/(RT)^2   (23)

B = bm P/(RT)   (24)

The fugacity coefficient can be calculated from the EOS (eq. 9) [13]. In this manner, the expression for the fugacity coefficient of a pure component is:

ln φ = Z − 1 − ln(Z − B) − [A/(2√2 B)] ln[(Z + 2.414B)/(Z − 0.414B)]   (25)

The functional form of α was determined by using the values of vapor pressure in the literature and the Newton method to find the values of α to be used in eqs. (10) and (25), in a way that the equilibrium condition

f^L = f^V   (26)

is satisfied along the curve of vapor pressure [13], where f^L and f^V are the fugacities in the liquid and vapor phases, respectively. The fugacity coefficient of a component k in a mixture can be calculated by:

ln φk = (bk/bm)(Z − 1) − ln(Z − B) − [A/(2√2 B)][2 Σi xi (aα)ik/(aα)m − bk/bm] ln[(Z + 2.414B)/(Z − 0.414B)]   (27)

Calculating the fugacity coefficients for each component in both the liquid and the gas phases with eq. (27), the vapor-liquid distribution ratio can be calculated by eq. (28):

Ki = yi/xi = φi^L/φi^V   (28)

The relationships described by eq. (28) (one for each pseudo-component) provide the main equations for VLE calculations using the EOS. For the development of this work, the heavy oil (UnalMed) was modeled with three different groupings (four, five and six pseudo-components), and the CO2 solubility in UnalMed oil was calculated in order to determine the number of pseudo-components that should be used to obtain an acceptable thermodynamic model.
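To make the property-estimation step concrete, the short script below evaluates eqs. (1)-(8) for a single pseudo-component. It is a minimal sketch prepared for this text rather than the authors' original code; Python is assumed and the function name pseudo_component_properties is hypothetical.

import math

def pseudo_component_properties(tb_k, sg):
    """Estimate Tc (K), Pc (MPa), omega, M (g/mol) and Vc (cm3/mol)
    from the normal boiling point Tb (K) and specific gravity SG,
    using eqs. (1)-(8)."""
    # Lee-Kesler critical temperature, eq. (1)
    tc = (189.8 + 450.6 * sg + (0.4244 + 0.1174 * sg) * tb_k
          + (0.1441 - 1.0069 * sg) * 1e5 / tb_k)
    # Lee-Kesler critical pressure in bar, eq. (2)
    ln_pc = (5.689 - 0.0566 / sg
             - (0.43639 + 4.1216 / sg + 0.21343 / sg**2) * 1e-3 * tb_k
             + (0.47579 + 1.182 / sg + 0.15302 / sg**2) * 1e-6 * tb_k**2
             - (2.4505 + 9.9099 / sg**2) * 1e-10 * tb_k**3)
    pc_bar = math.exp(ln_pc)
    # Reduced boiling point and Watson factor, eqs. (5)-(6)
    tbr = tb_k / tc
    kw = (1.8 * tb_k) ** (1.0 / 3.0) / sg
    # Acentric factor, eqs. (3)-(4)
    if tbr > 0.8:
        omega = (-7.904 + 0.1352 * kw - 0.007465 * kw**2
                 + 8.359 * tbr + (1.408 - 0.01063 * kw) / tbr)
    else:
        omega = ((-math.log(pc_bar / 1.01325) - 5.92714 + 6.09648 / tbr
                  + 1.28862 * math.log(tbr) - 0.169347 * tbr**6)
                 / (15.2518 - 15.6875 / tbr - 13.4721 * math.log(tbr)
                    + 0.43577 * tbr**6))
    # Daubert-Riazi molecular weight, eq. (7)
    m = (42.965 * math.exp(2.097e-4 * tb_k - 7.78712 * sg
                           + 2.08476e-3 * tb_k * sg)
         * tb_k**1.26007 * sg**4.98308)
    # Hall-Yarborough critical volume, eq. (8)
    vc = 1.56 * m**1.15 / sg**0.7935
    return {"Tc_K": tc, "Pc_MPa": pc_bar / 10.0, "omega": omega,
            "M_g_mol": m, "Vc_cm3_mol": vc}

print(pseudo_component_properties(350.0 + 273.15, 0.55))

For PSC 1 of Table 1 (Tb = 350 °C, SG = 0.55), this sketch returns Tc ≈ 676.6 K, Pc ≈ 0.177 MPa, ω ≈ 0.996, M ≈ 233 g/mol and Vc ≈ 1324 cm^3/mol, matching the first row of Table 2.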

3. Results

To model the CO2 - UnalMed oil system, the crude was initially characterized with four pseudo-components, using the SARA analysis. The initial parameters for the calculation of the solubility of CO2 in the oil at different conditions, such as the boiling temperature (Tb) and specific gravity (SG), were assumed following the guidelines outlined in the work of Kariznovi et al. [9]:

• The boiling temperature of the distillable fraction (saturates and aromatics) is less than 600 °C.
• The boiling temperature of the non-distillable fraction (resins) is greater than 600 °C.
• The boiling temperature of the asphaltenes is greater than the boiling temperature of the non-distillable fraction.

Table 1 shows the final input parameters for each of the pseudo-components that compose the UnalMed crude. These parameters were calculated iteratively within the developed algorithm. The UnalMed crude characterization shown in Table 1 is described in the following order: the first pseudo-component corresponds to the saturate fraction, aromatics are represented by PSC 2, and the third and fourth pseudo-components correspond to resins and asphaltenes, respectively. The UnalMed oil composition and the CO2 solubility curve derive from experimental measurements performed by the Colombian Petroleum Institute (ICP).

Table 1. Characterization of four pseudo components for UnalMed crude oil.
Pseudo-component | Tb (°C) | SG | % Wt
PSC 1 | 350.00 | 0.5500 | 47.529
PSC 2 | 516.67 | 0.6833 | 36.787
PSC 3 | 673.34 | 0.8167 | 11.415
PSC 4 | 850.00 | 0.9500 | 4.269
Source: The authors.

Table 2 shows the properties of the four pseudo-components calculated using the correlations presented in Section 2. The binary interaction coefficients between CO2 and the UnalMed crude were tuned to give a closer approximation to the experimental data.

Table 2. Physical properties of four pseudo components and CO2 - UnalMed crude interaction.
Pseudo-component | M (g/mol) | Tc (K) | Pc (MPa) | ω | Vc (cm^3/mol) | kij (CO2)
PSC 1 | 233.19 | 676.59 | 0.17655 | 0.99553 | 1324.4 | 0.0374
PSC 2 | 512.58 | 827.39 | 0.17336 | 1.5705 | 2758 | 0.0553
PSC 3 | 931.32 | 978.56 | 0.18003 | 1.8565 | 4757.4 | 0.0685
PSC 4 | 1665.0 | 1147.5 | 0.16186 | 2.0489 | 8230.4 | 0.0812
Source: The authors.

Fig. 1 shows the results obtained by applying the four pseudo-component model. The figure shows the solubility of CO2 in the crude at 80 °F and different pressures; the x-axis corresponds to the saturation pressure and the y-axis to the molar composition of the solvent in terms of the mole fraction of CO2. It can be seen that, at a relatively low operating temperature, increasing the saturation pressure compresses the CO2 further, increasing its solubility in the UnalMed crude.

Figure 1. CO2 solubility in UnalMed crude represented by four pseudo components at 80 °F (experimental vs. model solubility as a function of saturation pressure, PSI). Source: The authors.

For the oil represented by five pseudo-components, the same methodology as for the four pseudo-component representation was used. The additional pseudo-component results from the separation of the non-distillable oil fraction, namely the resins, into asphaltene-free resins and resins containing asphaltenes. With this, the order of the pseudo-components is as follows: the first two pseudo-components represent the UnalMed crude distillable fraction, the third and fourth represent the non-distillable fraction of resins (resins 1 and 2), and the fifth pseudo-component represents the asphaltenes. Table 3 shows the final input parameters of the UnalMed crude represented by five pseudo-components.
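The tuning of the binary interaction coefficients mentioned above can be posed as a small least-squares problem. The fragment below is an illustrative sketch, not the authors' code: model_bubble_p stands for any routine that evaluates the EOS bubble pressure (eqs. 9-28) at fixed temperature for a given set of CO2 interaction coefficients and a given CO2 liquid mole fraction.

import numpy as np
from scipy.optimize import least_squares

def tune_kij(model_bubble_p, p_exp, x_exp, k0):
    """Fit the CO2/pseudo-component interaction coefficients so that
    the bubble pressures computed by model_bubble_p(kij, x_co2) match
    the measured saturation pressures."""
    def residuals(kij):
        return np.array([model_bubble_p(kij, x) - p
                         for p, x in zip(p_exp, x_exp)])
    # k0: initial guess, e.g. 0.05 for every pseudo-component
    return least_squares(residuals, k0, bounds=(0.0, 0.2)).x

Here p_exp and x_exp would hold the experimental saturation pressures and CO2 mole fractions of a solubility curve such as the one in Fig. 1, and the bounds reflect the order of magnitude of the coefficients in Table 2; the names and values are illustrative assumptions.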

Table 3. Characterization of five UnalMed crude pseudo components.
Pseudo-component | Tb (°C) | SG | % Wt
PSC 1 | 350.00 | 0.5500 | 47.529
PSC 2 | 480.00 | 0.6500 | 36.787
PSC 3 | 590.00 | 0.7500 | 7.991
PSC 4 | 670.00 | 0.8500 | 3.425
PSC 5 | 850.00 | 0.9500 | 4.269
Source: The authors.

Table 4 presents the properties of the five pseudo-components calculated by the model, as well as the binary interaction coefficients between CO2 and the UnalMed crude, which were tuned to provide a closer approximation to the experimental data.

Table 4. Physical properties of five pseudo components and CO2 - UnalMed crude interaction.
Pseudo-component | M (g/mol) | Tc (K) | Pc (MPa) | ω | Vc (cm^3/mol) | kij (CO2)
PSC 1 | 233.19 | 676.6 | 0.17655 | 0.9955 | 1324.4 | 0.0388
PSC 2 | 436.02 | 792 | 0.16773 | 1.4728 | 2382.4 | 0.0536
PSC 3 | 689.46 | 899.3 | 0.18344 | 1.7227 | 3602.1 | 0.0641
PSC 4 | 926.19 | 991.7 | 0.23425 | 1.7963 | 4579.7 | 0.0701
PSC 5 | 1665 | 1148 | 0.16186 | 2.0489 | 8230.4 | 0.0843
Source: The authors.

Figure 2. CO2 solubility in UnalMed crude represented by five pseudo components at 80 °F (experimental vs. model solubility as a function of saturation pressure, PSI). Source: The authors.

Fig. 2 shows the results obtained by applying the VLE model of CO2 - UnalMed crude represented with five pseudo-components. The figure shows the solubility of CO2 in the oil at 80 °F and different pressures. The results are practically the same as those shown in Fig. 1, which is an indication that modelling the CO2 - UnalMed crude VLE with four pseudo-components is appropriate.

Now, for the oil represented by six pseudo-components, the same methodology as for the four and five pseudo-component representations was used. In this case, the additional pseudo-component represents a portion of the non-distillable fraction together with a portion of the asphaltene fraction, i.e., resins and asphaltenes; it is called the transition pseudo-component. The pseudo-component order is as follows: the first two represent the distillable fraction, the third and fourth pseudo-components represent the non-distillable fraction (resins 1 and 2), the fifth represents the transition between the resins and asphaltenes, and the sixth pseudo-component corresponds to the asphaltenes. Table 5 shows the final input parameters of the UnalMed crude represented by six pseudo-components, and Table 6 shows the properties calculated for the six pseudo-components as well as the binary interaction coefficients between CO2 and the UnalMed crude.

Table 5. Characterization of six UnalMed crude pseudo components.
Pseudo-component | Tb (°C) | SG | % Wt
PSC 1 | 350.00 | 0.5500 | 38.023
PSC 2 | 450.00 | 0.6300 | 29.430
PSC 3 | 550.00 | 0.7100 | 19.146
PSC 4 | 650.00 | 0.7900 | 6.392
PSC 5 | 750.00 | 0.8700 | 2.740
PSC 6 | 850.00 | 0.9500 | 4.269
Source: The authors.

Figure 3. CO2 solubility in UnalMed crude represented by six pseudo components at 80 °F (experimental vs. model solubility as a function of saturation pressure, PSI). Source: The authors.

Table 6. Physical properties of six pseudo components and CO2 - UnalMed crude interaction.
Pseudo-component | M (g/mol) | Tc (K) | Pc (MPa) | ω | Vc (cm^3/mol) | kij (CO2)
PSC 1 | 233.19 | 676.6 | 0.17655 | 0.9955 | 1324.4 | 0.0365
PSC 2 | 383.52 | 766.3 | 0.17479 | 1.3822 | 2107.2 | 0.0475
PSC 3 | 586.94 | 858.3 | 0.17265 | 1.6488 | 3126.3 | 0.057
PSC 4 | 855.26 | 952.6 | 0.1699 | 1.8339 | 4428.7 | 0.0653
PSC 5 | 1205.9 | 1049 | 0.16636 | 1.962 | 6090.3 | 0.0727
PSC 6 | 1665 | 1148 | 0.16186 | 2.0489 | 8230.4 | 0.0794
Source: The authors.

Fig. 3 shows again the same behavior as in Figs. 1 and 2. In general, the results show a high level of agreement with the UnalMed crude experimental data, which indicates that it is possible to model the CO2 - UnalMed crude VLE with four pseudo-components, which can readily be obtained from the SARA analysis.
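The model curves in Figs. 1-3 correspond to bubble-point calculations with eqs. (9)-(28). As an illustration of how those equations fit together, the sketch below implements the Peng-Robinson fugacity coefficients and a successive-substitution bubble-pressure loop; it is our own minimal sketch under the stated equations (the names pr_phi and bubble_pressure are hypothetical), not the code used by the authors.

import numpy as np

R = 8.314462618  # J/(mol K)

def pr_phi(z, T, P, Tc, Pc, omega, kij):
    """Fugacity coefficients of every component in a phase of
    composition z, for the liquid and vapor roots of eq. (10).
    T in K, P in Pa, Tc in K, Pc in Pa; kij is the (n x n) matrix."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2       # eq. (17)
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2           # eqs. (15)-(16)
    a_alpha = 0.45724 * (R * Tc)**2 / Pc * alpha                 # eqs. (13), (15)
    b = 0.07780 * R * Tc / Pc                                    # eq. (14)
    aij = (1.0 - kij) * np.sqrt(np.outer(a_alpha, a_alpha))      # eq. (22)
    am = z @ aij @ z                                             # eqs. (18), (20)
    bm = z @ b                                                   # eqs. (19), (21)
    A = am * P / (R * T)**2                                      # eq. (23)
    B = bm * P / (R * T)                                         # eq. (24)
    coeffs = [1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)                                     # eq. (10)
    roots = np.sort(roots[np.isreal(roots)].real)
    out = []
    for Z in (roots[0], roots[-1]):  # smallest = liquid, largest = vapor
        lnphi = (b/bm * (Z - 1.0) - np.log(Z - B)
                 - A / (2.0 * np.sqrt(2.0) * B)
                 * (2.0 * (aij @ z) / am - b/bm)
                 * np.log((Z + 2.414*B) / (Z - 0.414*B)))        # eq. (27)
        out.append(np.exp(lnphi))
    return out[0], out[1]

def bubble_pressure(x, T, P0, Tc, Pc, omega, kij, tol=1e-8):
    """Successive substitution on Ki = phiL_i/phiV_i (eq. 28): find the
    pressure P and vapor composition y such that sum(Ki*xi) = 1."""
    x, y, P = np.asarray(x, float), np.asarray(x, float), P0
    for _ in range(200):
        phiL, _ = pr_phi(x, T, P, Tc, Pc, omega, kij)
        _, phiV = pr_phi(y / y.sum(), T, P, Tc, Pc, omega, kij)
        K = phiL / phiV                                          # eq. (28)
        s = float(np.sum(K * x))
        y = K * x
        P *= s   # raise P when sum(K*x) > 1, lower it otherwise
        if abs(s - 1.0) < tol:
            break
    return P, y / y.sum()

Sweeping the CO2 mole fraction in the liquid and recording the bubble pressure generates a model solubility curve of the type plotted above; the kij values (Tables 2, 4 and 6) can then be adjusted, for example by least squares as sketched earlier, until the computed and experimental curves agree.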

4. Conclusions

For the calculation of the CO2 - UnalMed crude VLE, the crude representation was tested with different numbers of pseudo-components in order to compare the model results with the experimental data. All the crude representations showed very similar results; it is therefore sufficient to have a complete characterization of the SARA analysis and to use its four pseudo-components to model the VLE. Having specific data, such as the boiling point and specific gravity of each pseudo-component, is useful for further reducing the uncertainty of the results.

The binary interaction coefficients required in the parameters of the cubic EOS are a key element in the study of the solubility of CO2 in the oil, because the saturation pressure proved to be highly sensitive to their values; they therefore had to be tuned to obtain a better approximation to the experimental data.

We observed that, at a relatively low operating temperature, the solubility of CO2 in the oil is favored when the system is at high pressure. This is an important factor in oil recovery, because it could decrease the oil's viscosity without requiring too much energy to heat the reservoir fluids. The crude representation using four pseudo-components and the model used were suitable for the analysis of the phase behavior of these kinds of systems.

References

[1] Riazi, M.R., Characterization and properties of petroleum fractions, ASTM International, Philadelphia, 2005. DOI: 10.1520/Mnl50_1stEb
[2] Alboudwarej, H., Felix, J., Taylor, S., Badry, R., Bremner, C., Brough, B., Skeates, C., Baker, A., Palmer, D., Pattison, K., Beshry, M., Krawchuk, P., Brown, G., Calvo, R., Cañas-Triana, J.A., Hathcock, R., Koerner, K., Hughes, T., Kundu, D., López de Cárdenas, J. and West, C., La importancia del petróleo pesado, Oilfield Review, 18, pp. 38-59, 2006.
[3] Al-Mjeni, R., Arora, S., Cherukupalli, P., Van Wunnik, J., Edwards, J., Jean-Felber, B., Gurpinar, O., Hirasaki, G.J., Miller, C.A., Jackson, C., Kristensen, M.R., Lim, F. y Ramamoorthy, R., ¿Llegó el momento para la tecnología EOR?, Oilfield Review, 22, pp. 16-35, 2010.
[4] Maya, G., Castro, R., Lobo, A., Ordoñez, A., Sandoval, J., Mercado, D., Trujillo, M., Soto, C. y Pérez, H.H., Estatus de la recuperación mejorada de petróleo en Colombia, 2010.
[5] Ahmed, T., Equations of state and PVT analysis: Applications for improved reservoir modeling, Gulf Publishing Company, United States of America, 2007.
[6] Nji, G.N.-T., Characterization of heavy oils and bitumens, PhD. Thesis, University of Calgary, Calgary, Canada, 2010.
[7] Lache-García, A., Meléndez-Correa, L.V., Orrego, J.A., Mejía-Ospino, E., Pachón, Z. y Cabanzo, R., Predicción del análisis SARA de crudos colombianos por métodos quimiométricos utilizando espectroscopía infrarroja-ATR, Revista Colombiana de Física, 43 (3), pp. 643-647, 2011.
[8] Kesler, M.G. and Lee, B.I., Improve prediction of enthalpy of fractions, Hydrocarbon Processing, 55, pp. 153-158, 1976.
[9] Kariznovi, M., Nourozieh, H. and Abedi, J., Bitumen characterization and pseudocomponents determination for equation of state modeling, Energy & Fuels, 24 (1), pp. 624-633, 2010. DOI: 10.1021/ef900886e
[10] Boozarjomehry, R.B., Abdolahi, F. and Moosavian, M.A., Characterization of basic properties for pure substances and petroleum fractions by neural network. Fluid Phase Equilibria, 231 (2), pp. 188-196, 2005. DOI: 10.1016/j.fluid.2005.02.002
[11] Lee, B.I. and Kesler, M.G., A generalized thermodynamic correlation based on three-parameter corresponding states, AIChE Journal, 21 (3), pp. 510-527, 1975. DOI: 10.1002/aic.690210313
[12] Maldonado-Luis, D.A., Modelación del proceso de separación de gas-crudo en la industria petrolera, Tesis, Istmo University, Tehuantepec, Santo Domingo, 2010.
[13] Peng, D.Y. and Robinson, D.B., A new two-constant equation of state. Ind. Eng. Chem. Fundam., 15 (1), pp. 59-64, 1976. DOI: 10.1021/i160057a011
[14] Smith, J.M., Van Ness, H.C. y Abbott, M.M., Introducción a la Termodinámica en Ingeniería Química, 7th ed., McGraw-Hill, Mexico, 2007.
[15] Reid, R.C., Prausnitz, J.M. and Poling, B.E., The properties of gases and liquids, 4th ed., McGraw-Hill, United States of America, 1987. DOI: 10.1036/0070116822

O.J. Ramirez, completed a BSc. degree in Chemical Engineering in 2013 at the Universidad Nacional de Colombia and worked in Process Safety at Ecopetrol S.A. during his traineeship.

C. Betancur, completed a BSc. degree in Chemical Engineering in 2013 at the Universidad Nacional de Colombia and worked in Process Technology at Ecopetrol S.A. during her traineeship.

B.A. Hoyos, completed a BSc. in 1994 and an MSc. in 2003, both in Chemical Engineering, and a PhD degree in Energy Systems in 2010, all of them at the Universidad Nacional de Colombia, campus Medellín, Colombia. He is currently a full professor in the Departamento de Procesos y Energía, Facultad de Minas, Universidad Nacional de Colombia, campus Medellín, Colombia. His research interests include molecular simulation, modeling of asphaltene aggregation and viscosity, modelling of wettability alteration by surfactants, and specific adsorption.

C.E. Naranjo, completed a BSc. degree in Petroleum Engineering in 1993 at the Universidad Industrial de Santander, Colombia, and an MSc. degree in Hydrocarbon Engineering in 2010. He has over fourteen years of experience as a reservoir engineer in the area of secondary and Enhanced Oil Recovery, with emphasis on water injection and thermal recovery processes. He has filed two (2) invention patent applications in the Republic of Colombia, is coauthor of six (6) declared technology products in Ecopetrol S.A., has co-directed research projects through work-based Master's degrees, and has received several national and international academic and technical recognitions.



R&D best practices, absorptive capacity and project success

Silvia Vicente-Oliva a, Ángel Martínez-Sánchez b & Luis Berges-Muro c

a PhD. Centro Universitario de la Defensa, Zaragoza, España. silviav@unizar.es
b PhD. Dep. de Dirección y Organización de Empresas, Escuela de Ingeniería y Arquitectura, Universidad de Zaragoza, España. anmarzan@unizar.es
c PhD. Dep. de Ingeniería de los Procesos de Fabricación, Escuela de Ingeniería y Arquitectura, Universidad de Zaragoza, España. bergesl@unizar.es

Received: March 14th, 2014. Received in revised form: January 16th, 2015. Accepted: April 07th, 2015

Abstract
Research and Development (R&D) project management styles can differ among companies according to their size and technological intensity. R&D projects are characterized by high technical and economic uncertainty, and their execution entails a learning process, even though their aim is to achieve the planned results. Results from a sample of 71 Spanish firms show that best practices in R&D project management promote innovation in two ways. The degree of use of these practices is positively related to the absorptive capacity of knowledge (CAC) in the organisation and, in turn, CAC affects the success of R&D projects.

Keywords: innovation management; R&D project management; absorptive capacity; R&D project success; organizational learning.

Buenas prácticas en la gestión de proyectos de I+D+i, capacidad de absorción de conocimiento y éxito

Resumen
El modo en que se gestionan los proyectos de investigación, desarrollo e innovación (I+D+i) en las empresas puede ser diferente en función del tamaño o de la intensidad tecnológica con la que operen. Los proyectos de I+D+i se caracterizan por su alta incertidumbre técnica y económica, y su realización constituye un proceso de aprendizaje aunque su objetivo sea alcanzar los resultados previstos en la planificación. Los resultados del estudio de una muestra de 71 empresas españolas evidencian que las buenas prácticas en gestión de proyectos de I+D+i promueven la innovación de dos maneras. Primeramente, el grado de uso de las buenas prácticas está relacionado positivamente con la capacidad de absorción de conocimiento en la organización (CAC) y, a su vez, ésta influye en el éxito de los proyectos.

Palabras clave: gestión de la innovación; dirección de proyectos de I+D; capacidad de absorción; éxito de proyectos de I+D+i; aprendizaje organizacional.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 109-117. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online. DOI: http://dx.doi.org/10.15446/dyna.v82n191.42558

1. Introducción

La innovación en la empresa puede considerarse en ocasiones parte de un proceso cuya unidad de flujo es el proyecto de investigación, desarrollo e innovación. Las siglas I+D+i se emplean a menudo para denominar esta clase de proyectos, en la que pueden incluirse ideas que toman forma en los laboratorios, nuevos productos, procesos e incluso formas para gestionar mejor las organizaciones. El término "innovación" se representa por convención con una "i" minúscula e incluye actividades que no son nuevas objetivamente, sino nuevas actividades que no se han realizado con anterioridad en la empresa. Pero esto no significa que la innovación tenga menos importancia para la gestión que la investigación y el desarrollo, conceptos que se refieren a las actividades que sí son nuevas objetivamente, es decir, de las que no hay evidencia de que se hayan realizado o aplicado con anterioridad dentro o fuera de la empresa.

Un proyecto de I+D+i representa para las organizaciones involucradas un esfuerzo temporal y único, especialmente elevado en algunas circunstancias, es decir, cuanta mayor incertidumbre haya respecto a la investigación o el potencial desarrollo de la invención. Al igual que cualesquiera otros, los proyectos de I+D+i se planifican de acuerdo a unos objetivos y, para ejecutarlos de forma efectiva y eficiente, requieren de la aplicación de conocimientos, habilidades y técnicas específicas.




Es cada día más necesario ofrecer bienes y servicios nuevos y/o mejorados al cliente actual y, para ello, puede ser necesario aprender del exterior [1] y generar capacidad de absorción de conocimiento (en adelante, CAC). Se trata de que la empresa tenga habilidades para detectar conocimiento fuera de la organización, sea capaz de apropiarse del mismo y lo utilice uniéndolo con las capacidades y habilidades de las que disponía previamente. En la gestión cotidiana de un proyecto es habitual centrarse en sus objetivos técnicos, lo que puede hacer que se desperdicien otros beneficios derivados de su realización, debido a que aún no se ha estudiado cómo influye cada una de las fases de la CAC en el éxito de los proyectos.

La elevada incertidumbre técnica y económica, así como la aceptación por parte del cliente, caracteriza a muchos proyectos de I+D+i [2]. Ello provoca que la interrelación entre gestión de proyectos y gestión del conocimiento sea un área prioritaria de atención para gestores y directivos, aun cuando su preocupación fundamental sea cumplir los compromisos adquiridos con el proyecto. Aunque una mejor capacidad de innovación y de absorción de conocimiento en las empresas se relaciona positivamente con la obtención de resultados en proyectos [34], no existen en la literatura trabajos empíricos que desciendan al nivel de detalle de examinar prácticas concretas que sean adecuadas para la gestión, analizadas, además, a nivel de proyecto [5]. La motivación de nuestro estudio surge de esta constatación y del convencimiento de que este análisis permitiría obtener información sobre la realidad de la gestión de proyectos de I+D+i, distinguiendo capacidades que son difícilmente detectables en las encuestas oficiales porque éstas no descienden a tanto nivel de detalle (empresa individual).

El objetivo de este artículo es estudiar qué buenas prácticas en gestión de proyectos influyen en la CAC y en el éxito de los proyectos de I+D+i. En concreto, queremos analizar cuál es su significatividad en cada una de las fases de adquisición, transformación y explotación del conocimiento externo. Se plantea una investigación que permite analizar la relación de las buenas prácticas con cada una de las fases, al mismo tiempo que el éxito de los proyectos de I+D+i se relaciona con la CAC de la organización. La metodología del estudio consta de una encuesta a empresas innovadoras españolas, complementada con entrevistas a directivos responsables de proyectos de I+D+i con el fin de analizar las prácticas que utilizan para gestionarlos; esta información se plasma en el cuestionario, analizado con técnicas estadísticas. Todo ello permite, en primer lugar, un análisis descriptivo de las prácticas más empleadas en la gestión de proyectos de I+D+i de acuerdo con el tamaño y la intensidad tecnológica del sector en el que operan las empresas; y, en segundo lugar, un análisis multivariante que permite extraer conclusiones y proporcionar varias recomendaciones prácticas para la gestión de la innovación y de los proyectos de I+D+i.

2. Planteamiento teórico

Hoy en día, la innovación es un elemento clave de la competitividad empresarial y una necesidad estratégica para mantenerse en el mercado y poder aumentar el crecimiento económico a largo plazo. La innovación puede concebirse como un proceso, una actividad recurrente dentro de las organizaciones, en la que el proyecto de I+D+i es la unidad de flujo, llegando a considerarse la empresa como una cartera de proyectos y programas [6]. El conocimiento puede considerarse el principal input de los procesos de innovación [7] y uno de los recursos más importantes que la empresa puede poseer o controlar [8], debido a que puede generar ventajas competitivas sostenibles. El conocimiento afecta a la estrategia tecnológica de la empresa y a la organización de sus proyectos de I+D+i [9]. En concreto, la CAC puede influenciar la efectividad de las actividades de innovación [10] y actuar como un puente entre compartir conocimiento y la capacidad de innovación de la empresa [11].

2.1. Capacidad de absorción de conocimiento

La literatura evidencia una relación positiva entre CAC y resultados de innovación, medida mediante diferentes aproximaciones [12-13]. No obstante, en el caso de las pequeñas y medianas empresas (pymes) españolas, la falta de CAC aparece como una barrera principal para la externalización de la I+D+i [14]. Se ha evidenciado que si una empresa adquiere conocimiento o tecnología del exterior pero no dispone de la capacidad de absorción necesaria, no obtendrá resultados de innovación o éstos serán mucho menos relevantes [15]. "La investigación y el desarrollo generan, obviamente, innovación" y también CAC, definida como la "habilidad de la empresa para identificar, asimilar y explotar el conocimiento del entorno" [1]. La realización de proyectos de I+D+i constituye un proceso de aprendizaje, ya que se crean conocimientos. Sin embargo, el objetivo de los proyectos es alcanzar los resultados previstos en la planificación, y de ahí la importancia de la CAC que se adquiera y utilice durante dicho proceso.

La mayoría de los estudios consideran la CAC una variable independiente y, salvo contadas excepciones, pocos llegan a analizar sus componentes separadamente [16-17]. Se distingue la habilidad de vigilar el entorno en busca de nuevo conocimiento y tecnología frente a la habilidad de integrarlos en el proceso de innovación de la empresa para utilizarlos [18-20], ya que las empresas no pueden explotar el conocimiento externo que no hayan adquirido previamente [21]. Por ello, en este estudio se ha dividido la CAC en capacidad potencial y capacidad realizada, siguiendo una secuencia (Fig. 1):

• Potencial-CAC1: Reúne las habilidades tanto para identificar como para adquirir el conocimiento generado externamente.


• Realizada-CAC2: La transformación de conocimiento permite el desarrollo y ajuste de rutinas para facilitar la unión entre el conocimiento existente y el adquirido y asimilado.
• Realizada-CAC3: La explotación de conocimiento consiste en la aplicación del conocimiento adquirido para crear conocimiento nuevo y conseguir resultados comercializables a través del aprendizaje [22].

Figura 1. Fases de la CAC. Fuente: Autores, inspirado en Cohen y Levinthal (1989).

2.2. Prácticas en gestión de proyectos de I+D+i

Desde el punto de vista estratégico, existen lazos entre la estrategia tecnológica empresarial, la cartera de proyectos en desarrollo por parte de la empresa y los proyectos de I+D+i que se vayan a acometer en el futuro. Estas relaciones han sido establecidas y estudiadas en la literatura durante las últimas dos décadas y se ha demostrado que las prácticas en gestión de proyectos (MPP) influyen en el éxito del proyecto de I+D+i [23]. En los estudios sobre gestión de proyectos se distingue a menudo entre éxito del proyecto y éxito en su gestión. Una buena gestión puede influir en el éxito pero no es una garantía del mismo, y ni siquiera el cumplimiento de los objetivos puede determinar el éxito o el fracaso de un proyecto a lo largo del tiempo [24]. Las rutinas organizativas, las directrices e instrucciones directas y la propia organización de los equipos constituyen los principales mecanismos que facilitan la integración del conocimiento con el trabajo que se realiza en los proyectos [25].

Para las MPP existen metodologías desarrolladas por diferentes asociaciones profesionales y grupos de expertos, como el Project Management Institute (PMI), que genera desde los años 80 del siglo XX el PMBOK® [26], actualizando versiones conforme avanzan el conocimiento y el consenso entre quienes se dedican a la dirección de proyectos. El Project Management Body of Knowledge (PMBOK) se reconoce como un estándar internacional para la gestión de proyectos. Atendiendo en concreto al entorno de innovación, en el caso de España existe la norma UNE 166.001 de gestión de proyectos de I+D+i, que considera todas las fases desde el inicio del proyecto hasta el cierre, así como la previsión de explotación del producto, servicio o proceso desarrollado, valorizando la invención. Pero existen investigaciones que buscan detectar las mejores prácticas de gestión y, por ello, las más frecuentes en el entorno empresarial como modo de mejora continua. En concreto, para este estudio se han utilizado las MPP contrastadas como más significativas y frecuentes en empresas europeas por Turner et al. (2010) [27]: atención a los requerimientos del cliente; elaboración de un mapa de ruta; planificación detallada de actividades (o Estructura de Descomposición del Proyecto, EDP); uso de metodologías ágiles; establecimiento de una matriz de responsabilidades para cada actuación en el proyecto; elaboración de un plan de recursos; el hecho de "hacer equipo" como un modo de trabajo; la gestión del riesgo; la gestión comercial; el grado de dominio de conocimiento que tiene la empresa en relación con el proyecto que se afrontará; el uso de las Tecnologías de la Información y la Comunicación (TIC) para desarrollar y controlar la evolución del proyecto; y el hecho de disponer de una Oficina de Proyectos que apoye en todas las fases de su desarrollo.

Se establece una relación positiva entre el grado de uso de MPP y la CAC de la empresa. A su vez, disponer de conocimiento previo para afrontar un nuevo proyecto, tener capacidad de transformarlo durante la ejecución y explotarlo permite establecer, en principio, una relación positiva entre cada una de las fases de la capacidad de absorción y el grado de éxito en un proyecto de I+D+i.

2.3. Éxito del proyecto de I+D+i

El grado de cumplimiento de la triple restricción de tiempo, coste y alcance previstos en la planificación de un proyecto se ha considerado en la literatura de gestión de proyectos como un factor determinante de su éxito, aunque se ha comprobado que no es suficiente para asegurar que el objeto del proyecto se ha alcanzado [28]. Para determinar el éxito en un proyecto de I+D+i no hay consenso entre los expertos sobre una medida del mismo. Se ha investigado mucho acerca de los factores críticos que favorecen y dificultan el éxito en estos proyectos, llevados a cabo en diferentes tipos de organización, así como sobre la eficiencia en su gestión [29-30]. Según la UNE 166.001, la medida de éxito de un proyecto de I+D+i radica en los beneficios de su utilización a corto, medio o largo plazo para la organización individual, para un sector económico y para la totalidad de la sociedad [31], por el efecto desbordamiento que tiene la innovación. Esto es, los resultados de un proyecto pueden generar beneficios para quien lo ha llevado a cabo, pero también para su entorno y para la sociedad. De este modo, el éxito de un proyecto combina el éxito en su gestión con el logro de los resultados del proyecto a lo largo del tiempo [32-33], y se establecen vínculos entre los proyectos exitosos y el éxito de la organización [34].

La función de los directores de proyectos ha sido siempre reconocida en el mundo de la gestión, pero desde hace unos años interesa mucho su estudio en el ámbito académico; en concreto, conocer cuál es el papel que desarrollan los directores de proyectos y cómo es su estilo de gestión [35]. Por ello, incorporar su criterio al juzgar cómo se ha conducido el proyecto, qué logros se han alcanzado y cuál ha sido su aprovechamiento total para la organización que lo llevó a cabo es una medida de éxito practicable. En todo caso, para unas condiciones dadas de tiempo, coste y alcance, una organización que se organice y gestione mejor, si dispone de una mayor CAC, sería capaz de favorecer la obtención de mejores resultados de sus proyectos en torno a nuevos productos, procesos o formas de organizarse internamente. Ello puede ser debido a que identificaría, asimilaría y explotaría con usos comerciales el conocimiento en el futuro al mismo tiempo que en el proyecto actual. Así, Johnston y Gibbons (1975) [36] comprobaron cómo, en el desarrollo de una innovación, la información obtenida desde fuera de la empresa contribuye significativamente más a la solución de los problemas técnicos que la interna, especialmente en las industrias más tradicionales con menor gasto en I+D+i. En la medida en que la organización pueda combinar sus capacidades internas de innovación con el conocimiento y la tecnología que localice y asimile del exterior, la probabilidad


de obtener mejores resultados en próximos proyectos puede verse reforzada positivamente. En consecuencia, se propone que la relación esperada entre la CAC de la organización y el éxito de los proyectos de I+D+i habría de ser positiva. La Fig. 2 presenta el modelo de la investigación e ilustra las relaciones a contrastar empíricamente. Si consideramos que las MPP están relacionadas con la CAC, sería relevante para gestores y académicos determinar cuáles de ellas lo hacen significativamente en cada una de sus fases; en el análisis se pueden tomar agrupadamente y junto con las variables de control.

Figura 2. Modelo de la investigación. Fuente: Elaboración propia

3. Metodología

Para averiguar si la CAC está relacionada positivamente con el éxito de los proyectos de I+D+i, se realiza el análisis considerando todas y cada una de sus fases. Además, se toman como variables de control las más habituales en trabajos académicos sobre innovación: el tamaño de la organización (número de empleados, en logaritmo), la experiencia en gestión (años desde su constitución), la dedicación a I+D+i del personal (respecto al total de empleados en plantilla) y la intensidad tecnológica de las actividades del sector al que pertenece (alta, media y baja). Debido a que la unidad de análisis es el proyecto, la triple restricción (tiempo, coste y alcance realizados conforme a lo planificado) también forma parte del grupo de variables de control, considerando, con respecto a la planificación, la consecución de los objetivos del proyecto conforme al presupuesto y en el tiempo establecido.

La muestra del estudio está compuesta por empresas que han realizado proyectos de desarrollo individual y en cooperación financiados por el Centro de Desarrollo Tecnológico Industrial (CDTI) y proyectos en cooperación con la Universidad de Zaragoza en el área del Valle Medio del Ebro (Aragón, Navarra y La Rioja) entre los años 2005 y 2010. El análisis en una región permite contextualizar el estudio en un marco de referencia más estable para las empresas, aunque asumiendo las limitaciones inherentes al mismo.

Se elaboró un cuestionario a partir de una extensa revisión de la literatura y de entrevistas con empresas y expertos. Se recibieron 71 respuestas telemáticas entre octubre de 2011 y junio de 2012, y dos se invalidaron por falta de datos suficientes. La tasa de respuestas válidas fue del 38,98%, con un error de +/- 9,40%. No se observaron diferencias significativas en las características de las empresas que contestaron más temprano la encuesta respecto a las que lo hicieron más tarde, mediante la prueba de la t de Student (porcentaje de personal dedicado a I+D+i, esfuerzo en I+D+i, margen de beneficios y beneficios netos sobre ventas). Los directores de proyectos consultados tienen experiencia y cualificación aunque no tengan certificación profesional: un 64% ha dirigido más de cinco proyectos, su experiencia media en el puesto supera los once años y el 94% tiene una titulación universitaria. Tan solo un 5,79% estaba acreditado por alguna asociación profesional (excluidos los colegios de ingenieros). El 86,96% de las empresas de la muestra son pequeñas y medianas empresas (PYMES), tienen una edad media de 20,9 años y un 65,2% realiza innovación continua.

El análisis a nivel de proyecto, realizado una vez finalizado éste, presenta ventajas ya que aporta una cierta perspectiva para poder valorar si el proyecto ha tenido éxito al cabo de un tiempo. En este caso se trata de proyectos que terminaron, como muy tarde, en el año 2010. Para medir el grado de éxito del último proyecto de I+D+i, se solicitó una valoración subjetiva del director de proyecto [37] mediante una escala Likert de siete puntos, en la que valoraba entre 1 y 7 puntos su grado de acuerdo con que el último proyecto de I+D+i que había dirigido se había realizado conforme a lo planificado. De este modo se amplía la visión de éxito del proyecto más allá de la triple restricción (especificaciones alcanzadas con el presupuesto y tiempo planificados), que se toma como una variable de control en la que se han agrupado los tres aspectos de la valoración. Esto permite al director del proyecto distinguir aquellos desarrollos que ha liderado y que son exitosos de aquellos que no han sido satisfactorios; es decir, distinguir proyectos de I+D+i que han cumplido los objetivos previstos en su planificación de aquellos otros que, pese a tener desviaciones o dificultades varias, han proporcionado mediante su ejecución resultados intermedios o finales (bienes, servicios, procesos y métodos de organización nuevos o significativamente mejorados) que se consideran realmente un éxito para la empresa.

La CAC se midió utilizando el constructo validado por Flatten et al. (2011) [38], que permite a los encuestados seleccionar en una escala Likert de siete puntos su grado de acuerdo con las afirmaciones presentadas en relación a cómo su empresa identifica, asimila, transforma y explota conocimiento. Debido a que la información se recoge de una única fuente, podría existir riesgo de varianza común, es decir, el director del proyecto podría contaminar todas las mediciones siguiendo una dirección común; como remedio post hoc se ha realizado el test de Harman [39] para las variables incluidas en cada modelo, mediante análisis factorial (componentes principales), obteniendo varios factores de los que el primero explica el 31,39% de la varianza del modelo en el que se ha incorporado el constructo, por lo que no existe problema de varianza común.
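A modo de ilustración de estos dos cálculos (el test de Harman y el alfa de Cronbach que se emplea a continuación), el siguiente esquema en Python es una aproximación propia y no el código empleado en el estudio; los nombres de las funciones son hipotéticos.

import numpy as np

def alfa_cronbach(items):
    """Alfa de Cronbach para una matriz de respuestas
    (filas: encuestados, columnas: items de la escala Likert)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    suma_var_items = items.var(axis=0, ddof=1).sum()
    var_total = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - suma_var_items / var_total)

def harman_primer_factor(items):
    """Proporción de varianza explicada por el primer componente
    principal de la matriz de correlaciones (test de Harman)."""
    corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    autovalores = np.linalg.eigvalsh(corr)  # orden ascendente
    return autovalores[-1] / autovalores.sum()

Un valor de harman_primer_factor claramente inferior a 0,5 (como el 31,39% del estudio) indica que ningún factor único domina la varianza, y valores de alfa_cronbach superiores a 0,7 respaldan la fiabilidad de las escalas agrupadas.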


Las variables se generaron agrupando ítems para la triple restricción y la CAC a partir de un análisis factorial, confirmando la fiabilidad mediante el cálculo de las alfas de Cronbach, todas ellas superiores al valor recomendado de 0,7 [40, p. 816].

4. Resultados

En primer lugar, se ha realizado un análisis descriptivo para estudiar el diferente grado de uso de las MPP según el tamaño de la empresa y la intensidad tecnológica del sector en el que operan (Figs. 3 y 4). Cabe destacar la frecuencia de uso elevada, en todo tipo de tamaños de organización y sectores, de las prácticas comunes en el área europea para la gestión de proyectos, exceptuando la implantación de metodologías ágiles y el disponer de una oficina de proyectos; especialmente esta última puede tener repercusiones para la gestión empresarial en el largo plazo. Todas las herramientas relacionadas con la planificación y la consideración de los requerimientos del cliente están, en general, muy implantadas en las empresas estudiadas.

Las empresas pequeñas y muy pequeñas (que de acuerdo con los umbrales de la Unión Europea tienen menos de 50 y menos de 10 empleados, respectivamente) realizan una gestión comercial más intensa y se encuentran más apegadas a los requerimientos del cliente cuando van a desarrollar un proyecto de I+D+i. Además, sobresalen en el dominio del conocimiento que van a usar en el proyecto, por lo que su especialización resulta mayor. Las empresas que operan en sectores de alta tecnología (de acuerdo con la Clasificación Nacional de Actividades Económicas, CNAE-2009) tienen menor dominio del conocimiento en sus proyectos, debido a que se enfrentan a innovaciones más radicales y a cambios tecnológicos más rápidos, por lo que también son las empresas que más gestionan el riesgo en sus proyectos.

Figura 3. MPP en función del tamaño de empresa. Fuente: Elaboración propia

Figura 4. MPP en función de la intensidad tecnológica del sector. Fuente: Elaboración propia

Los resultados de este análisis indican, entre otros aspectos, que las empresas pequeñas tienen más en cuenta los requisitos del cliente, y que las empresas de sectores con mayor intensidad tecnológica utilizan más las tecnologías de la información y la comunicación (TICs) para gestionar los proyectos de I+D+i. En segundo lugar, para contrastar la relación propuesta en el modelo de investigación entre el uso de MPP y la CAC, se ha efectuado un análisis ANOVA que permite analizar si existen diferencias entre las medias de las poblaciones, así como el efecto de interacción entre las MPP y cada una de las fases de la CAC (Tabla 1). Solamente la práctica de hacer equipo es significativamente diferente en cada una de las fases; por ello se configura como la práctica con mayor impacto y relación en que la empresa sea capaz de identificar, asimilar, transformar y explotar conocimiento externo. La capacidad potencial presenta muy pocas diferencias significativas con el resto de MPP, excepto con el plan de recursos, cuya diferencia es significativa solamente al 10% (p<0,1). En cambio, la capacidad realizada exhibe mayores diferencias con las MPP, por lo que es donde la gestión resulta especialmente significativa, sobre todo en la asignación de responsabilidades y la identificación de riesgos. La estructura de descomposición del proyecto (EDP), que puede estar muy relacionada con el plan de recursos, resulta significativamente diferente en la capacidad realizada, mientras que planificar recursos es más significativo en las primeras fases de la capacidad de absorción, sin serlo en la de explotación (CAC3). Los requisitos del cliente también tienen una alta significatividad en la capacidad realizada, como cabría esperar; ello puede deberse a que muchos proyectos de I+D+i se han basado en nuevas necesidades de los clientes actuales, en la previsión de las necesidades que éstos podrían tener en el futuro o en nuevos mercados a los que se podría vender.
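Como referencia de cálculo, cada contraste F de la Tabla 1 corresponde a un ANOVA de un factor. El siguiente boceto en Python, con datos simulados y nombres de variables hipotéticos (no los microdatos del estudio), ilustra ese contraste para una práctica y una fase de la CAC:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Datos simulados: grado de uso de una MPP (p. ej., hacer equipo) y la
# puntuación agregada de una fase de la CAC; nombres hipotéticos.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "hacer_equipo": rng.integers(1, 4, size=69),   # 1=bajo, 2=medio, 3=alto
    "CAC1": rng.normal(4.5, 1.0, size=69),         # fase de adquisición
})

# ANOVA de un factor: ¿difieren las medias de CAC1 entre niveles de uso?
grupos = [g["CAC1"].to_numpy() for _, g in df.groupby("hacer_equipo")]
F, p = stats.f_oneway(*grupos)
print(f"F = {F:.3f}, p = {p:.3f}")   # análogo a los contrastes F de la Tabla 1
```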

Tabla 1. ANOVA de CAC con cada práctica en gestión de proyectos
                               Capacidad POTENCIAL   Capacidad REALIZADA
Práctica                       CAC1                  CAC2      CAC3
Requerimientos del cliente     1,862                 3,062**   3,619**
Mapa de ruta                   0,069                 1,463     0,224
EDP                            1,033                 1,107     3,787**
Metodologías ágiles            0,575                 0,357     0,21
Asignación responsabilidades   0,732                 2,361*    2,847**
Plan recursos                  2,503*                2,923**   1,794
Hacer equipo                   3,856**               4,115**   4,284**
Gestión del riesgo             1,24                  2,310*    2,282*
Gestión comercial              0,476                 2,343*    1,913
Dominio de conocimiento        0,904                 1,433     1,473
Uso de TICs                    0,679                 1,387     1,792
Oficina de proyectos           0,522                 0,907     1,152
Contraste F con nivel de significación: *p<0,1; **p<0,05.
Fuente: Elaboración propia

Tabla 2. Análisis de regresión del Éxito del proyecto de I+D+i
Experiencia de gestión de la empresa     -0,018 (0,192)
Tamaño                                   -0,021 (0,218)
Dedicación a I+D del personal             0,184* (2,387)
Sector Alta Tecnología                   -0,068 (0,820)
Sector Media Tecnología                   0,025 (0,313)
Triple restricción                        0,688*** (8,126)
Adquisición de conocimiento (CAC1)       -0,350*** (8,881)
Transformación de conocimiento (CAC2)     0,547**** (4,649)
Explotación del conocimiento (CAC3)      -0,064 (0,599)
F (modelo completo): 15,449****; R2 ajustada: 0,661
Variable dependiente: Éxito del proyecto. Coeficientes beta estandarizados; entre paréntesis los valores de la t de Student. Nivel de significación: *p<0,1; **p<0,05; ***p<0,01; ****p<0,001.
Fuente: Elaboración propia

El plan de recursos es significativo en la capacidad potencial, pero también en la capacidad de transformar conocimiento (CAC2), mostrando la necesidad de destinar recursos a adquirir nuevo conocimiento en los proyectos de I+D+i, incluyéndolo en el planeamiento inicial. La falta de significatividad de la relación entre la CAC y tener oficina de proyectos, o un alto grado de uso de metodologías ágiles, ya se apuntaba en el análisis descriptivo: son prácticas de baja frecuencia de uso en todas las empresas estudiadas.

El análisis de regresión que contrasta la relación entre la CAC y el éxito alcanzado en el proyecto es un modelo confiable (F del modelo de 15,449, altamente significativa, y un ajuste de 0,661). De acuerdo con la información que se presenta en la Tabla 2, la CAC2 está relacionada positivamente con el éxito del proyecto, mientras que la CAC1 está relacionada negativamente y la CAC3 no muestra relación significativa. Estos resultados sugieren que para desarrollar con éxito un proyecto de I+D+i es fundamental que la empresa pueda transformar el conocimiento que ha adquirido del exterior, implementándolo en el proyecto que desarrolla (relación positiva de CAC2). La CAC3 parece estar relacionada con tareas propias de la organización que no tienen impacto en el grado de éxito del proyecto (coeficiente beta no significativo); como se apreciaba en el análisis ANOVA incluido en la Tabla 1, se trata fundamentalmente de prácticas de índole comercial, o bien podría señalar una clara desvinculación entre las áreas técnicas y comerciales de la empresa. De este modo, las empresas pueden no estar alcanzando todo el potencial de la organización [41]. La relación negativa entre CAC1 y el éxito percibido del proyecto puede manifestar la dificultad de acceder a nuevo conocimiento una vez que se está desarrollando un proyecto, si dicho conocimiento no se encontraba ya dentro de la empresa. Pero, como se apreciaba en el análisis ANOVA de la Tabla 1, dada su alta relación con el plan de recursos, podría realizarse mejor la planificación del proyecto para considerar su inclusión, en lugar de que el impacto en el éxito se perciba como negativo. Respecto a las variables de control, cabe señalar la significatividad que tiene la triple restricción (p<0,001).


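La estructura del modelo de la Tabla 2 (coeficientes beta estandarizados con sus t de Student, F global y R2 ajustada) puede reproducirse con mínimos cuadrados ordinarios. El siguiente boceto en Python emplea datos simulados y solo algunas de las variables, con nombres hipotéticos:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Datos simulados con nombres (hipotéticos) de algunas variables de la Tabla 2
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(69, 5)),
                  columns=["exito", "triple_restriccion", "CAC1", "CAC2", "CAC3"])

# Estandarizar todas las variables para obtener coeficientes beta comparables
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z[["triple_restriccion", "CAC1", "CAC2", "CAC3"]])
modelo = sm.OLS(z["exito"], X).fit()
print(modelo.summary())  # betas estandarizados, t de Student, F global, R2 ajustada
```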

Aunque se considera que el grado en el que se alcanzan las especificaciones planificadas del proyecto, en presupuesto y plazo, no tiene por qué ser una medida de éxito por sí misma, según la percepción de los directores de proyecto está muy positivamente relacionada con el éxito.

5. Discusión y conclusiones

El análisis efectuado sobre las mejores prácticas en la gestión de proyectos de las empresas estudiadas evidencia cómo dichas prácticas están relacionadas con el desarrollo de innovaciones y con la capacidad de absorción de conocimiento realizada, o habilidad para transformar conocimiento externo y aplicarlo creando un nuevo conocimiento que la organización puede utilizar con fines comerciales. Una mayor capacidad de transformar conocimiento adquirido del exterior podría beneficiar a los proyectos futuros mediante el establecimiento de prácticas y rutinas de gestión óptimas, ya que se ha comprobado que la capacidad de absorción contribuye al éxito de los proyectos de I+D+i y constituye un factor importante en las actividades de aprendizaje para tener éxito a largo plazo en proyectos [4]. Muchas empresas no planifican el proyecto de I+D+i teniendo en cuenta su capacidad para identificar conocimiento externo relacionado con las tecnologías del proyecto (los avances y fusiones de éstas); de ahí que en nuestro análisis el signo sea significativamente negativo. Esto puede deberse a que consideran que dicho conocimiento puede no ser suficiente para realizar una tarea planeada o resolver un problema que surge [26]. Buscar y acceder a nuevo conocimiento externo puede corresponder más a la estructura de la empresa, o a una oficina de proyectos con competencias al respecto, que a la dirección o a la implementación de este nuevo conocimiento en el proyecto. Por ello, basándonos en la percepción del director del proyecto, parece que si se dedican recursos del proyecto en curso a acceder a nuevo conocimiento puede perderse focalización en el proyecto actual e incrementarse costes y plazos. Esta situación abre un nuevo frente de investigación en relación con


las funciones que debe desempeñar una oficina de proyectos dentro de las empresas, o quiénes deben gestionar el conocimiento más puntero dentro de la empresa para que esté disponible, por ejemplo, en los nuevos proyectos de I+D+i.

La gestión del riesgo del proyecto es significativa para la capacidad de transformar conocimiento y obtener un mayor grado de éxito en los proyectos. Es recomendable aumentar su uso a la hora de planificar, ejecutar y controlar el proyecto, estableciendo medidas de riesgo y planes de contingencia (incluso de cancelación anticipada del proyecto de I+D+i si no se alcanzan determinados hitos intermedios). La importancia de los aspectos relacionados con los recursos humanos en la gestión de proyectos de I+D+i se pone de manifiesto en la significatividad de hacer equipo y del grado de aprendizaje de la propia empresa, así como de todos los interesados en la realización de un proyecto. Incorporar aspectos relacionados con la gestión de los recursos humanos, de las comunicaciones, de los grupos de interés y de la integración de actividades del proyecto, al mismo tiempo que se prevé la continuidad del conocimiento para proyectos futuros, es básico para contribuir al éxito de los proyectos de I+D+i actuales a la vez que se apoya el éxito de los siguientes. Ello llevaría a ampliar el abanico de actividades propias de los proyectos de I+D+i más allá de las estrictamente técnicas y de gestión de la tecnología, considerando el conocimiento, el aprendizaje, las personas y su compromiso como factores clave del desarrollo [42] y parte del éxito presente y futuro.

La separación que existe entre transformar y explotar el conocimiento externo, en relación con el éxito de los proyectos de I+D+i, conlleva que las empresas puedan tener buenos equipos de desarrollo de proyectos y, sin embargo, no sean capaces de aprovechar comercialmente esta circunstancia. En entornos complejos se ha demostrado la relación positiva entre el conocimiento del cliente y los resultados de los proyectos [43]. Y aunque nuestros datos muestran la relación significativa entre considerar los requisitos del cliente y la capacidad realizada, la relación entre una de sus partes (la capacidad de explotación) y el éxito del proyecto no ha resultado significativa en la muestra de empresas considerada. En cambio, la capacidad de transformar conocimiento externo está muy positivamente relacionada con el éxito del proyecto de I+D+i. El éxito de un proyecto y los beneficios derivados de su realización se superponen a menudo para conformar sus objetivos, por lo que la definición de éxito siempre es un concepto más amplio que cumplir la planificación estimada. De este modo, el estudio de los factores que determinan directa o indirectamente el éxito de un proyecto, así como de los factores críticos que determinan una medición por la que juzgar su éxito o fracaso, resulta de vital importancia en la literatura académica [34,44]; en la gestión diaria supone considerar el éxito en los proyectos de I+D+i como un todo, asumiendo que durante el tiempo de ejecución de un proyecto, o en su análisis posterior, puede evolucionar la perspectiva acerca de su éxito [45]. Este artículo contribuye a la corriente que estudia cómo el conocimiento externo que adquieren y utilizan las empresas puede ayudar a mejorar tanto sus actividades de innovación en forma de proyectos de I+D+i como sus resultados de innovación [12,13,46], y cómo los factores

relacionados con las prácticas internas de la empresa se alinean en la consecución de nuevos productos, procesos y formas de organización [47]. Es una cuestión muy actual sobre la que se sigue investigando. El estudio de Leal-Rodríguez et al. (2014) [48] comprueba que no existe una relación directa entre capacidad potencial y resultados de innovación. Pero no se trata solo de gestionar la acumulación de conocimiento externo, sino también de adaptar las capacidades de la empresa para sistematizar, coordinar y socializar, con el fin de tener éxito con la innovación estratégica [49]. En este sentido, la capacidad realizada desempeña un papel esencial y, aunque ésta se ha vinculado a la obtención de beneficios a corto plazo [50], la obtención de éxito a largo plazo en proyectos e, incluso, en la cartera de proyectos quedó demostrada por Biedenbach y Müller (2012) [4].

Estas conclusiones e implicaciones deben interpretarse dentro del marco de limitaciones del estudio, principalmente las derivadas del reducido tamaño de la muestra y del uso de datos transversales. Futuros estudios podrían incorporar un mayor número de observaciones con datos longitudinales, que permitan validar la causalidad de las relaciones encontradas entre cada una de las MPP y la CAC, además de la relación entre las fases de la capacidad de absorción y el éxito de los proyectos. El estudio empírico realizado contribuye a mantener el extenso debate académico acerca del diseño original de Cohen y Levinthal [1], la reconceptualización de Zahra y George [21] y los diversos autores posteriores que han confeccionado escalas de medición de la capacidad de absorción, así como sobre la necesidad de validar un constructo que recoja las mejores prácticas para gestionar los proyectos de I+D+i. El presente trabajo y sus conclusiones invitan a investigar otros aspectos relativamente poco explorados hasta el momento, como la contribución de los conocimientos adquiridos en el desarrollo de proyectos de I+D+i, mediante su adecuada gestión por parte de la empresa, a los resultados empresariales futuros o a la estrategia tecnológica.

Agradecimientos

A la labor de los revisores de DYNA-Colombia por sus aportaciones para presentar los análisis con mayor riqueza y precisión de conceptos; y a las empresas que colaboraron con nuestro estudio por su atención.

References

[1] Cohen, W.M. and Levinthal, D.A., Innovation and learning: The two faces of R&D. The Economic Journal, 99 (397), pp. 569-596, 1989. DOI: 10.2307/2233763.
[2] Gallego, J., El cambio tecnológico y la economía neoclásica. DYNA, 70 (138), pp. 67-78, 2003.
[3] Bakker, R.M., Cambré, B., Korlaar, L. and Raab, J., Managing the project learning paradox: A set-theoretic approach toward project knowledge transfer. International Journal of Project Management, 29 (5), pp. 494-503, 2011. DOI: 10.1016/j.ijproman.2010.06.002.
[4] Biedenbach, T. and Müller, R., Absorptive, innovative and adaptive capabilities and their impact on project and project portfolio performance. International Journal of Project Management, 30 (5), pp. 621-635, 2012. DOI: 10.1016/j.ijproman.2012.01.016.
[5] Hoang, H. and Rothaermel, F.T., Leveraging internal and external experience: Exploration, exploitation and R&D project performance. Strategic Management Journal, 31, pp. 734-758, 2010. DOI: 10.1002/smj.834.


[6] López-Paredes, A., Pajares-Gutiérrez, J. and Galán-Ordax, J., La empresa como cartera de proyectos y programas. DYNA Ingeniería e Industria, 85 (1), pp. 39-46, 2010. DOI: 10.6036/2927.
[7] Miller, D.J., Fern, M.J. and Cardinal, L.B., The use of knowledge for technological innovation within diversified firms. Academy of Management Journal, 50 (2), pp. 308-326, 2007. DOI: 10.5465/AMJ.2007.24634437.
[8] Porter, L.J., Knowledge, strategy, the theory of the firm. Strategic Management Journal, 17 (Winter Special Issue), pp. 93-107, 1996.
[9] Cassiman, B., Di Guardo, M.C. and Valentini, G., Organizing links with science: Cooperate or contract? Research Policy, 39 (7), pp. 882-892, 2010. DOI: 10.1016/j.respol.2010.04.009.
[10] Henderson, R. and Cockburn, I., Measuring competence? Exploring firm effects in pharmaceutical research. Strategic Management Journal, 15 (Winter Special Issue), pp. 63-84, 1994. DOI: 10.2307/2486811.
[11] Liao, S.H., Fei, W.C. and Chen, C.C., Knowledge sharing, absorptive capacity and innovation capability: An empirical study of Taiwan's knowledge-intensive industries. Journal of Information Science, 33 (3), pp. 340-359, 2007. DOI: 10.1177/0165551506070739.
[12] Escribano, A., Fosfuri, A. and Tribó, J.A., Managing external knowledge flows: The moderating role of absorptive capacity. Research Policy, 38 (1), pp. 96-105, 2009. DOI: 10.1016/j.respol.2008.10.022.
[13] Fosfuri, A. and Tribó, J.A., Exploring the antecedents of potential absorptive capacity and its impact on innovation performance. Omega, 36 (2), pp. 173-187, 2008. DOI: 10.1016/j.omega.2006.06.012.
[14] Arbussà, A., Bikfalvi, A. y Valls, J., La I+D en las pymes: Intensidad y estrategia. Universia Business Review, Primer trimestre, pp. 41-49, 2004.
[15] Chen, Y-S., Lin, M.J. and Chang, C-H., The positive effects of relationship learning and absorptive capacity on innovation performance and competitive advantage in industrial markets. Industrial Marketing Management, 38 (2), pp. 152-158, 2009. DOI: 10.1016/j.indmarman.2008.12.003.
[16] Koza, M.P. and Lewin, A.Y., The co-evolution of strategic alliances. Organization Science, 9 (3), pp. 255-264, 1998. DOI: 10.1287/orsc.9.3.255.
[17] Volberda, H.W., Foss, N.J. and Lyles, M., Absorbing the concept of absorptive capacity: How to realize its potential in the organization field. Organization Science, 21 (4), pp. 931-951, 2010. DOI: 10.1287/orsc.1090.0503.
[18] Veugelers, R. and Cassiman, B., R&D cooperation between firms and universities. Some empirical evidence from Belgian manufacturing. International Journal of Industrial Organization, 23 (5-6), pp. 355-379, 2005. DOI: 10.1016/j.ijindorg.2005.01.008.
[19] Arora, A. and Gambardella, A., Evaluating technological information and utilizing it. Journal of Economic Behavior & Organization, 24 (1), pp. 91-114, 1994. DOI: 10.1016/0167-2681(94)90055-8.
[20] Arbussà, A. and Coenders, G., Innovation activities, use of appropriation instruments and absorptive capacity: Evidence from Spanish firms. Research Policy, 36 (10), pp. 1545-1558, 2007. DOI: 10.1016/j.respol.2007.04.013.
[21] Zahra, S.A. and George, G., Absorptive capacity: A review, reconceptualization, and extension. The Academy of Management Review, 27 (2), pp. 185-203, 2002. DOI: 10.2307/4134351.
[22] Lane, P.J., Koka, B.B. and Pathak, S., The reification of absorptive capacity: A critical review and rejuvenation of the construct. Academy of Management Review, 31 (4), pp. 833-863, 2006.
[23] Tatikonda, M.V. and Rosenthal, S.R., Successful execution of product development projects: Balancing firmness and flexibility in the innovation process. Journal of Operations Management, 18 (4), pp. 401-425, 2000. DOI: 10.1016/S0272-6963(00)00028-0.
[24] Wit, A.D., Measurement of project success. International Journal of Project Management, 6 (3), pp. 164-170, 1988. DOI: 10.1016/0263-7863(88)90043-9.
[25] Gasik, S., A model of project knowledge management. Project Management Journal, 42 (3), pp. 23-44, 2011. DOI: 10.1002/pmj.20239.
[26] Project Management Institute, Guía del PMBOK®, 1987.

[27] Turner, J.R., Ledwith, A. and Kelly, J., Project management in small to medium-sized enterprises: Matching processes to the nature of the firm. International Journal of Project Management, 28 (8), pp. 744-755, 2010. DOI: 10.1016/j.ijproman.2010.06.005.
[28] Atkinson, R., Project management: Cost, time and quality, two best guesses and a phenomenon, it's time to accept other success criteria. International Journal of Project Management, 17 (6), pp. 337-342, 1999. DOI: 10.1016/S0263-7863(98)00069-6.
[29] Barragán-Ocaña, A. and Zubieta-García, J., Critical factors toward successful R&D projects in public research centers: A primer. Journal of Applied Sciences and Technologies, 11 (6), pp. 866-875, 2013.
[30] Martínez-Sánchez, A. and Pérez-Pérez, M., R&D project efficiency management in the Spanish industry. International Journal of Project Management, 20 (20), pp. 545-560, 2002.
[31] UNE 166001:2006, Gestión de la I+D+i: Requisitos de un proyecto de I+D+i, 2006.
[32] Munns, A. and Bjeirmi, B., The role of project management in achieving project success. International Journal of Project Management, 14 (2), pp. 81-87, 1996. DOI: 10.1016/0263-7863(95)00057-7.
[33] Baccarini, D., The logical framework method for defining project success. Project Management Journal, 30 (4), pp. 25-32, 1999.
[34] Cooke-Davies, T., The "real" success factors on projects. International Journal of Project Management, 20, pp. 185-190, 2002.
[35] Söderlund, J., Building theories of project management: Past research, questions for the future. International Journal of Project Management, 22 (3), pp. 183-191, 2004. DOI: 10.1016/S0263-7863(03)00070-X.
[36] Johnston, R. and Gibbons, M., Characteristics of information usage in technological innovation. IEEE Transactions on Engineering Management, EM-22, pp. 27-34, 1975.
[37] Dvir, D., Lipovetsky, S., Shenhar, A. and Tishler, A., In search of project classification: A non-universal approach to project success factors. Research Policy, 27 (9), pp. 915-935, 1998. DOI: 10.1016/S0048-7333(98)00085-7.
[38] Flatten, T.C., Engelen, A., Zahra, S.A. and Brettel, M., A measure of absorptive capacity: Scale development and validation. European Management Journal, 29 (2), pp. 98-116, 2011. DOI: 10.1016/j.emj.2010.11.002.
[39] Podsakoff, P.M. and Organ, D.W., Self-reports in organizational research: Problems and prospects. Journal of Management, 12, pp. 531-544, 1986.
[40] Hair, J.F., Black, W.C., Babin, B.J. and Anderson, R.E., Multivariate data analysis, 7/E. Homewood: Pearson Prentice Hall, 2010. ISBN-10: 0138132631.
[41] Eriksson, P.E., Exploration and exploitation in project-based organizations: Development and diffusion of knowledge at different organizational levels in construction companies. International Journal of Project Management, 31 (3), pp. 333-341, 2012. DOI: 10.1016/j.ijproman.2012.07.005.
[42] Formoso, F., Couce-Carral, L., Iglesias-Rodríguez, B., Castro-Ponte, A. y Rodríguez-Guerreiro, M.J., La integración de los sistemas de gestión. Necesidad de una nueva cultura empresarial. DYNA-Colombia, 78 (167), pp. 44-49, 2011.
[43] Yang, L-R., Huang, C-F. and Hsu, T-J., Knowledge leadership to improve project and organizational performance. International Journal of Project Management, 32 (1), pp. 40-53, 2014. DOI: 10.1016/j.ijproman.2013.01.011.
[44] De Wit, A., Measurement of project success. International Journal of Project Management, 6 (3), pp. 164-170, 1988.
[45] Pinto, J.K. and Slevin, D.P., Critical success factors across the project life cycle. Project Management Journal, 19 (3), pp. 67-75, 1988.
[46] Stock, G.N., Greis, N.P. and Fischer, W., Absorptive capacity and new product development. The Journal of High Technology Management Research, 12 (1), pp. 77-91, 2001. DOI: 10.1016/S1047-8310(00)00040-7.
[47] Schreyögg, G. and Duchek, S., Absorptive capacity und ihre Determinanten: Ergebnisse einer Fallstudienanalyse in deutschen Hightech-Unternehmen, 21 (2012), pp. 204-218, 2012.
[48] Leal-Rodríguez, A.L., Roldán, J.L., Ariza-Montes, J.A. and Leal-Rodríguez, A., From potential absorptive capacity to innovation outcomes in project teams: The conditional mediating role of the realized absorptive capacity in a relational learning context. International Journal of Project Management, 32, pp. 894-907, 2014. DOI: 10.1016/j.ijproman.2014.01.005.



[49] Gebauer, H., Worch, H. and Truffer, B., Absorptive capacity, learning processes and combinative capabilities as determinants of strategic innovation. European Management Journal, 30 (1), pp. 57-73, 2012. DOI: 10.1016/j.emj.2011.10.004.
[50] Jansen, J.J.P., Van den Bosch, F.A.J. and Volberda, H.W., Managing potential and realized absorptive capacity: How do organizational antecedents matter? Academy of Management Journal, 48 (6), pp. 999-1015, 2005. DOI: 10.5465/AMJ.2005.19573106.

S. Vicente-Oliva, es Dra. Desempeñó labores de promoción tecnológica y gestión de proyectos de I+D+i en la Oficina de Transferencia de Resultados de Investigación (OTRI) de la Universidad de Zaragoza, España. Ha desarrollado proyectos en colaboración con empresas para la innovación e internacionalización de producto, la diversificación y la prospectiva tecnológica.

Á. Martínez-Sánchez, es Dr. e Ing. Industrial, catedrático de Universidad. Sus investigaciones desde hace más de dos décadas tienen como objeto la innovación y el cambio tecnológico, la flexibilidad de las organizaciones, la adopción del teletrabajo y, más recientemente, la prospectiva tecnológica.

L. Berges, es Dr. Ing. Industrial, profesor titular con acreditación de catedrático de Universidad. Ha desempeñado numerosos cargos, como Vicerrector de Infraestructuras y Servicios Universitarios; Director del Departamento de Ingeniería de Diseño y Fabricación de Zaragoza; y Director de la Oficina de Transferencia de Resultados de Investigación (OTRI) de la Universidad de Zaragoza.



Technologies for the removal of dyes and pigments present in wastewater. A review

Leonardo Fabio Barrios-Ziolo a, Luisa Fernanda Gaviria-Restrepo b, Edison Alexander Agudelo c & Santiago Alonso Cardona-Gallo d

a Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. lfbarriosz@unal.edu.co
b Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. lufgaviriare@unal.edu.co
c Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. eaagudelo@unal.edu.co
d Dpto. de Geociencias y Medio Ambiente, Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. scardona@unal.edu.co

Received: April 2nd, 2014. Received in revised form: October 28th, 2014. Accepted: March 23rd, 2015.

Abstract
Dyes and pigments are beginning to be considered in Colombia as compounds that may have toxicological characteristics beyond the merely aesthetic impact on wastewater. This review groups the most effective treatments for the removal, destruction and mineralization of dyes and pigments present in wastewater, as a function of the physicochemical properties of their constituent molecules. The removal kinetics of BOD, COD and "real" and "apparent" colour in effluents, together with the operating times, were studied in order to determine the set of physical, chemical, biological and combined technologies of greatest importance and influence nowadays. Among the most relevant treatment technologies are adsorption and filtration, advanced oxidation technologies (photocatalysis, ozonation, Fenton/UV, electrocoagulation, etc.) and sequential biological processes (of the anaerobic-aerobic type). The influence of variables such as pH, initial dye concentration and solubility, among others, on the removal kinetics of specific dyes is made evident.
Keywords: Dyes, pigments, wastewater treatment technologies, removal of color and pigments.

Tecnologías para la remoción de colorantes y pigmentos presentes en aguas residuales. Una revisión

Resumen
Los colorantes y pigmentos están comenzando a ser considerados en el país como compuestos que pueden presentar características toxicológicas más allá de los aspectos estéticos en las aguas residuales. El estado del arte presenta los tratamientos más efectivos para la remoción, destrucción y mineralización de colorantes y pigmentos presentes en aguas residuales, en función de las propiedades fisicoquímicas de las moléculas constituyentes. Las cinéticas de remoción de DBO5, DQO y color "real" y "aparente" en los efluentes, además de los tiempos de operación, fueron estudiadas para determinar el conjunto de tecnologías físicas, químicas, biológicas y combinadas de mayor importancia e influencia en la actualidad. Entre las tecnologías de tratamiento más relevantes se destacan los procesos de adsorción y filtración, las tecnologías avanzadas de oxidación (fotocatálisis, ozonación, fenton/UV, electrocoagulación, etc.) y los procesos biológicos secuenciales (del tipo anaerobio-aerobio). Se evidenció la influencia de variables como el pH, la concentración inicial del colorante y la solubilidad, entre otras, sobre las cinéticas de remoción de colorantes específicos.
Palabras clave: Colorantes, pigmentos, aguas residuales, tecnologías de tratamiento de aguas residuales, remoción de colorantes y pigmentos.

1. Introducción

El deterioro ambiental, que cada día se agudiza como consecuencia del desarrollo de actividades humanas sin cuidado por el medio ambiente, plantea retos cada vez más

importantes en áreas de la ciencia como la ingeniería ambiental. Uno de estos retos es la depuración de las aguas residuales. Entre la variedad de sustancias contaminantes descargadas es posible relacionar desde metales pesados (mercurio, cadmio, cromo, arsénico, plomo, etc.),

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 118-126. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.42924



contaminantes emergentes (compuestos orgánicos persistentes, disruptores endocrinos, antibióticos, etc.), hidrocarburos y materia orgánica, hasta compuestos que producen la coloración de los efluentes (colorantes y pigmentos) [1,2]. Estas últimas sustancias han llamado la atención tanto de las autoridades nacionales, el Ministerio de Ambiente y Desarrollo Sostenible (MADS), como de la autoridad ambiental a nivel local (Área Metropolitana del Valle de Aburrá-Municipio de Medellín). Medellín es la segunda ciudad en importancia de Colombia y la primera en materia de políticas públicas para el saneamiento ambiental; por ello, los reiterados vertimientos de efluentes coloreados realizados por empresas al río Aburrá-Medellín han generado preocupación no solo en las autoridades locales sino también en la comunidad en general, debido a que este tipo de eventos evidencia el alto grado de afectación que sufren diariamente los recursos naturales y muestra la despreocupación de las empresas por tratar sus efluentes y cuidar que no afecten el medio ambiente natural. De acuerdo con trabajos publicados por la Cámara de Comercio de Medellín para Antioquia [3] sobre los sectores productivos más importantes con presencia en el Área Metropolitana, se pudo establecer que los sectores que mayor impacto pueden generar sobre el río en términos de vertimientos coloreados, tanto por su cantidad como por su calidad, son el sector textil, el de alimentos y bebidas, las curtimbres y el de productos químicos, entre otros.

De acuerdo con Doerner [4], los colorantes, a diferencia de los pigmentos, son solubles en un medio o solvente específico; esta definición implica que se designe como "colorante" o "pigmento" a una misma sustancia dependiendo del tipo de solvente en el cual se encuentre. Por lo tanto, la solubilidad es una propiedad física importante en las investigaciones relacionadas con colorantes y pigmentos, cuyo conocimiento contribuye al desarrollo de tecnologías de tratamiento más eficientes. Las sustancias que producen la coloración de las aguas residuales pueden agruparse en dos categorías: compuestos de origen sintético y compuestos naturales. Los colorantes sintéticos presentan propiedades únicas como su alta solidez en medio húmedo, la generación de tonos brillantes en las superficies y su relativo bajo costo de producción, además de su buena solubilidad y de su resistencia a la luz solar, al contacto con el agua y al ataque de una variedad de compuestos químicos [5,6], por lo cual resultan atractivos para las industrias textiles, de cueros, imprenta y pinturas [6]. En el caso de los colorantes sintéticos basados en bencidina (entre otros compuestos constituyentes de las aguas residuales coloreadas), la literatura reporta algunas características tóxicas (mutagénicas y cancerígenas) en humanos, asociadas en algunos casos a la generación de subproductos del metabolismo aeróbico y/o anaeróbico de una variedad de microorganismos [6,7]. Las bajas concentraciones de colorantes y pigmentos en el agua disminuyen gradualmente la penetración de la luz, con efectos significativos sobre la flora acuática, generando así una reducción de la actividad fotosintética y del oxígeno disuelto en el medio [8,9]. El grupo de colorantes sintéticos más utilizado en el mundo, el tipo azo (-N=N-, grupo funcional cromóforo), representa el 70% de los colorantes encontrados en las aguas residuales municipales [5]; la presencia de estas sustancias se

atribuye a la capacidad que posee el grupo -N=N- de ser sustituido por una variedad de estructuras orgánicas e inorgánicas que otorgan propiedades químicas específicas a cada molécula; por esta razón existen más de 3000 variedades de colorantes azo [5]. Los colorantes naturales (los cuales se extraen de fuentes primarias de la naturaleza) se presentan en menor proporción que los sintéticos. Generalmente son polímeros con una amplia variedad de grupos funcionales y estructuras químicas orgánicas complejas (como ciclos y grupos aromáticos) que pueden afectar también los ecosistemas acuáticos y la salud humana [10].

La clasificación de colorantes y pigmentos puede realizarse considerando las propiedades fisicoquímicas de los grupos funcionales constitutivos. Estas propiedades, de acuerdo con muchos autores, son claves en la selección de una tecnología o grupo de tecnologías aplicables para la decoloración de aguas residuales contaminadas. Tomando como referencia el diccionario de química de la Universidad de Oxford [11], es posible establecer diferencias entre los grupos de colorantes en función de la forma de aplicación del tinte o del soporte sobre el sustrato utilizado. De esta manera se presentan diferentes compuestos generadores de color:

Ácidos: cuyo cromóforo hace parte de un ión negativo; utilizados para teñir fibras proteicas (lana y seda), poliamidas y fibras sintéticas. Aplicados en las industrias de alimentos, imprenta, cuero, madera y nylon. Solubles en agua.

Básicos: poseen un cromóforo que forma parte de un ión positivo (generalmente una sal de amina o un grupo imino ionizado); utilizados para teñir fibras acrílicas y en la síntesis de nylon modificado, poliéster modificado y muchos medicamentos. Solubles en agua.

Dispersos: tintes insolubles que se aplican formando una dispersión muy fina en el agua. Se usan para teñir acetato de celulosa y otras fibras sintéticas (poliéster y fibras acrílicas).

Directos: presentan una gran afinidad por materiales de algodón, rayón y otras fibras de celulosa; generalmente son sales de ácidos sulfónicos. Solubles en agua.

Reactivos: presentan grupos capaces de reaccionar con el sustrato formando enlaces covalentes; usados para teñir fibras de celulosa y algodón, en general.

Baño: sustancias insolubles usadas para teñir algodón. Suelen presentar grupos cetónicos (C=O). Este grupo de colorantes es oxidado por acción del aire y precipitado en forma de pigmento sobre las fibras; el índigo y la antraquinona son ejemplos de este grupo. Se aplican sobre algodón y fibras de celulosa.

De acuerdo con las revisiones de Robinson [12] y Eren [13], entre los procesos de tratamiento más utilizados en la actualidad para la remoción de colorantes y pigmentos presentes en aguas residuales se destacan los sistemas de oxidación y mineralización avanzada, pertenecientes a la categoría de procesos químicos; estos comprenden la reacción fenton (óptima para la decoloración de efluentes con presencia de colorantes y pigmentos), la ozonación (aplicada en estado gaseoso, la cual no incrementa el volumen de tratamiento) y la electrocoagulación, entre otras. Estas tecnologías producen, en general, grandes volúmenes de lodos que podrían ser reducidos a partir de la implementación


de reactores fotocatalíticos y de la tecnología de ultrasonido (efectiva en el rompimiento de estructuras moleculares cíclicas, entre otras). Otro grupo de tecnologías apropiadas para la remoción de color, en particular de pigmentos, corresponde a los procesos físicos para el tratamiento de efluentes coloreados; entre ellos, los sistemas de filtración y los procesos de adsorción mediante el uso de materiales como carbón activado y residuos agroindustriales (viruta, aserrín, bagazo de caña, cascarilla de arroz, entre otros), los cuales pueden ser selectivos y óptimos para un grupo particular de colorantes. Los procesos biológicos aplicados en el tratamiento de colorantes y pigmentos se subdividen teniendo en cuenta el tipo de aceptor final de electrones; de esta manera se presentan procesos biológicos aerobios y anaerobios, cuyas desventajas se asocian a los grandes tiempos de operación requeridos para alcanzar tasas óptimas de remoción de color.

A continuación se presentan las investigaciones sobre la aplicación de tecnologías para la decoloración de efluentes contaminados con los diferentes tipos de colorantes y pigmentos. Como resultado del estudio de las variables de mayor contribución dentro de cada subgrupo de tecnologías de remoción, destrucción y mineralización, se presentarán los intervalos de operación de cada tratamiento y los porcentajes de remoción del color "real" y/o "aparente", evaluando los tiempos de operación y la disminución de la demanda química y biológica de oxígeno en el efluente. Se conocerán, además, los factores físicos, químicos y biológicos que más influyen en los procesos de decoloración. Esta información es clave para la selección de la tecnología o grupos de tecnologías más eficientes para el tratamiento de colorantes y pigmentos, considerando la diversidad de sustancias generadoras de color en el mundo y, en especial, los compuestos utilizados a nivel local en los sectores productivos relacionados.

2. Colorantes y pigmentos: Tecnologías de remoción, destrucción y mineralización

Al analizar el comportamiento de las sustancias que producen la coloración de las aguas residuales, se pueden definir cuatro tipos de tecnologías para el tratamiento de los efluentes contaminados, las cuales se agrupan dentro de las categorías físicas, químicas, biológicas y combinadas.

2.1. Tratamientos físicos

Entre las principales tecnologías para el tratamiento físico de efluentes contaminados por la presencia de colorantes y pigmentos en el medio acuoso se relacionan, como las más importantes, los procesos de adsorción, los sistemas de filtración y las resinas de intercambio iónico. Entre los materiales adsorbentes reportados en la literatura con mejores porcentajes de remoción de color se encuentran desde residuos agroindustriales de bajo costo como palma de aceite, viruta, aserrín, bambú, algas, hojas de pino, tallos de canola y quitosano, entre otros, hasta minerales como lignito, magnetita, carbón activado, bentonita, etc.

Producto de las interacciones electrostáticas entre los materiales adsorbentes y los compuestos que producen la coloración de las aguas residuales, el proceso de adsorción se ve afectado por las condiciones del medio (pH y temperatura), las características moleculares de los colorantes (grupos funcionales constitutivos) y el tiempo de contacto, entre otras variables [14,15]. Estas interacciones electrostáticas pueden ser mejoradas mediante el pre-tratamiento de los materiales adsorbentes utilizando agentes químicos modificadores [16-18], los cuales actúan a nivel de superficie causando la protonación o desprotonación de las moléculas expuestas del material adsorbente. Existe una influencia significativa del pH sobre las cinéticas de adsorción: las condiciones ácidas, en general, favorecen la remoción de los grupos de colorantes ácidos, directos, reactivos y dispersos, mientras que los medios alcalinos incrementan la remoción de colorantes básicos [15,17,19-22]. En el caso de los colorantes que en solución o en medio acuoso presentan valores de pH alcalinos, los procesos de adsorción más eficientes emplean como adsorbentes materiales de origen vegetal como viruta y hojas de pino, con porcentajes de remoción de color mayores al 90% y hasta del 98% respecto a otros materiales adsorbentes [17]. El carbón activado puede llegar a remover por adsorción hasta el 96% del color generado por colorantes directos [16,20] y el 70% del color asociado a los tipos reactivo y disperso [19,23], antes de la saturación del material adsorbente. Materiales como el quitosano y el fosfato de calcio alcanzan porcentajes de remoción de colorantes reactivos del 90% [15,24], mientras que los polímeros ligados a magnetita y algunos residuos agroindustriales, como los tallos de canola, presentan cinéticas de remoción significativas (>90%) sobre colorantes ácidos [21,22].
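Las cinéticas de adsorción comentadas en esta sección suelen describirse con modelos sencillos como el de pseudo-segundo orden. El siguiente boceto en Python, con datos hipotéticos que no provienen de los estudios citados, ilustra el ajuste de su forma linealizada para estimar la capacidad en el equilibrio (qe) y la constante cinética (k2):

```python
import numpy as np

# Datos hipotéticos: cantidad adsorbida qt [mg/g] frente al tiempo t [min]
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
qt = np.array([12.0, 18.5, 24.0, 27.5, 28.8, 29.4, 29.7])

# Forma linealizada del modelo de pseudo-segundo orden:
#   t/qt = 1/(k2*qe^2) + t/qe
pendiente, intercepto = np.polyfit(t, t / qt, 1)
qe = 1.0 / pendiente             # capacidad de adsorción en el equilibrio [mg/g]
k2 = pendiente**2 / intercepto   # constante cinética [g/(mg·min)]
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg·min)")
```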


2.2. Tratamientos químicos

En el grupo de procesos químicos implementados para el tratamiento de efluentes contaminados con sustancias colorantes se presentan algunas técnicas de oxidación química como los procesos de ozonación, fenton, ultrasonido, fotocatálisis (ultravioleta) y oxidantes convencionales (peróxido de hidrógeno), además de los procesos de coagulación/floculación y electrocoagulación, entre otras tecnologías. Dentro de este grupo, la fotocatálisis y el tratamiento fenton/UV presentan porcentajes de remoción de color cercanos al 100%. La remoción de colorantes utilizando sistemas de ozonación puede ser favorecida por las condiciones ácidas del medio y el uso de iones metálicos como hierro y manganeso, entre otras variables; no obstante, al igual que en los procesos de adsorción, existe una gran contribución de la naturaleza del colorante (ácido, básico, reactivo, etc.) sobre el intervalo óptimo de operación de la tecnología. Utilizando un sistema de ozonación es posible alcanzar porcentajes de remoción entre el 98 y el 100% para los diferentes tipos de colorantes, en periodos inferiores a los 15 minutos, con concentraciones iniciales de colorantes entre 86 y 2000 mg/L [25-27]. La reducción en los valores de demanda química de oxígeno (DQO) reportados para la ozonación puede variar, en general, entre el 10 y el 48% [25,26], mientras que los flujos óptimos de ozono se mantienen alrededor de los 15 litros/min o 360 mg O3/h [26,27].

Otro tipo de tecnología utilizada para la remoción de color en aguas residuales es la electrocoagulación. Esta tecnología utiliza valores de densidad de corriente entre 4,45 y 200 A/m2 [28-34], alcanzando porcentajes de remoción de color y DQO superiores al 90 y al 80%, respectivamente, en tiempos de operación inferiores a 120 minutos para los diferentes tipos de colorantes evaluados [29-31,35,36]. La cinética de remoción de color puede ser influenciada por el tipo de electrodo implementado, el cual puede ser de aluminio, hierro, grafito, acero, dióxido de titanio, plomo o platino, entre otros materiales [28,32-37].

La acción del tratamiento fenton/UV sobre los diferentes tipos de colorantes y pigmentos mejora las cinéticas de remoción del carbono orgánico total (COT >70%) y de la demanda química de oxígeno, manteniendo porcentajes de decoloración cercanos al 100% en periodos inferiores a los 20 minutos [38-42]. Las cinéticas se optimizan al aumentar la concentración de peróxido de hidrógeno y de hierro (II)/(III). Los intervalos de tiempo requeridos para alcanzar los máximos de remoción varían dependiendo de la relación entre la concentración inicial del colorante, la concentración del ion Fe (II), la concentración de peróxido y el tiempo de irradiación. La concentración inicial de colorantes fue evaluada entre 40 y 500 mg/L [38-40,42]. Los procesos de coagulación-floculación utilizan diferentes especies químicas para remover el color y la carga contaminante presentes en los efluentes. Entre los compuestos utilizados, la especie de aluminio Al13 presenta mayores cinéticas de remoción de color respecto a las sales comerciales de aluminio [43]. No obstante, agentes como el sulfato de aluminio y el cloruro férrico pueden remover entre el 53 y el 100% del color de una solución de colorante de 40 a 4000 mg/L, en periodos de operación inferiores a las 2 horas [37,44]. Los procesos fotocatalíticos, análogos al proceso fenton/UV, utilizan como principal catalizador el dióxido de titanio (TiO2). A partir de él es posible mineralizar el COT en un 60%, mientras se remueve entre el 90 y el 100% del color, en periodos inferiores a los 120 minutos [45]. La fotocatálisis con TiO2 puede ser optimizada modificando este material con iones de plata (Ag+), entre otros [10].

En el grupo de tecnologías más destacadas para el tratamiento químico de colorantes, el proceso fotocatalítico alcanzó la máxima cinética de remoción de color (100%), junto con el tratamiento fenton/UV, durante periodos de operación inferiores a los 90 y 20 minutos, respectivamente, obteniendo además porcentajes de remoción de COT y DQO entre el 80 y el 100%. Entre los factores clave para potencializar los rendimientos de las reacciones fenton, las concentraciones de Fe (II) y de peróxido de hidrógeno fueron importantes para obtener cinéticas de remoción de color significativas. Teniendo en cuenta la capacidad que tienen los procesos fenton para remover los grupos de colorantes previamente definidos, es posible considerarlos entre las tecnologías adecuadas para el tratamiento de un amplio grupo de colorantes, en función no solo de la eficiencia de remoción de color, sino también de los valores de DQO y COT. Estas eficiencias pueden ser incrementadas por la generación adicional de radicales hidroxilo resultantes del tratamiento con irradiación UV; dos fuentes potenciales de agentes oxidantes dentro de una misma tecnología reducen el efecto de factores externos como la presencia de sales, el pH y la temperatura, entre otros.
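En el caso de la electrocoagulación, la dosis teórica de coagulante liberada por los electrodos puede estimarse con la ley de Faraday, m = I·t·M/(z·F). El siguiente boceto en Python supone un electrodo de aluminio y valores ilustrativos dentro del rango de densidades de corriente citado (el área del electrodo y el tiempo de operación son supuestos):

```python
# Constantes físicas y parámetros ilustrativos (electrodo de aluminio)
F = 96485.0    # constante de Faraday [C/mol]
M_AL = 26.98   # masa molar del aluminio [g/mol]
Z = 3          # electrones intercambiados (Al -> Al3+)

densidad_corriente = 100.0   # A/m2, dentro del rango citado (4,45-200 A/m2)
area_electrodo = 0.02        # m2 (valor supuesto)
t_operacion = 60 * 60.0      # 60 min de electrólisis, en segundos

# Ley de Faraday: masa de coagulante liberada m = I*t*M/(z*F)
corriente = densidad_corriente * area_electrodo   # [A]
m_al = corriente * t_operacion * M_AL / (Z * F)   # [g] de Al3+ dosificados
print(f"Dosis teórica de coagulante: {m_al:.2f} g de Al")
```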

Las tecnologías de tratamiento basadas en la electrocoagulación fueron favorecidas por la aplicación de electrodos de óxido de titanio, encontrándose porcentajes de remoción de color superiores al 90%. Las condiciones ácidas (bajos valores de pH) incrementaron los porcentajes de remoción con la ozonación.

2.3. Tratamientos biológicos

La actividad metabólica de los microorganismos puede ser optimizada con la adición de co-sustratos o fuentes secundarias de carbono y energía, las cuales pueden acelerar la asimilación del colorante (fuente de energía objetivo). Los procesos biológicos implementados evalúan la influencia de factores como el modo de operación del reactor (batch, fed-batch o continuo) y de variables como la temperatura del medio, el pH, la concentración inicial de colorante y la concentración de microorganismos, además de la presencia de oxígeno en el sistema. Los tratamientos biológicos permiten obtener porcentajes de remoción de color y COT significativos; sin embargo, se desarrollan a bajas velocidades, lo que incrementa sustancialmente los tiempos de tratamiento. En condiciones adecuadas, los organismos pueden reducir una diversidad de sustancias químicas recalcitrantes; la diversidad biológica hace posible encontrar enzimas y microorganismos especializados en la degradación de colorantes específicos.

Entre los procesos aplicados para el tratamiento biológico de efluentes con presencia de colorantes y pigmentos, los tratamientos anaerobios producen una remoción de color y DQO entre el 80 y el 100% en periodos que oscilan entre los 2 y los 58 días [46,47]. Los tratamientos aerobios de mayor importancia reportados en la literatura tienen como base los sistemas de lodos activados y el uso de hongos como Phanerochaete chrysosporium y Pleurotus sajor-caju, entre otros [49-51]. Bajo condiciones anaerobias es posible remover hasta el 95% de colorantes como el azul índigo, durante 5 días de operación [41,46]. Los reactores anaerobios de flujo ascendente de película fija, acoplados a sistemas de reactores aeróbicos, pueden alcanzar cinéticas de remoción de color y DQO superiores al 98 y al 95%, respectivamente, durante 16 horas de operación [52]. La secuencia de tratamiento anaerobia-aerobia a partir de consorcios microbianos presenta rendimientos en los valores de remoción de color, DQO y DBO sobre el 60, 80 y 90%, respectivamente, con la utilización de co-sustratos como almidón, glucosa y ácido acético, y con tiempos de residencia hidráulicos de 16 horas y 8 días [41,48]. Otros modelos de reactores ampliamente aplicados son los reactores tipo Wetland, los cuales pueden reducir un 70% del color de los efluentes, con porcentajes de remoción de DQO y COT superiores al 88% [53]; también se reportan las algas Nostoc lincki y Oscillatoria rubescens, las cuales pueden remover hasta en un 82% colorantes ácidos [54].
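Para las secuencias de tratamiento citadas (por ejemplo, la anaerobia-aerobia), la remoción global de etapas en serie puede aproximarse, bajo el supuesto simplificador de eficiencias independientes por etapa, como uno menos el producto de las fracciones remanentes. Un boceto mínimo en Python, con valores ilustrativos:

```python
def remocion_total(eficiencias):
    """Remoción global de etapas en serie con eficiencias independientes."""
    restante = 1.0
    for e in eficiencias:
        restante *= (1.0 - e)   # fracción de color que sobrevive a cada etapa
    return 1.0 - restante

# Ejemplo ilustrativo: etapa anaerobia (80%) seguida de etapa aerobia (60%)
print(f"{remocion_total([0.80, 0.60]) * 100:.0f}% de remoción global")  # 92%
```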


Asimismo, algunos microorganismos aerobios de los géneros Bacillus y Pseudomonas presentan eficiencias de remoción sobre el 90% [55]. Los sinergismos resultantes de las interacciones microbianas (tratamientos de colorantes con la aplicación de consorcios) favorecen las cinéticas de degradación del color respecto al uso de especies individuales [56]. Los organismos aislados de suelos contaminados con colorantes pueden alcanzar cinéticas de remoción de COT y color más altas (superiores al 90%), dada su capacidad de adaptación. Los resultados permiten identificar una diversidad de especies de bacterias capaces de remover colorantes presentes en efluentes contaminados; dentro de los microorganismos estudiados, el género Bacillus sp. presentó la máxima capacidad de remoción de colorantes (100%) en 14,5 horas de tratamiento. A diferencia de los tratamientos químicos, los procesos biológicos requieren tiempos de tratamiento prolongados para obtener rendimientos efectivos en la remoción de la DQO. La degradación de colorantes puede ser incrementada con la adición de co-sustratos como la glucosa y el ácido acético, entre otros. Entre las secuencias de tratamiento biológico, la combinación de etapas anaerobias-aerobias (en su orden) fue importante para lograr degradar colorantes recalcitrantes.

2.4. Tratamientos combinados

Producto de la combinación de tecnologías físicas, biológicas y químicas se logran optimizar desde las cinéticas de remoción de color, DQO y COT, hasta los tiempos de operación y/o de residencia hidráulica de los efluentes. Los procesos combinados pueden desde reducir la generación de lodos [13] hasta favorecer el escalado de procesos a nivel industrial. Entre las tecnologías combinadas, la secuencia ozonación-UV/H2O2 puede alcanzar porcentajes de remoción de color entre el 80 y el 100% en periodos de 5 a 15 minutos, reduciendo el COT entre el 75 y el 80% para colorantes directos [57]. El método coagulación-adsorción, utilizando carbón activado y alumbre, con una concentración inicial de colorantes reactivos de 100 mg/L, logra remover cerca del 100% del color presente en el medio, aplicando dosis de coagulantes entre 250 y 350 mg/L [58]. La aplicación del sistema ozonación-ultrasonido presenta porcentajes de remoción de color superiores al 98% en periodos inferiores a 1 minuto, en el caso de algunos colorantes directos, tras la aplicación de un flujo de ozono cercano a 3,2 g/h y una densidad de potencia alrededor de 176 W/L [59], lo cual disminuye considerablemente el volumen de los lodos producidos. Entre otras tecnologías aplicadas, la combinación de las técnicas ultrasonido/peróxido de hidrógeno/grafito y ultrasonido/UV produce reducciones del 90% en la coloración del efluente durante periodos de 120 minutos, aplicando frecuencias de ultrasonido entre 20 y 817 kHz sobre concentraciones iniciales de colorantes de 300 mg/L [60]. El acoplamiento de procesos de coagulación seguidos por procesos biológicos en condiciones batch puede alcanzar valores de remoción de color del 100% para concentraciones iniciales de colorantes del orden de 600 mg/L, con tiempos de residencia hidráulica de 5 horas [44].

La secuencia de oxidación química mediante ozono y la implementación de filtros biológicos aireados de flujo ascendente puede alcanzar eficiencias de remoción de color y DQO del 97 y el 90%, respectivamente [61]. Finalmente, los procesos fenton-ultrasonido, en presencia de hierro cero valente, pueden remover en general un 99% del color aparente del medio contaminado durante 10 minutos, tras la aplicación de densidades de potencia cercanas a 120 W/L y una concentración de hierro de 1 g/L [62].

La aplicación de tratamientos combinados disminuyó los tiempos promedio de remoción de color y DQO y de la operación en general, mientras se favoreció el incremento de la escala o de los volúmenes sobre los cuales se desarrollaron los protocolos (hasta 6 litros). La combinación de tratamientos biológicos y químicos (degradación biológica y oxidación con peróxido de hidrógeno) permitió obtener una eficiencia de remoción superior al 93%, mientras que los procesos iniciales de adsorción con carbón activado más coagulación y oxidación química eliminaron en su totalidad el colorante en el medio. El estudio favorece, como pretratamientos, los procesos de adsorción o de degradación biológica, seguidos por la oxidación química (O3, H2O2, ultravioleta, fenton, etc.); esto hace posible reducir, además, la generación de lodos, maximizada en los tratamientos con coagulantes, la electrocoagulación o la actividad biológica individual. La Tabla 1 muestra los rangos de variación de la eficiencia de remoción de color (para los grupos de tecnologías de tratamiento estudiadas) de acuerdo con la clasificación de colorantes y pigmentos presentada en el capítulo 1.

Tabla 1. Eficiencias de remoción de color por grupos de colorantes.
Tipo de colorante   Tipo de tratamiento    % RC       Autores
Ácido               Biológico              80 - 100   [48,63,64]
Ácido               Físico                 90 - 99    [18,65,66]
Ácido               Físico - Químico       95 - 100   [38,67]
Ácido               Químico                100        [20,68,69]
Ácido               Químico - Biológico    90 - 100   [70,71,72]
Baño                Biológico              90 - 98    [73,74,46]
Baño                Físico                 15 - 95    [75,76,77]
Baño                Químico                50 - 98    [32,39,78]
Básico              Físico                 90 - 99    [79,80,81]
Básico              Físico - Biológico     99,2       [51]
Básico              Químico                80 - 99,9  [29,82,83]
Directo             Biológico              80 - 98    [84,85,86]
Directo             Físico                 25 - 95    [20,16,75]
Directo             Físico - Químico       90 - 100   [72,57,139]
Directo             Químico                90 - 100   [87,45,34]
Directo             Químico - Físico       99         [62]
Disperso            Físico                 >99        [88]
Disperso            Físico - Biológico     78,9       [44]
Disperso            Físico - Químico       100        [89,90]
Disperso            Químico                100        [148,36,25]
Reactivo            Biológico              65 - 92    [91,92,16]
Reactivo            Físico                 90         [15]
Reactivo            Físico - Químico       99         [93]
Reactivo            Químico                90 - 99    [94,95,96]
Reactivo            Químico - Biológico    97         [97]
Fuente propia.
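A modo de ejemplo de uso de la Tabla 1 en la preselección de tecnologías, el siguiente boceto en Python codifica una parte de sus rangos de %RC (selección parcial e ilustrativa, con claves hipotéticas) y filtra los tratamientos que alcanzan un umbral de remoción dado:

```python
# Codificación parcial e ilustrativa de la Tabla 1: (tipo, tratamiento) -> %RC
EFICIENCIAS = {
    ("ácido", "químico"): (100, 100),
    ("ácido", "biológico"): (80, 100),
    ("básico", "físico"): (90, 99),
    ("directo", "físico-químico"): (90, 100),
    ("disperso", "químico"): (100, 100),
    ("reactivo", "químico"): (90, 99),
}

def tratamientos_candidatos(tipo_colorante, rc_minimo=90):
    """Tratamientos cuyo %RC mínimo reportado alcanza el umbral pedido."""
    return [trat for (tipo, trat), (rc_min, _) in EFICIENCIAS.items()
            if tipo == tipo_colorante and rc_min >= rc_minimo]

print(tratamientos_candidatos("ácido"))     # ['químico']
print(tratamientos_candidatos("reactivo"))  # ['químico']
```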


De acuerdo con Quintero [98], la elección de una tecnología específica para el tratamiento de colorantes se realiza en función de la calidad del agua efluente, del uso final del recurso y de los costos, ventajas y desventajas de las tecnologías.

3. Conclusiones

Dentro de los tratamientos físicos aplicados, las condiciones ácidas aumentaron, en general, la remoción de los colorantes tipo ácido, reactivo y disperso. La adsorción de colorantes básicos estuvo influenciada por condiciones básicas del medio. Estos efectos se asocian a la protonación y desprotonación de grupos funcionales que incrementan o disminuyen las interacciones electrostáticas con el material adsorbente. Las técnicas de adsorción pueden aprovechar diversos tipos de "residuos" agroindustriales, entre otros productos, alcanzando cinéticas de decoloración superiores al 99% y reduciendo los costos de operación.

En el grupo de tecnologías más destacadas para el tratamiento químico de colorantes, los procesos fotocatalíticos y fenton/UV alcanzaron la máxima cinética de remoción de color (100%), en algunos casos en periodos menores a 20 minutos, con remociones de COT y DQO del 80 y el 100%, respectivamente. Entre los factores clave para potencializar los rendimientos de las reacciones fenton, las concentraciones de Fe (II) y de peróxido de hidrógeno fueron importantes para obtener cinéticas de remoción de color significativas. Estas eficiencias pueden ser incrementadas por la generación adicional de radicales hidroxilo resultantes del tratamiento con irradiación UV.

En los tratamientos biológicos, los resultados permiten identificar una diversidad de especies de bacterias capaces de remover colorantes presentes en efluentes contaminados; dentro de los microorganismos estudiados, el género Bacillus sp. presentó la máxima capacidad de remoción de colorantes (100%) en 14,5 horas de tratamiento. A diferencia de los tratamientos químicos, los procesos biológicos requieren tiempos de tratamiento prolongados para obtener rendimientos efectivos en la remoción de la DQO. La degradación de colorantes puede ser incrementada con la adición de co-sustratos como la glucosa y el ácido acético, entre otros. Entre las secuencias de tratamiento biológico, la combinación de etapas anaerobias-aerobias (en su orden) se convierte en una alternativa para la reducción de los tiempos de operación, manteniendo valores de remoción de color y DQO superiores al 90%.

La evidencia experimental determinó que las características insolubles de los colorantes dispersos y tipo baño favorecen la acción de tratamientos asociados a tecnologías como la coagulación y la electrocoagulación, a diferencia de los colorantes solubles (ácidos y básicos). Las combinaciones de tecnologías a partir de ozono y ultrasonido permitieron remover más del 98% del color directo presente en las muestras estudiadas durante 1 minuto de operación. Estas tecnologías se convierten en un método eficaz para la degradación adicional de la DQO y el COT, con la disminución del volumen de lodos generados, en tiempos mínimos de residencia hidráulica. Además de la combinación anterior, el tratamiento instantáneo fenton/ultrasonido alcanzó la

remoción del 99% del color presente durante 10 minutos de operación. De acuerdo a los resultados presentados por los diferentes autores respecto al tratamiento de los diferentes tipos de colorantes y pigmentos existentes, se presentan como grupo de tecnologías más importantes y eficientes para la decoloración, destrucción y mineralización de sustancias generadoras de color en efluentes: los procesos de adsorción, la fotocatálisis, el tratamiento fenton/UV, el sistema ozono – ultrasonido y el tratamiento biológico anaerobio –aerobia. La potencialización y adaptación de secuencias de tratamiento de colorantes, aplicando algunas de las tecnologías presentadas anteriormente, puede convertirse en una alternativa eficiente para reducir el vertimiento de efluentes coloreados por parte de los sectores productivos del Valle de Aburrá – Medellín. Agradecimientos Los autores agradecemos a la Facultad de Minas de la Universidad Nacional de Colombia Sede Medellín; al Laboratorio de Hidráulica y al Ingeniero Luis Fernando Ospina Herrera. References [1]

[1] Eriksson, E., Christensen, N., Ejbye Schmidt, J. and Ledin, A., Potential priority pollutants in sewage sludge. Desalination, 226 (1-3), pp. 371-388, 2008. DOI: 10.1016/j.desal.2007.03.019
[2] Deblonde, T., Cossu-Leguille, C. and Hartemann, P., Emerging pollutants in wastewater: A review of the literature. International Journal of Hygiene and Environmental Health, 214 (6), pp. 442-448, 2011. DOI: 10.1016/j.ijheh.2011.08.002
[3] Cámara de Comercio de Medellín para Antioquia, Las 500 empresas más grandes de Antioquia, 2011. RAED, Revista Antioqueña de Economía y Desarrollo, Ed. 4, Oct. 2012.
[4] Doerner, M., Los materiales de pintura y su empleo en el arte, 6th ed., Barcelona, Reverté S.A., 425 P., 1998.
[5] Saratale, R., Saratale, G., Chang, J. and Govindwar, S., Bacterial decolorization and degradation of Azo dyes: A review. Journal of the Taiwan Institute of Chemical Engineers, 42 (1), pp. 138-157, 2011. DOI: 10.1016/j.jtice.2010.06.006
[6] Soon, A.N. and Hameed, B.H., Heterogeneous catalytic treatment of synthetic dyes in aqueous media using Fenton and photo-assisted Fenton process. Desalination, 269 (1-3), pp. 1-294, 2011. DOI: 10.1016/j.desal.2010.11.002
[7] Eichlerová, I., Homolka, L. and Nerud, F., Decolorization of high concentrations of synthetic dyes by the white rot fungus Bjerkandera adusta strain CCBAS 232. Dyes and Pigments, 75 (1), pp. 38-44, 2007. DOI: 10.1016/j.dyepig.2006.05.008
[8] Hosseini, K.E., Alavi, M.R. and Hashemi, S.H., Evaluation of integrated anaerobic/aerobic fixed-bed sequencing batch biofilm reactor for decolorization and biodegradation of azo dye Acid Red 18: Comparison of using two types of packing media. Bioresource Technology, 127 (1), pp. 415-42, 2013. DOI: 10.1016/j.biortech.2012.10.003
[9] Pandey, A., Singh, P. and Iyengar, L., Bacterial decolorization and degradation of Azo dyes. International Biodeterioration & Biodegradation, 59 (2), pp. 73-84, 2007. DOI: 10.1016/j.ibiod.2006.08.006
[10] Han, F., Subba, V., Srinivasan, M., Rajarathnam, D. and Naidu, R., Tailored titanium dioxide photocatalysts for the degradation of organic dyes in wastewater treatment: A review. Applied Catalysis A: General, 359 (1-2), pp. 25-40, 2009. DOI: 10.1016/j.apcata.2009.02.043
[11] Universidad de Oxford - Complutense, Diccionario de Química, 1st ed., Madrid, Complutense, 160 P., 1999.
[12] Robinson, T., McMullan, G., Marchant, R. and Nigam, P., Remediation of dyes in textile effluent: A critical review on current treatment technologies with a proposed alternative. Bioresource Technology, 77 (3), pp. 247-255, 2001. DOI: 10.1016/S0960-8524(00)00080-8
[13] Eren, Z., Ultrasound as a basic and auxiliary process for dye remediation: A review. Journal of Environmental Management, 104 (1), pp. 127-14, 2012. DOI: 10.1016/j.jenvman.2012.03.028
[14] Cai, J., Cui, L., Wang, Y. and Liu, C., Effect of functional groups on sludge for biosorption of reactive dyes. Journal of Environmental Sciences, 21 (4), pp. 534-53, 2009. DOI: 10.1016/S1001-0742(08)62304-9
[15] Momenzadeh, H., Tehrani-Bagha, A.R., Khosravi, A., Gharanjig, K. and Holmberg, K., Reactive dye removal from wastewater using a chitosan nanodispersion. Desalination, 271 (1-3), pp. 225-230, 2011. DOI: 10.1016/j.desal.2010.12.036
[16] Wang, L. and Yan, G., Adsorptive removal of direct yellow 161 dye from aqueous solution using bamboo charcoals activated with different chemicals. Desalination, 274 (1-3), pp. 81-90, 2011. DOI: 10.1016/j.desal.2011.01.082
[17] Janoš, P., Coskun, S., Pilařová, V. and Rejnek, J., Removal of basic (Methylene Blue) and acid (Egacid Orange) dyes from waters by sorption on chemically treated wood shavings. Bioresource Technology, 100 (3), pp. 1450-1453, 2009. DOI: 10.1016/j.biortech.2008.06.069
[18] Sun, X., Wang, S., Cheng, W., Fan, M., Tian, B., Gao, B. and Li, X., Enhancement of acidic dye biosorption capacity on poly(ethylenimine) grafted anaerobic granular sludge. Journal of Hazardous Materials, 189 (1-2), pp. 27-33, 2011. DOI: 10.1016/j.jhazmat.2011.01.028
[19] Al-Degs, Y.S., El-Barghouthi, M.I., El-Sheikh, A.H. and Walker, G.M., Effect of solution pH, ionic strength, and temperature on adsorption behavior of reactive dyes on activated carbon. Dyes and Pigments, 77 (1), pp. 16-23, 2008. DOI: 10.1016/j.dyepig.2007.03.001
[20] El-Ashtoukhy, S.Z., Loofa egyptiaca as a novel adsorbent for removal of direct blue dye from aqueous solution. Journal of Environmental Management, 90 (8), pp. 2755-2761, 2009. DOI: 10.1016/j.jenvman.2009.03.005
[21] Luo, X., Zhan, Y., Huang, Y., Yang, L., Tu, X. and Luo, S., Removal of water-soluble acid dyes from water environment using a novel magnetic molecularly imprinted polymer. Journal of Hazardous Materials, 187 (1-3), pp. 274-282, 2011. DOI: 10.1016/j.jhazmat.2011.01.009
[22] Hamzeh, Y., Ashori, A., Azadeh, E. and Abdulkhani, A., Removal of Acid Orange 7 and Remazol Black 5 reactive dyes from aqueous solutions using a novel biosorbent. Materials Science and Engineering: C, 32 (6), pp. 1394-1400, 2012. DOI: 10.1016/j.msec.2012.04.015
[23] Gerçel, Ö., Gerçel, H.F., Koparal, A.S. and Öğütveren, Ü.B., Removal of disperse dye from aqueous solution by novel adsorbent prepared from biomass plant material. Journal of Hazardous Materials, 160 (2-3), pp. 668-674, 2008. DOI: 10.1016/j.jhazmat.2008.03.039
[24] El-Boujaady, H., El-Rhilassi, A., Bennani-Ziatni, M., El-Hamri, R., Taitai, A. and Lacout, J.L., Removal of a textile dye by adsorption on synthetic calcium phosphates. Desalination, 275 (1-3), pp. 10-16, 2011.
[25] Arslan, I., Treatability of a simulated disperse dye-bath by ferrous iron coagulation, ozonation and ferrous iron-catalyzed ozonation. Journal of Hazardous Materials, 85 (3), pp. 229-24, 2001. DOI: 10.1016/S0304-3894(01)00232-1
[26] Oguz, E., Keskinler, B. and Çelik, Z., Ozonation of aqueous Bomaplex Red CR-L dye in a semi-batch reactor. Dyes and Pigments, 64 (2), pp. 101-108, 2005. DOI: 10.1016/j.dyepig.2004.04.009
[27] Pachhade, K., Sandhya, S. and Swaminathan, K., Ozonation of reactive dye, Procion red MX-5B catalyzed by metal ions. Journal of Hazardous Materials, 167 (1-3), pp. 313-318, 2009. DOI: 10.1016/j.jhazmat.2008.12.126
[28] Kim, T., Park, C., Shin, E. and Kim, S., Decolorization of disperse and reactive dyes by continuous electrocoagulation process. Desalination, 150 (2), pp. 165-175, 2002. DOI: 10.1016/S0011-9164(02)00941-4
[29] Daneshvar, N., Oladegaragoze, A. and Djafarzadeh, N., Decolorization of basic dye solutions by electrocoagulation: An investigation of the effect of operational parameters. Journal of Hazardous Materials, 129 (1-3), pp. 116-122, 2006. DOI: 10.1016/j.jhazmat.2005.08.033
[30] Kobya, M., Demirbas, E., Can, O.T. and Bayramoglu, M., Treatment of levafix orange textile dye solution by electrocoagulation. Journal of Hazardous Materials, 132 (2-3), pp. 183-188, 2006. DOI: 10.1016/j.jhazmat.2005.07.084
[31] Merzouk, B., Gourich, B., Sekki, A., Madani, K., Vial, C. and Barkaoui, M., Studies on the decolorization of textile dye wastewater by continuous electrocoagulation process. Chemical Engineering Journal, 149 (1-3), pp. 207-214, 2009. DOI: 10.1016/j.cej.2008.10.018
[32] Kariyajjanavar, P., Narayana, J. and Arthoba, Y., Degradation of textile dye C.I. Vat Black 27 by electrochemical method by using carbon electrodes. Journal of Environmental Chemical Engineering, In Press, Corrected Proof, 2013.
[33] Sengil, I.A. and Ozacar, M., The decolorization of C.I. Reactive Black 5 in aqueous solution by electrocoagulation using sacrificial iron electrodes. Journal of Hazardous Materials, 161 (2-3), pp. 1369-1376, 2009. DOI: 10.1016/j.jhazmat.2008.04.100
[34] Zodi, S., Merzouk, B., Potier, O., Lapicque, F. and Leclerc, J., Direct red 81 dye removal by a continuous flow electrocoagulation/flotation reactor. Separation and Purification Technology, 108 (1), pp. 215-222, 2013. DOI: 10.1016/j.seppur.2013.01.052
[35] Arslan-Alaton, İ., Kabdaşlı, I., Vardar, B. and Tünay, O., Electrocoagulation of simulated reactive dyebath effluent with aluminum and stainless steel electrodes. Journal of Hazardous Materials, 164 (2-3), pp. 1586-1594, 2009. DOI: 10.1016/j.jhazmat.2008.09.004
[36] Paschoal, F.M.M., Anderson, M.A. and Zanoni, M.V.B., The photoelectrocatalytic oxidative treatment of textile wastewater containing disperse dyes. Desalination, 249 (3), pp. 1350-1355, 2009. DOI: 10.1016/j.desal.2009.06.024
[37] Chafi, M., Gourich, B., Essadki, A.H., Vial, C. and Fabregat, A., Comparison of electrocoagulation using iron and aluminium electrodes with chemical coagulation for the removal of a highly soluble acid dye. Desalination, 281 (1), pp. 285-292, 2011. DOI: 10.1016/j.desal.2011.08.004
[38] Chacón, J.M., Leal, M.T., Sánchez, M. and Bandala, E.R., Solar photocatalytic degradation of Azo-dyes by photo-Fenton process. Dyes and Pigments, 69 (3), pp. 144-150, 2006. DOI: 10.1016/j.dyepig.2005.01.020
[39] Liu, R., Chiu, H., Shiau, C., Yeh, R. and Hung, Y., Degradation and sludge production of textile dyes by Fenton and photo-Fenton processes. Dyes and Pigments, 73 (1), pp. 1-6, 2007. DOI: 10.1016/j.dyepig.2005.10.002
[40] Tekbaş, M., Yatmaz, H.C. and Bektaş, N., Heterogeneous photo-Fenton oxidation of reactive Azo dye solutions using iron exchanged zeolite as a catalyst. Microporous and Mesoporous Materials, 115 (3), pp. 594-602, 2008. DOI: 10.1016/j.micromeso.2008.03.001
[41] Ay, F., Catalkaya, E.C. and Kargi, F., A statistical experiment design approach for advanced oxidation of Direct Red Azo-dye by photo-Fenton treatment. Journal of Hazardous Materials, 162 (1), pp. 230-236, 2009.
[42] Orozco, S.L., Bandala, E.R., Arancibia-Bulnes, C.A., Serrano, B., Suárez-Parra, R. and Hernández-Pérez, I., Effect of iron salt on the color removal of water containing the Azo-dye reactive blue 69 using photo-assisted Fe(II)/H2O2 and Fe(III)/H2O2 systems. Journal of Photochemistry and Photobiology A: Chemistry, 198 (2-3), pp. 144-149, 2008. DOI: 10.1016/j.jphotochem.2008.03.001
[43] Shi, B., Li, G., Wang, D., Feng, C. and Tang, H., Removal of direct dyes by coagulation: The performance of preformed polymeric aluminum species. Journal of Hazardous Materials, 143 (1-2), pp. 567-574, 2007. DOI: 10.1016/j.jhazmat.2006.09.076
[44] El-Gohary, F. and Tawfik, A., Decolorization and COD reduction of disperse and reactive dyes wastewater using chemical-coagulation followed by sequential batch reactor (SBR) process. Desalination, 249 (3), pp. 1159-1164, 2009. DOI: 10.1016/j.desal.2009.05.010
[45] Sohrabi, M.R. and Ghavami, M., Photocatalytic degradation of Direct Red 23 dye using UV/TiO2: Effect of operational parameters. Journal of Hazardous Materials, 153 (3), pp. 1235-1239, 2011. DOI: 10.1016/j.jhazmat.2007.09.114
[46] Manu, B. and Chaudhari, S., Decolorization of indigo and azo dyes in semicontinuous reactors with long hydraulic retention time. Process Biochemistry, 38 (8), pp. 1213-1221, 2003. DOI: 10.1016/S0032-9592(02)00291-1
[47] Méndez, D., Omil, F. and Lema, J.M., Anaerobic treatment of Azo dye Acid Orange 7 under batch conditions. Enzyme and Microbial Technology, 36 (2-3), pp. 264-272, 2005. DOI: 10.1016/j.enzmictec.2004.08.039
[48] Wang, X., Cheng, X. and Sun, D., Interaction in anaerobic biodecolorization of mixed azo dyes of Acid Red 1 and Reactive Black 5 under batch and continuous conditions. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 379 (1-3), pp. 127-135, 2011. DOI: 10.1016/j.colsurfa.2010.11.065
[49] Chagas, E.P. and Durrant, L.R., Decolorization of Azo dyes by Phanerochaete chrysosporium and Pleurotus sajorcaju. Enzyme and Microbial Technology, 29 (8-9), pp. 473-477, 2001. DOI: 10.1016/S0141-0229(01)00405-7
[50] Chu, H.C. and Chen, K.M., Reuse of activated sludge biomass: I. Removal of basic dyes from wastewater by biomass. Process Biochemistry, 37 (6), pp. 595-600, 2002. DOI: 10.1016/S0032-9592(01)00234-5
[51] Ghoreishi, S.M. and Haghighi, R., Chemical catalytic reaction and biological oxidation for treatment of non-biodegradable textile effluent. Chemical Engineering Journal, 95 (1-3), pp. 163-169, 2003. DOI: 10.1016/S1385-8947(03)00100-1
[52] Khehra, M., Saini, H., Sharma, D. and Chadha, B., Biodegradation of Azo dye C.I. Acid Red 88 by an anoxic-aerobic sequential bioreactor. Dyes and Pigments, 70 (1), pp. 1-7, 2006. DOI: 10.1016/j.dyepig.2004.12.021
[53] Ojstršek, A., Fakin, D. and Vrhovšek, D., Residual dyebath purification using a system of constructed wetland. Dyes and Pigments, 74 (3), pp. 503-507, 2007. DOI: 10.1016/j.dyepig.2006.10.007
[54] El-Sheekh, M.M., Gharieb, M.M. and Abou-El-Souod, G.W., Biodegradation of dyes by some green algae and cyanobacteria. International Biodeterioration & Biodegradation, 63 (6), pp. 699-704, 2009.
[55] Dave, S. and Dave, R., Isolation and characterization of Bacillus thuringiensis for Acid red 119 dye decolourisation. Bioresource Technology, 100 (1), pp. 249-253, 2009. DOI: 10.1016/j.biortech.2008.05.019
[56] Lade, H., Waghmode, T., Kadam, A. and Govindwar, S., Enhanced biodegradation and detoxification of disperse Azo dye Rubine GFL and textile industry effluent by defined fungal-bacterial consortium. International Biodeterioration & Biodegradation, 72 (1), pp. 94-107, 2012. DOI: 10.1016/j.ibiod.2012.06.001
[57] Shu, H., Degradation of dyehouse effluent containing C.I. Direct Blue 199 by processes of ozonation, UV/H2O2 and in sequence of ozonation with UV/H2O2. Journal of Hazardous Materials, 133 (1-3), pp. 92-98, 2006. DOI: 10.1016/j.jhazmat.2005.09.056
[58] Lee, J., Choi, S., Thiruvenkatachari, R., Shim, W. and Moon, H., Evaluation of the performance of adsorption and coagulation processes for the maximum removal of reactive dyes. Dyes and Pigments, 69 (3), pp. 196-203, 2006. DOI: 10.1016/j.dyepig.2005.03.008
[59] Song, S., Ying, H., He, Z. and Chen, J., Mechanism of decolorization and degradation of CI Direct Red 23 by ozonation combined with sonolysis. Chemosphere, 66 (9), pp. 1782-1788, 2007. DOI: 10.1016/j.chemosphere.2006.07.090
[60] Fan, L., Zhou, Y., Yang, W., Chen, G. and Yang, F., Electrochemical degradation of aqueous solution of Amaranth azo dye on ACF under potentiostatic model. Dyes and Pigments, 76 (2), pp. 440-446, 2008. DOI: 10.1016/j.dyepig.2006.09.013
[61] Lu, X., Yang, B., Chen, J. and Sun, R., Treatment of wastewater containing Azo dye reactive brilliant red X-3B using sequential ozonation and upflow biological aerated filter process. Journal of Hazardous Materials, 161 (1), pp. 241-245, 2009. DOI: 10.1016/j.jhazmat.2008.03.077
[62] Weng, C.H., Lin, Y.T., Chang, C.K. and Liu, N., Decolourization of direct blue 15 by Fenton/ultrasonic process using a zero-valent iron aggregate catalyst. Ultrasonics Sonochemistry, 20 (3), pp. 970-977, 2013. DOI: 10.1016/j.ultsonch.2012.09.014
[63] Buitron, G., Quezada, M. and Moreno, G., Aerobic degradation of the Azo dye acid red 151 in a sequencing batch biofilter. Bioresource Technology, 92 (2), pp. 143-149, 2004. DOI: 10.1016/j.biortech.2003.09.001
[64] Mohan, S.V., Rao, N.C. and Sarma, N.C., Simulated acid azo dye (Acid black 210) wastewater treatment by periodic discontinuous batch mode operation under anoxic-aerobic-anoxic microenvironment conditions. Ecological Engineering, 31 (4), pp. 242-250, 2007. DOI: 10.1016/j.ecoleng.2007.07.003
[65] Kousha, M., Daneshvar, E., Sohrabi, M., Jokar, M. and Bhatnagar, A., Adsorption of acid orange II dye by raw and chemically modified brown macroalga Stoechospermum marginatum. Chemical Engineering Journal, 192 (1), pp. 67-76, 2012. DOI: 10.1016/j.cej.2012.03.057
[66] Greluk, M. and Hubicki, Z., Efficient removal of Acid Orange 7 dye from water using the strongly basic anion exchange resin Amberlite IRA-958. Desalination, 278 (1-3), pp. 219-226, 2011.
[67] Monteagudo, J.M., Duran, A. and Lopez, A.C., Homogeneous ferrioxalate-assisted solar photo-Fenton degradation of Orange II aqueous solutions. Applied Catalysis B: Environmental, 83 (1-2), pp. 46-55, 2008. DOI: 10.1016/j.apcatb.2008.02.002
[68] Chakrabortty, D. and Gupta, S.S., Photo-catalytic decolourisation of toxic dye with N-doped Titania: A case study with Acid Blue 25. Journal of Environmental Sciences, 25 (5), pp. 1034-1043, 2013. DOI: 10.1016/S1001-0742(12)60108-9
[69] Lackey, L.W., Mines, R. and McCreanor, P., Ozonation of acid yellow 17 dye in a semi-batch bubble column. Journal of Hazardous Materials, 138 (2), pp. 357-362, 2006. DOI: 10.1016/j.jhazmat.2006.05.116
[70] Vajnhandl, S. and Le-Marechal, A.M., Case study of the sonochemical decolouration of textile azo dye Reactive Black 5. Journal of Hazardous Materials, 141 (1), pp. 329-335, 2007. DOI: 10.1016/j.jhazmat.2006.07.005
[71] Zhao, H.Z., Sun, Y., Xu, L.N. and Ni, J., Removal of Acid Orange 7 in simulated wastewater using a three-dimensional electrode reactor: Removal mechanisms and dye degradation pathway. Chemosphere, 78 (1), pp. 46-51, 2010. DOI: 10.1016/j.chemosphere.2009.10.034
[72] Arslan-Alaton, I., Gursoy, B.H. and Schmidt, J.E., Advanced oxidation of acid and reactive dyes: Effect of Fenton treatment on aerobic, anoxic and anaerobic processes. Dyes and Pigments, 78 (2), pp. 117-130, 2008. DOI: 10.1016/j.dyepig.2007.11.001
[73] Radha, K.V., Regupathi, I., Arunagiri, A. and Murugesan, T., Decolorization studies of synthetic dyes using Phanerochaete chrysosporium and their kinetics. Process Biochemistry, 40 (10), pp. 3337-3345, 2005. DOI: 10.1016/j.procbio.2005.03.033
[74] Senthilkumar, S., Perumalsamy, M. and Prabhu, H.J., Decolourization potential of white-rot fungus Phanerochaete chrysosporium on synthetic dye bath effluent containing Amido black 10B. Journal of Saudi Chemical Society, 18 (6), pp. 845-853, 2014. DOI: 10.1016/j.jscs.2011.10.010
[75] Mishra, A. and Bajpai, M., The flocculation performance of Tamarindus mucilage in relation to removal of vat and direct dyes. Bioresource Technology, 97 (8), pp. 1055-1059, 2006.
[76] Smelcerovic, M., Dordevic, D., Novakovic, M. and Mizdrakovic, M., Decolorization of a textile vat dye by adsorption on waste ash. Journal of the Serbian Chemical Society, 75 (6), pp. 855-872, 2010. DOI: 10.2298/JSC090724057S
[77] Goksen, C., Yetis, U. and Yilmaz, L., Membrane based strategies for the pre-treatment of acid dye bath wastewaters. Journal of Hazardous Materials, 135 (1-3), pp. 423-430, 2006.
[78] Schrank, S.G., dos Santos, J.N.R., Souza, D.S. and Souza, E.E.S., Decolourisation effects of Vat Green 01 textile dye and textile wastewater using H2O2/UV process. Journal of Photochemistry and Photobiology A: Chemistry, 186 (2-3), pp. 125-129, 2007. DOI: 10.1016/j.jphotochem.2006.08.001
[79] Fernandez, M.E., Nunell, G.V., Bonelli, P.R. and Cukierman, A.L., Effectiveness of Cupressus sempervirens cones as biosorbent for the removal of basic dyes from aqueous solutions in batch and dynamic modes. Bioresource Technology, 101 (24), pp. 9500-9507, 2010. DOI: 10.1016/j.biortech.2010.07.102
[80] Djilali, Y., Elandaloussi, E.H., Aziz, A. and Ménorval, L.C., Alkaline treatment of timber sawdust: A straightforward route toward effective low-cost adsorbent for the enhanced removal of basic dyes from aqueous solutions. Journal of Saudi Chemical Society, [online], 2012. Available at: http://www.sciencedirect.com/science/article/pii/S1319610312001676 DOI: 10.1016/j.jscs.2012.10.013
[81] Kiakhani, S., Arami, M. and Gharanjig, K., Preparation of chitosan-ethyl acrylate as a biopolymer adsorbent for basic dyes removal from colored solutions. Journal of Environmental Chemical Engineering, 1 (3), pp. 406-415, 2013. DOI: 10.1016/j.jece.2013.06.001
[82] Awad, H.S. and Galwa, N.A., Electrochemical degradation of Acid Blue and Basic Brown dyes on Pb/PbO2 electrode in the presence of different conductive electrolyte and effect of various operating factors. Chemosphere, 61 (9), pp. 1327-1335, 2005. DOI: 10.1016/j.chemosphere.2005.03.054
[83] Ghaly, M.Y., Farah, J.Y. and Fathy, A.M., Enhancement of decolorization rate and COD removal from dyes containing wastewater by the addition of hydrogen peroxide under solar photocatalytic oxidation. Desalination, 217 (1-3), pp. 74-84, 2007. DOI: 10.1016/j.desal.2007.01.013
[84] Abd El-Rahim, W.M., El-Ardy, O.A.M. and Mohammad, F.H.A., The effect of pH on bioremediation potential for the removal of direct violet textile dye by Aspergillus niger. Desalination, 249 (3), pp. 1206-1211, 2009. DOI: 10.1016/j.desal.2009.06.037
[85] Asgher, M., Batool, S., Bhatti, H.N., Noreen, R., Rahman, S.U. and Javaid Asad, M., Laccase mediated decolorization of vat dyes by Coriolus versicolor IBL-04. International Biodeterioration & Biodegradation, 62 (4), pp. 465-470, 2008. DOI: 10.1016/j.ibiod.2008.05.003
[86] Sirianuntapiboon, S. and Srisornsak, P., Removal of disperse dyes from textile wastewater using bio-sludge. Bioresource Technology, 98 (5), pp. 1057-1066, 2007. DOI: 10.1016/j.biortech.2006.04.026
[87] Ertugay, N., Removal of COD and color from Direct Blue 71 Azo dye wastewater by Fenton's oxidation: Kinetic study. Arabian Journal of Chemistry, In Press, Corrected Proof, 2013.
[88] Hasnain Isa, M., Siew-Lang, L., Asaari, F.A.H., Aziz, H.A., Azam-Ramli, N. and Dhas, J.P.A., Low cost removal of disperse dyes from aqueous solution using palm ash. Dyes and Pigments, 74 (2), pp. 446-453, 2007. DOI: 10.1016/j.dyepig.2006.02.025
[89] Osugi, M.E., Rajeshwar, K., Ferraz, E.R.A., de Oliveira, D.P., Araújo, Â.R. and Zanoni, M.V.B., Comparison of oxidation efficiency of disperse dyes by chemical and photoelectrocatalytic chlorination and removal of mutagenic activity. Electrochimica Acta, 54 (7), pp. 2086-2093, 2009. DOI: 10.1016/j.electacta.2008.07.015
[90] Salazar, R., Garcia-Segura, S., Ureta-Zañartu, M.S. and Brillas, E., Degradation of disperse azo dyes from waters by solar photoelectro-Fenton. Electrochimica Acta, 56 (18), pp. 6371-6379, 2011. DOI: 10.1016/j.electacta.2011.05.021
[91] Saba, B., Khalid, A., Nazir, A., Kanwal, H. and Mahmood, T., Reactive black-5 Azo dye treatment in suspended and attach growth sequencing batch bioreactor using different co-substrates. International Biodeterioration & Biodegradation, In Press, Corrected Proof, 2013.
[92] Libra, J.A., Borchert, M., Vigelahn, L. and Storm, T., Two stage biological treatment of a diazo reactive textile dye and the fate of the dye metabolites. Chemosphere, 56 (2), pp. 167-180, 2004. DOI: 10.1016/j.chemosphere.2004.02.012
[93] Kim, T., Park, C., Yang, J. and Kim, S., Comparison of disperse and reactive dye removals by chemical coagulation and Fenton oxidation. Journal of Hazardous Materials, 112 (1-2), pp. 95-103, 2004. DOI: 10.1016/j.jhazmat.2004.04.008
[94] Katsumata, H., Koike, S., Kaneco, S., Suzuki, T. and Ohta, K., Degradation of Reactive Yellow 86 with photo-Fenton process driven by solar light. Journal of Environmental Sciences, 22 (9), pp. 1455-1461, 2010. DOI: 10.1016/S1001-0742(09)60275-8
[95] Huang, Y., Tsai, S., Huang, Y. and Chen, C., Degradation of commercial Azo dye reactive Black B in photo/ferrioxalate system. Journal of Hazardous Materials, 140 (1-2), pp. 382-388, 2007.
[96] Sengil, I.A. and Ozacar, M., The decolorization of C.I. Reactive Black 5 in aqueous solution by electrocoagulation using sacrificial iron electrodes. Journal of Hazardous Materials, 161 (2-3), pp. 1369-1376, 2009. DOI: 10.1016/j.jhazmat.2008.04.100
[97] Lu, X., Yang, B., Chen, J. and Sun, R., Treatment of wastewater containing Azo dye reactive brilliant red X-3B using sequential ozonation and upflow biological aerated filter process. Journal of Hazardous Materials, 161 (1), pp. 241-245, 2009. DOI: 10.1016/j.jhazmat.2008.03.077
[98] Quintero, L. and Cardona, S., Technologies for the decolorization of dyes: Indigo and indigo carmine. DYNA, 77 (162), pp. 371-386, 2010.

L.F. Barrios-Ziolo received his BSc. in Biological Engineering in 2014 and is currently studying for an MSc. in Hydraulic Resources. Since 2013 he has worked on processes for the treatment of wastewater and hazardous waste. He is currently a young researcher.

L.F. Gaviria-Restrepo received her BSc. in Biological Engineering in 2012 and is currently studying for an MSc. in Environment and Development at the Universidad Nacional de Colombia, Sede Medellín. She is currently a young researcher.

E.A. Agudelo received his BSc. in Chemical Engineering in 1998 and an MSc. in Environment and Development from the Universidad Nacional de Colombia, Sede Medellín, in 2010, and is currently studying for a PhD in Hydraulic Resources. From 1998 to 2007 he worked for different companies on chemical processes and the treatment of wastewater and hazardous waste. He is currently a part-time professor at the Corporación Universitaria Lasallista, Medellín, Colombia. His research interests include wastewater treatment, hazardous waste and process design.

S.A. Cardona-Gallo received his BSc. in Sanitary Engineering in 1997, an MSc. in Environmental Engineering from the National Autonomous University of Mexico in 2000, a PhD in Environmental Engineering from the same university in 2004, and completed postdoctoral studies in Environmental Engineering at Rice University in 2012. He has been a professor in the Departamento de Geociencias y Medio Ambiente, Facultad de Minas, Universidad Nacional de Colombia, Sede Medellín, Colombia, since 2005. His research interests are water quality, soil quality, remediation, bioremediation, hazardous waste and process design.



Compromise solutions in mining method selection - case study in Colombian coal mining Jorge Iván Romero-Gélvez a, Félix Antonio Cortes-Aldana b & Giovanni Franco-Sepúlveda c

a Escuela de Ciencias Exactas e Ingeniería, Universidad Sergio Arboleda, Bogotá, Colombia. jorge.romero@usa.edu.co
b Universidad Nacional de Colombia, sede Bogotá, Colombia. facortesa@unal.edu.co
c Universidad Nacional de Colombia, sede Medellín, Colombia. gfranco@unal.edu.co

Received: April 7th, 2014. Received in revised form: January 17th, 2015. Accepted: March 25th, 2015.

Abstract
The purpose of this paper is to present a quantitative approach to the selection of the mining extraction method by developing it as a discrete multicriteria decision-making (MCDM) problem; this approach is intended to support the mine planning and design process. Selecting the extraction method is a discrete multicriteria decision problem in which decision makers have had difficulty assigning a weight to each criterion. To solve this problem, this article proposes the ENTROPY method. The subjectivity inherent to the problem is handled with the VIKOR method, which yields a compromise alternative as its result. The methodology proposed in this article is applied to a coal deposit located on the western side of Cerro Tasajero in Norte de Santander, Colombia.
Keywords: Multicriteria decision analysis (MCDA); mining method selection; ENTROPY; VIKOR.

Soluciones de compromiso en la selección del método extractivo minero – caso de estudio en minería de carbón colombiana Resumen El propósito de este artículo es presentar un enfoque cuantitativo para la selección del método extractivo minero mediante el desarrollo metodológico de un problema de toma de decisión multicriterio discreta (DMD), este enfoque se propone para soportar el proceso de planeación y diseño minero. Seleccionar el método extractivo es uno de los problemas de Decisión Multicriterio Discreta (DMD) donde los decisores han tenido problemas en la asignación de peso a cada criterio. Para resolver este problema, este artículo propone el método de la ENTROPIA. El presente escrito quiere manejar la subjetividad inherente a esta problemática mediante el uso del método VIKOR, el cual arroja como resultado una alternativa de compromiso. La metodología propuesta en este artículo se aplica en yacimiento de carbón localizado en el costado occidental del Cerro Tasajero en Norte de Santander, Colombia. Palabras clave: Análisis de decisión multicriterio (MCDA), selección del método extractivo, ENTROPIA, VIKOR

1. Introduction

The mining method selection problem becomes the most important aspect of a mining operation, since the method chosen must best fit the unique criteria of each deposit, such as its spatial characteristics and its geological, hydrogeological and geotechnical conditions, together with other considerations such as economic, technological and environmental factors. These considerations, also called criteria, are usually in conflict, and several stakeholders have an interest in one or more of them. In mining method selection the decision maker is usually the mining engineer and, as can be seen in the literature review in [1], the methods developed so far do not take into account a large part of the extensive number of criteria that could be considered. The literature has, however, identified five broad areas into which most criteria can be grouped; according to [2] these are: 1. spatial characteristics of the deposit; 2. geological and hydrogeological conditions; 3. economic considerations; 4. technological factors; and 5. environmental considerations.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 127-136. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.42966



Mining can take place at the surface or below it, depending on the depth of the deposit and other technical parameters, so extraction is mainly classified into open-pit and underground mining, each with different methods corresponding to particular parameters; underground methods are used when the depth of the deposit is too great to reach by open-pit extraction [2]. The large number of criteria that may be considered when selecting an extraction method makes this a rather complex decision for the decision maker.

Mining method selection is one of humanity's oldest selection problems, mining being an activity thousands of years old. The most relevant scientific literature begins with Boshkov and Wright in 1973 [3], who proposed one of the first qualitative classification schemes for selecting extraction methods. A couple of years later, Morrison (1976) [4] proposed a classification system that divides underground mining into three groups based on ground conditions, assigning to each the type of support required. Laubscher (1981) [5] proposed a selection methodology for the underground extraction method based on the rock mass rating (RMR) classification system. The first quantitative selection approach came in 1981, when David E. Nicholas [6] formulated a numerical approach to mining method selection in his work "Selection Procedure - A Numerical Approach", which proposes a scale for weighting each of the extraction methods. Hartman (1987) [2] developed a selection scheme based on the geometry of the deposit and the ground conditions. Later, Miller-Tait, Pakalnis and Poulin (1995) [7], of the University of British Columbia, modified the Nicholas method and added new values to the scale. Finally, there are now several approaches to the mining method selection problem based on multicriteria decision analysis, notably the application of fuzzy logic in Bitarafan and Ataei (2004) [8] and in Karadogan, Kahriman and Ozer (2008) [9]; another multicriteria decision method used to solve this problem has been AHP (Analytic Hierarchy Process), used in the works of Alpay and Yavuz (2009) [10], Azadeh, Osanloo and Ataei (2010) [11] and Bogdanovic, Nikolic and Ivana (2012) [12], which state that working with decision makers makes the selection of the final alternative depend on their expertise, since the methodologies in those works reflect the value judgments of the experts. A review of the main multicriteria methods can be found in Figueira, Greco and Ehrgott [13].

Taking the selection problem as a multicriteria decision analysis (MCDA) problem requires a detailed statement of how it will be solved. Much has been discussed about the ideal formulation and solution of multicriteria problems, but the truth is that each method has its advantages and disadvantages; a detailed description of the main methods can be found in [14] and [15], and more recent developments in [13]. This article applies the UBC method developed by [7] to reduce the alternatives and use only technically feasible alternatives in the multicriteria decision analysis; the entropy method [16] is then applied to assign the weights to the criteria. Next, the VIKOR method is applied to obtain a compromise alternative, described as the one closest to the ideal. The conclusions of the case are presented at the end.

2. Problem statement

The aim is to choose the most suitable extraction alternative for a coal deposit located on the western side of Cerro Tasajero, department of Norte de Santander, Colombia. The deposit was described in detail by an interdisciplinary group of geologists and engineers, from whom the technical parameters in Table 1 were obtained. Some of these characteristics can be corroborated in "Regiones y zonas con carbón en Colombia" [17], which lends validity to the data and allows their verification. For the multicriteria decision analysis, the alternatives considered valid are those that obtain a positive weighting when the UBC technique [7] is applied, which eliminates technically unfeasible alternatives and leaves only applicable ones. Although there is a good number of extraction methods, only 9 of the 10 methods described in [6] and [2] are considered. Based on the above literature review, a summary of the alternatives, taking [7] and [2] as references, is presented below.

Table 1. Technical parameters of Seam 20 (Manto 20).
Parameter                        Value
Thickness                        Between 0.90 m and 2.10 m
Depth                            600 m
Shape                            Tabular
Grade distribution               Uniform
Dip                              Between 12° and 20°
RSS (rock substance strength)    Very low
RMR (rock mass rating)           Between 21 and 40
Source: the authors, from information provided by the mining company.

2.1. Description of the alternatives and criteria used to solve the problem

Alternative 1, Open pit: open-pit extraction is the most common surface mining method. The stripping ratio and the slope angle are critical parameters in determining whether this particular method is applicable.

Alternative 2, Block caving: consists of dividing the deposit into large blocks of quadrangular section of several thousand square meters. Each block is undercut by making a horizontal excavation with explosives at its base.


The deposits where it is applied must be of great thickness and extent, with few waste intercalations and ramifications.

Alternative 3, Sublevel stoping: applied to vertical or steeply dipping deposits, its variants being generically classified as vertical crater retreat, longhole blasting and fan blasting.

Alternative 4, Sublevel caving: consists of dividing the deposit into levels and these, in turn, into sublevels that are extracted in descending order.

Alternative 5, Longwall: this method can be used to mine thin, stratified deposits of uniform thickness and preferably small to moderate dips.

Alternative 6, Room and pillar: in this method a set of rooms is created, leaving pillars to support the roof. The dimensions of the rooms and the section of the pillars depend on the characteristics of the mineral, the stability of the walls, the thickness of the overburden, and the stresses on the rock.

Alternative 7, Shrinkage stoping: in shrinkage stoping the ore is cut in horizontal slices, starting at the bottom and advancing upward. Shrinkage stoping is widely used in steeply dipping veins where the ore is strong enough to leave both the wall rock and the stope roof unsupported. For efficient mining the deposit should dip at more than 60 degrees.

Alternative 8, Cut and fill: the ore is extracted in horizontal slices, in ascending order, starting from the bottom gallery. Once blasted, it is completely removed from the stope through chutes, after which the opening created is backfilled with waste rock, providing a stable working platform and support for the walls.

Alternative 9, Square set: consists of timber support arranged as right parallelepipeds, in which the vertical members (props) bear the vertical pressures, the horizontal members (struts) bear the wall pressures, and the four remaining connecting members stiffen the assembly.

Five criteria are used in this selection problem: spatial characteristics of the deposit; geological and hydrogeological conditions and geotechnics; economic considerations; technological factors; and, finally, environmental factors. A method-selection case for one deposit is rarely identical to another, since the conditions of the deposit and the type of mineral require a specific analysis; for this reason the criteria and the alternatives may be extended or reduced in a specific case. A detailed analysis of each alternative and criterion can be found in [2], which presents the different extraction alternatives and the criteria weighing for and against each of them.

Criterion 1. Spatial characteristics of the deposit. These are probably the most decisive, since they largely determine whether open-pit or underground mining is chosen, the production rate, material handling, and the design of the mine in the deposit.
Sc1. Size (especially height or thickness). Maximize.
Sc2. Shape (tabular, lenticular, massive, irregular). Maximize.
Sc3. Attitude (dip or plunge). Maximize.
Sc4. Depth. Maximize.

Criterion 2. Geological and hydrogeological conditions and geotechnical properties (rock and soil mechanics). The geological characteristics of the mineral deposit and of the materials adjacent to it influence the selection of the extraction method, especially in underground methods, where excavation-control parameters are needed in the subsurface. Hydrogeology affects drainage and pumping requirements on the surface and underground. Mineralogy governs the ore-processing requirements. The mechanical properties of the materials making up the deposit and of the in-situ rock (and soil, if its thickness is considerable) are key factors in selecting the equipment in surface mining and, in turn, the class of method if mining is underground (unsupported, supported or caving).
Sc5. Distribution. Maximize.
Sc6. Rock mass rating (RMR). Maximize.
Sc7. Rock substance strength (RSS). Maximize.

Criterion 3. Economic considerations. In the end, these determine the success of a mining venture. These factors govern the choice of mining method because they affect output, investment, cash flow, payback period and profit.
Sc8. Performance rate. Maximize.
Sc9. Production (output per unit of time). Maximize.
Sc10. Capital investment. Minimize.
Sc11. Productivity (tonnes per employee-shift). Maximize.
Sc12. Comparative costs of the feasible mining methods. Minimize.

Criterion 4. Technological factors: the best match between the ground and the method to be chosen. The chosen method may have negative impacts on the extracted mineral with regard to its subsequent use (processing, smelting).
Sc13. Mine recovery (portion of the deposit actually extracted). Maximize.
Sc14. Dilution (amount of waste produced with the ore). Minimize.
Sc15. Flexibility of the method under changing conditions. Maximize.
Sc16. Selectivity of the method in separating ore from waste. Maximize.

Criterion 5. Environmental considerations: not only the physical environment, but the social, political and economic climate.
Sc17. Stability of the openings. Maximize.
Sc18. Subsidence, or effects of the excavation at the surface. Maximize.
Sc19. Health and safety conditions. Maximize.


3. Methodological proposal

The methodology proposed as a decision aid for mining method selection is developed on the basis of the methodological proposal in [18], as adapted in [1]. The application of the UBC method developed by [7] is proposed first, to reduce the set of alternatives and use only technically feasible alternatives (Fig. 1).

Figure 1. Proposed methodology. Source: the authors.

The decision matrix is built from the information in [2], which compares the advantages and disadvantages of sub-criteria 8 to 19 for each of the alternatives posed for this selection problem; the values assigned to these sub-criteria are therefore constant in any mining method selection problem. Sub-criteria 1 to 7, on the other hand, vary with the technical parameters of each specific situation and are determined through a modification of the UBC scale that preserves the cardinality and proportionality of the data and eliminates the negative values of the UBC method's own scale.

The generic decision matrix shown in Table 2 is built by converting Hartman's comparative ratings of the extraction methods to a scale E: (1, ..., k), k being the best value on the scale [2]. All the ratings given to the criteria in Table 2 are qualitative except Sc12, which is a percentage, so each rating is replaced by a numeric index from 1 to k: Scale 1: Highest 5, High 4, Moderate 3, Low 2, Lowest 1; Scale 2: Good 3, Moderate 2, Poor 1; Scale 3: Large scale 4, Large 3, Moderate 2, Small 1; Scale 4: Large 4, High 3, Moderate 2, Small 1. The result is the quantitative generic matrix in Table 3 (a sketch of this conversion follows Table 3).

To complete the decision matrix with the information corresponding to sub-criteria 1 to 7, the scale proposed by the UBC method is used, which spans the interval from -49 to 6 and uses negative values only to show which methods are not technically feasible. To work only with positive values, this scale is shifted to a positive range starting at 1 and ending at 56, so that -49 is replaced by 1 and 6 by 56, preserving the proportionality and cardinality of the data.

Table 2. Generic decision matrix (A1 Open pit, A2 Block caving, A3 Sublevel stoping, A4 Sublevel caving, A5 Longwall, A6 Room and pillar, A7 Shrinkage stoping, A8 Cut and fill, A9 Square set; Sc1-Sc7 are filled in for each specific deposit).
Sc    A1           A2        A3        A4        A5        A6        A7        A8        A9
Sc8   Rapid        Slow      Moderate  Moderate  Moderate  Rapid     Rapid     Moderate  Slow
Sc9   Large scale  Large     Large     Large     Large     Large     Moderate  Moderate  Small
Sc10  Large        High      Moderate  Moderate  High      High      Low       Moderate  Low
Sc11  High         Low       Low       Moderate  High      High      Low       Moderate  Low
Sc12  5            10        20        15        15        20        45        55        100
Sc13  High         High      Moderate  High      High      Moderate  High      High      Highest
Sc14  Moderate     High      Moderate  Moderate  Low       Moderate  Low       Low       Lowest
Sc15  Moderate     Low       Low       Moderate  Low       Moderate  Moderate  Moderate  High
Sc16  Low          Low       Low       Low       Low       Low       Moderate  High      High
Sc17  High         Moderate  High      Moderate  High      Moderate  High      High      High
Sc18  High         High      Low       High      High      Moderate  Low       Low       Low
Sc19  Good         Good      Good      Good      Good      Good      Good      Moderate  Poor
Source: the authors, from [7] and [2].

Table 3. Numeric generic decision matrix.
Sc    A1   A2   A3   A4   A5   A6   A7   A8   A9
Sc8    3    1    2    2    2    3    3    2    1
Sc9    4    3    3    3    3    3    2    2    1
Sc10   4    3    2    2    3    3    1    2    1
Sc11   4    2    2    3    4    4    2    3    2
Sc12   5   10   20   15   15   20   45   55  100
Sc13   4    4    3    4    4    3    4    4    5
Sc14   3    4    3    3    2    3    2    2    1
Sc15   3    2    2    3    2    3    3    3    4
Sc16   2    2    2    2    2    2    3    4    4
Sc17   4    3    4    3    4    3    4    4    4
Sc18   4    4    2    4    4    3    2    2    2
Sc19   3    3    3    3    3    3    3    2    1
Source: the authors, from [7] and [2].
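The verbal-to-numeric conversion between Tables 2 and 3 is mechanical; a minimal sketch follows. The dictionary and function names are ours, and the Rapid/Moderate/Slow mapping for Sc8 is inferred from the correspondence between the two tables, not stated explicitly in the source.

```python
# Scales 1-4 as described above; RATE is the inferred mapping for Sc8.
SCALE_1 = {"Highest": 5, "High": 4, "Moderate": 3, "Low": 2, "Lowest": 1}
SCALE_2 = {"Good": 3, "Moderate": 2, "Poor": 1}
SCALE_3 = {"Large scale": 4, "Large": 3, "Moderate": 2, "Small": 1}
SCALE_4 = {"Large": 4, "High": 3, "Moderate": 2, "Small": 1}
RATE = {"Rapid": 3, "Moderate": 2, "Slow": 1}

def rating_to_index(value, scale):
    """Replace a verbal rating with its numeric index on the given scale."""
    return scale[value]

print(rating_to_index("High", SCALE_1))  # -> 4, as in Table 3 for Sc11/A1
```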

Finally, given the size of the decision matrix and the need to handle subjectivity, the entropy weight-assignment method proposed by [16] is applied; it determines the criteria weights passively, with no conscious decision-making intent, removing the preferences and expectations introduced by decision makers, as can be seen in [19]. The VIKOR method proposed by [20] is used to determine the compromise alternative. According to [21], citing [22], the VIKOR method is an effective tool in multicriteria decision analysis, particularly in situations where the decision maker is unable, or does not know how, to express a preference. Each of the tools applied to solve this problem is described in detail below.

4. UBC method

The UBC selection method (see Table 4) is named after the University of British Columbia, where it was developed by [7]. It arose from the need to improve the technique developed by [6]. As a new contribution, the value -10 was introduced to assign a negative weight without eliminating a method completely, as the Nicholas technique did with the value -49; the rock mechanics ratings were also adjusted with new values. The method can be consulted in depth in [7].

Table 4. UBC method criteria.
1. General shape: Equi-dimensional - all dimensions are of the same order of magnitude. Tabular - two dimensions are many times the thickness, which does not usually exceed 35 m. Irregular - dimensions vary over short distances.
2. Seam thickness: Very narrow, < 3 m; Narrow, 3-10 m; Intermediate, 10-30 m; Thick, 30-100 m; Very thick, > 100 m.
3. Dip: Flat, < 20 degrees; Intermediate, 20-55 degrees; Steep, > 55 degrees.
4. Depth: Shallow, 0-100 m; Intermediate, 100-600 m; Deep, > 600 m.
5. Distribution: Uniform - the grade at any point of the deposit does not vary significantly from the mean grade. Gradational - the grade has zonal characteristics and changes gradually from place to place. Erratic - the grade changes radically over short distances.
6. Rock mass rating: Very weak, 0-20; Weak, 20-40; Moderate, 40-60; Strong, 60-80; Very strong, 80-100.
7. Rock substance strength (uniaxial strength / principal stress): Very weak, < 5; Weak, 5-10; Moderate, 10-15; Strong, > 15.
Source: [7].
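The screening role the UBC scores play in this methodology can be sketched in a few lines. This is an illustrative reading under stated assumptions: the input structure and function names are ours, and the +50 shift is simply the -49-to-1 / 6-to-56 relabeling described in Section 3.

```python
def shift_ubc(score):
    # -49 -> 1 and 6 -> 56: a +50 shift preserves proportionality and cardinality
    return score + 50

def feasible_alternatives(ubc_totals):
    """Keep only alternatives with a positive total UBC score and express
    them on the shifted 1..56 scale used in the final decision matrix.
    ubc_totals: dict mapping alternative name -> total UBC score."""
    return {alt: shift_ubc(s) for alt, s in ubc_totals.items() if s > 0}
```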

5. Entropy method

The entropy method was developed by [16] as an objective method of weight assignment, as a function of the decision matrix alone, unaffected by the decision maker's preference. According to [15], the relative importance of criterion $j$ in a decision situation, measured by its weight $w_j$, is directly related to the amount of information intrinsically contributed by the set of alternatives with respect to that criterion: the more diversity there is in the evaluations of the alternatives, the more important the criterion should be. The measure of this diversity is based on the concept of the entropy of an information channel proposed by [23]. The procedure is as follows.

The evaluations $a_{ij}$ ($i = 1, \dots, m$; $j = 1, \dots, n$) are taken normalized as fractions of the sum of the original evaluations of each criterion $j$. The entropy $E_j$ is computed:

$E_j = -\dfrac{1}{\log m} \sum_{i} a_{ij} \log a_{ij}$    (1)

where $m$ is the number of alternatives in the matrix of normalized evaluations and the $a_{ij}$ are the normalized criteria or attributes. The diversity of the criterion, $D_j$, is computed:

$D_j = 1 - E_j$    (2)

Finally, the normalized weight of each criterion, $W_j$, is computed:

$W_j = D_j \Big/ \sum_{j} D_j$, so that $\sum_{j} W_j = 1.0$    (3)
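A minimal computational sketch of Eqs. (1)-(3) may make the procedure concrete. It assumes NumPy; the function name and the toy matrix are ours, not from the original.

```python
import numpy as np

def entropy_weights(matrix):
    """Objective criteria weights from a decision matrix
    (rows = alternatives, columns = criteria), per Eqs. (1)-(3)."""
    a = np.asarray(matrix, dtype=float)
    m = a.shape[0]
    p = a / a.sum(axis=0)            # fractions of each column sum
    plogp = np.zeros_like(p)         # convention: 0 * log(0) = 0
    mask = p > 0
    plogp[mask] = p[mask] * np.log(p[mask])
    E = -plogp.sum(axis=0) / np.log(m)   # entropy E_j, Eq. (1)
    D = 1.0 - E                          # diversity D_j, Eq. (2)
    return D / D.sum()                   # weights W_j, Eq. (3)

# Toy example: the second criterion spreads the alternatives far more,
# so it receives by far the larger weight.
print(entropy_weights([[3, 20], [3, 55], [4, 100]]))
```

This is exactly the behavior exploited in Section 7: the cost criterion sc12, whose values range from 15 to 100, ends up with the largest weight.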

6. VIKOR method

The VlseKriterijumska Optimizacija I Kompromisno Resenje method (Bosnian for multicriteria optimization and compromise solution; VIKOR from here on) was developed in 1998 by Serafim Opricovic as a multicriteria analysis method for the optimization of complex discrete systems with conflicting and incommensurable criteria [20]. According to [21], citing [24], the VIKOR method determines the ranking of the alternatives, the compromise solution, and the stability intervals of the compromise solution obtained with the given initial weights. The method ranks a set of alternatives with conflicting criteria and selects the compromise alternative, understood as the alternative that is closest to the ideal and farthest from the worst of them. The method produces a multicriteria ranking based on a particular measure of "closeness" to the "ideal" solution. The multicriteria measure for compromise ranking is developed from the $L_p$-metric, used as an aggregation function in compromise programming [26,29].

The first step is to build the decision matrix and normalize it by the i-th component of the unit vector:

$V_i = \dfrac{a_i}{\left( \sum a_i^2 \right)^{1/2}}$    (4)

where $0 < V_i < 1$. The alternatives of the set $J$ are denoted $A_1, A_2, \dots, A_j, \dots, A_m$. For alternative $A_j$, the rating of the i-th aspect is denoted $f_{ij}$; that is, $f_{ij}$ is the value of the i-th criterion for alternative $A_j$; $m$ is the number of alternatives and $n$ the number of criteria. The solution of the VIKOR method starts from the following form of the $L_p$-metric:

$L_{p,j} = \left\{ \sum_{i=1}^{n} \left[ w_i \left( f_i^{*} - f_{ij} \right) / \left( f_i^{*} - f_i^{-} \right) \right]^{p} \right\}^{1/p}, \quad 1 \le p \le \infty; \; j = 1, 2, \dots, J$    (5)

This general concept of $L_p$ distance provides, in the VIKOR method, the Manhattan and Chebyshev distances for computing $S_j$ and $R_j$. In the first case, for $S_j$, the $L_1$ norm ($p = 1$) is used, corresponding to the largest distance measure between two points; in the second case, $R_j$ uses the Chebyshev distance, or $L_\infty$ norm, which refers to the smallest distance between two points [23,30].

According to [21], citing [27], the compromise ranking of VIKOR is carried out in the following steps:

a) Assuming that each alternative is evaluated according to each criterion function, the compromise ranking is developed by comparing the measure of closeness to the ideal solution $F^{*}$. The best $f_i^{*}$ and worst $f_i^{-}$ values are determined for each criterion function $i = 1, 2, \dots, n$. If the i-th criterion function represents a benefit, then $f_i^{*} = \max_j f_{ij}$ and $f_i^{-} = \min_j f_{ij}$.

b) With the best and worst values of each criterion function identified, the values $S_j$ and $R_j$, $j = 1, 2, \dots, J$, are determined through the relations

$S_j = \sum_{i=1}^{n} w_i \left( f_i^{*} - f_{ij} \right) / \left( f_i^{*} - f_i^{-} \right)$    (6)

$R_j = \max_i \left[ w_i \left( f_i^{*} - f_{ij} \right) / \left( f_i^{*} - f_i^{-} \right) \right]$    (7)

where $S_j$ represents the distance from the alternative to the best solution, $R_j$ represents the distance between the alternative and the worst option, and the $w_i$ are the criteria weights expressing the decision makers' preference as the relative importance of the criteria.

c) The values $Q_j$ are computed through the relation

$Q_j = v \, \dfrac{S_j - S^{*}}{S^{-} - S^{*}} + (1 - v) \, \dfrac{R_j - R^{*}}{R^{-} - R^{*}}$    (8)

where $S^{*} = \min_j S_j$, $S^{-} = \max_j S_j$, $R^{*} = \min_j R_j$, $R^{-} = \max_j R_j$, and $v$ is introduced as a value representing the decision maker's position toward risk. Normally $v$ is taken as 0.5 to represent a risk-neutral position, although any value from 0 to 1 may be used.

d) The alternatives are ranked by sorting the values of S, R and Q. The results obtained are three ordered lists, the best alternative in each being the one with the smallest value.

e) The alternative $A^{(1)}$, which is the best of all according to the measure Q (minimum), is proposed as the compromise solution if the following conditions are met:
I. Acceptable advantage: $Q(A^{(2)}) - Q(A^{(1)}) \ge DQ$, where $DQ = 1/(J - 1)$ and $A^{(2)}$ is the alternative occupying the second position in the list ordered by the value of Q.
II. Acceptable stability in decision making: alternative $A^{(1)}$ must also be the best ranked in the S and/or R lists.
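Steps a) to c) can be condensed into a short sketch. This is an illustrative implementation of Eqs. (6)-(8), assuming NumPy; the function and parameter names are ours, and it presumes that every criterion discriminates between alternatives (so no denominator vanishes).

```python
import numpy as np

def vikor(matrix, weights, senses, v=0.5):
    """S_j, R_j and Q_j per Eqs. (6)-(8) for a matrix with rows = alternatives,
    columns = criteria. `senses` lists 'max' (benefit) or 'min' (cost)."""
    f = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    is_max = np.array([s == "max" for s in senses])
    f_star = np.where(is_max, f.max(axis=0), f.min(axis=0))    # best f_i*
    f_minus = np.where(is_max, f.min(axis=0), f.max(axis=0))   # worst f_i-
    d = w * (f_star - f) / (f_star - f_minus)  # weighted normalized regrets
    S = d.sum(axis=1)                          # Eq. (6), L1 (Manhattan)
    R = d.max(axis=1)                          # Eq. (7), L-infinity (Chebyshev)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))      # Eq. (8)
    return S, R, Q
```

Ranking the alternatives by ascending Q and testing conditions I and II on the resulting list completes the procedure; the paper's Table 7 reports the S, R and Q values obtained in this way for the five feasible alternatives.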

7. Methodological development

The UBC technique is applied to reduce the alternatives to only five, which are technically valid given the specific conditions of this mineral deposit. Its application can be seen in Fig. 2; the valid alternatives are Alternative 3, Sublevel stoping; Alternative 5, Longwall; Alternative 6, Room and pillar; Alternative 8, Cut and fill; and Alternative 9, Square set.

Figure 2. Results of the UBC technique (positive scores mark the technically feasible methods). Source: the authors, from [7].

Table 5. Final decision matrix.

No.  Criterion                                            wj (%)  Dir.  A1   A2   A3   A4   A5
c1   General shape                                         4.28   max   54   54   54   54   51
c2   Seam thickness                                        4.42   max   40   54   54   54   54
c3   Inclination                                           4.28   max   52   54   54   51   52
c4   Depth                                                 4.28   max   52   53   52   54   52
c5   Distribution                                          4.29   max   54   54   54   52   50
c6   Rock mass ratings                                     4.34   max   53   61   50   59   59
c7   Rock substance strength                               4.41   max   50   62   50   64   61
c8   Performance rate                                      5.50   max    2    2    3    2    1
c9   Production (output per unit of time)                  5.78   max    3    3    3    2    1
c10  Capital investment                                    5.75   min    2    3    3    2    1
c11  Productivity (tonnes per employee shift)              5.34   max    2    4    4    3    2
c12  Comparative costs of suitable mining methods         10.58   min   20   15   20   55  100
c13  Mine recovery (portion of the deposit extracted)      4.72   max    3    4    3    4    5
c14  Dilution (amount of waste produced with the ore)      5.75   min    3    2    3    2    1
c15  Flexibility of the method to changing conditions      5.10   max    2    2    3    3    4
c16  Selectivity of the method between ore and waste       5.67   max    2    2    2    4    4
c17  Stability of the openings                             4.41   max    4    4    3    4    4
c18  Subsidence, or effects on the surface                 5.32   min    2    4    3    2    2
c19  Health and safety conditions                          5.78   max    3    3    3    2    1

A1: Sublevel stoping (Cámaras por Subniveles); A2: Longwall (Tajo largo); A3: Room and pillar (Cámaras y pilares); A4: Cut and fill (Corte y Relleno); A5: Square set (Método de Entibación con Cuadros). Dir.: optimization direction of each criterion.
Source: The authors


Figure 3. Weight assignment with the entropy method: weights of criteria c1-c19 (vertical axis from 0 to 0.4). Source: The authors
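The entropy weighting shown in Fig. 3 follows the standard Shannon procedure attributed to [23]. A minimal sketch is given below; it is not the authors' code, and it assumes a (J x n) decision matrix such as Table 5 with strictly positive entries.

```python
import numpy as np

def entropy_weights(F):
    """Shannon-entropy weights for the n criteria of a (J, n) decision matrix.

    Criteria on which the alternatives differ strongly carry more information
    (lower entropy) and therefore receive a larger weight.
    """
    F = np.asarray(F, dtype=float)
    J = F.shape[0]
    p = F / F.sum(axis=0)                  # normalize each criterion column
    plogp = np.zeros_like(p)
    mask = p > 0
    plogp[mask] = p[mask] * np.log(p[mask])
    e = -plogp.sum(axis=0) / np.log(J)     # entropy of each criterion, in [0, 1]
    d = 1.0 - e                            # degree of divergence per criterion
    return d / d.sum()                     # weights summing to 1
```

Applied to the columns of Table 5, a procedure of this kind concentrates weight on c12 (comparative costs), whose values spread from 15 to 100 while most other criteria vary little between alternatives.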

Table 6. Dominance matrix (number of criteria on which the row alternative performs at least as well as the column alternative) and percentage dominance matrix.

      A3   A5   A6   A7   A8
A3     0   17   16   15   14
A5    10    0   12   11   10
A6    13   16    0   14   11
A7    11   14   12    0   13
A8    10   14   10   13    0

      A3    A5    A6    A7    A8
A3    0%   89%   84%   79%   74%
A5   53%    0%   63%   58%   53%
A6   68%   84%    0%   74%   58%
A7   58%   74%   63%    0%   68%
A8   53%   74%   53%   68%    0%

Source: The authors
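Both matrices can be generated mechanically from the decision matrix. The sketch below is illustrative rather than the authors' code; it counts, for every ordered pair, the criteria on which the row alternative is at least as good as the column alternative, a convention that reproduces the published figures (e.g., 17 of 19 criteria rounds to 89%).

```python
import numpy as np

def dominance_matrices(F, benefit):
    """Dominance counts and percentages for a (J, n) decision matrix.

    F[j, i]  : value of alternative j on criterion i.
    benefit  : (n,) booleans, True where larger values are better.
    """
    F = np.asarray(F, dtype=float)
    sign = np.where(benefit, 1.0, -1.0)
    # at_least[a, b, i] is True when alternative a is at least as good as b on i
    at_least = sign * (F[:, None, :] - F[None, :, :]) >= 0
    counts = at_least.sum(axis=2)              # criteria on which row beats column
    np.fill_diagonal(counts, 0)                # the paper leaves the diagonal at 0
    percent = np.round(100.0 * counts / F.shape[1]).astype(int)
    return counts, percent
```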

Once the technically viable alternatives are available, the number of alternatives to be considered in the development of the VIKOR method is reduced. These accepted alternatives can be seen in Table 5, which presents the final decision matrix for this problem. Based on the final matrix presented in Table 5, the weights are assigned by means of the entropy method, as shown in Fig. 3; for this problem, the financial criterion c12 turns out to be the most important for the final decision. As can be observed in the dominance and percentage dominance matrices (see Table 6), no alternative dominates the others on all criteria; the index Q is therefore computed (see Table 7) to present the ranking of alternatives and the compromise alternative, choosing as the compromise alternative the one with the smallest value of the index Q.

Table 7. Computation of the Q index.

Alternative                                       Q            S            R
Sublevel stoping (Cámaras por Subniveles)         0.08761501   0.387292433  0.07848498
Longwall (Tajo largo)                             0            0.337623549  0.07848498
Room and pillar (Cámaras y pilares)               0.00966963   0.343105255  0.07848498
Cut and fill (Corte y Relleno)                    0.28701316   0.412714803  0.15802165
Square set (Método de Entibación con Cuadros)     1            0.621073196  0.33579601

Source: The authors

With J = 5 alternatives, DQ = 1/(J - 1) = 0.25, and Q(A(2)) - Q(A(1)) = 0.00966963 - 0 < DQ, so condition I (acceptable advantage) is not satisfied; following step f), a compromise set is proposed. Longwall, room and pillar and sublevel stoping all satisfy Q(A(j)) - Q(A(1)) < DQ, whereas cut and fill (0.28701316) and square set (1) do not.
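Plugging the published Q values of Table 7 into the acceptance test of step f) reproduces this compromise set. The brief check below is illustrative (array order follows the table):

```python
import numpy as np

Q = np.array([0.08761501, 0.0, 0.00966963, 0.28701316, 1.0])   # Table 7 order
names = ["Sublevel stoping", "Longwall", "Room and pillar",
         "Cut and fill", "Square set"]
order = np.argsort(Q)
DQ = 1.0 / (len(Q) - 1)            # DQ = 0.25 for J = 5 alternatives
compromise = [names[j] for j in order if Q[j] - Q[order[0]] < DQ]
print(compromise)                  # ['Longwall', 'Room and pillar', 'Sublevel stoping']
```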


Figure 4. Computation of the Q index: Q, S and R values for the five alternatives of Table 7 (vertical axis from 0 to 1.2). Source: The authors

8. Conclusions

The results of the VIKOR method (longwall, room and pillar, and sublevel stoping) form a set of compromise alternatives, so any of these three alternatives is recommended for extracting the mineral deposit. The solution set was validated by mining engineers and geologists. The company referred to in this case study uses the longwall method, one of the alternatives suggested as a result of applying the methodology proposed in this work.

The dominance analysis is very useful because it identifies dominated alternatives and helps to determine whether one alternative dominates all the others. The percentage dominance analysis makes the degree of superiority between alternatives easy to visualize, which is useful for progressively identifying suitable alternatives for the solution of the problem.


The generic decision matrix corresponding to Table 3 is an original contribution to the solution of the mining extraction problem, since it contains objective data on parameters specific to each type of mineral extraction. This matrix only requires the data for criteria 1 and 2 to be entered, as these are the only ones that change from one mineral deposit to another.

Combining the VIKOR and entropy methods to rank the alternatives considerably reduces the solution time for this problem: compared with other multicriteria approaches such as AHP, this methodology only requires entering the data corresponding to the technical parameters of each deposit (criteria 1 and 2), so expert opinion is used only to validate the results.

One of the problems addressed by the proposed methodology is the conflict of interest and corruption present in the assignment of areas for mining operations in Colombia. Because the methodology does not require expert judgment to rank the set of alternatives, particular influences and preferences cannot affect the execution of a mining project with a technically validated extraction method.

For future studies, we propose adding a subjective assessment to the entropy method in a single weight vector, as shown in [28], in order to take expert participation into account in the solution of the problem. Aspects related to the conflicts of interest and influences inherent in the multi-criteria, multi-decision-maker approach typical of this type of decision in mining projects should also be addressed.

References

[1] Gelvez, J., Triana, L. and Aldana, F., Mining method selection methodology by multiple criteria decision analysis - Case study in Colombian coal mining. International Symposium of the Analytic Hierarchy Process, Washington D.C., USA, 2014.
[2] Hartman, H.L. and Mutmansky, J.M., Introductory mining engineering. John Wiley & Sons, 2002, 570 P.
[3] Boshkov, F. and Wright, S.H., Basic and parametric criteria in the selection, design and development of underground mining systems, in: SME Mining Engineering Handbook, New York, 1973.
[4] Morrison, R.G.K., A philosophy of ground control. Montreal, Canada: McGill University, 1976, pp. 125-159.
[5] Laubscher, D.H., Selection of mass underground mining methods, design and operation of caving and sublevel stoping mines. New York, Chapter 3, 1981, pp. 23-38.
[6] Nicholas, D.E., Selection procedure, in: SME Mining Engineering Handbook, second edition, Society for Mining Engineering, Metallurgy and Exploration, 1992, pp. 2090-2106.
[7] Miller-Tait, R., Panalkis, L. and Poulin, R., UBC mining method selection, in: Proceedings of the Mine Planning and Equipment Selection Symposium, 1995, pp. 163-168.
[8] Bitarafan, M.R. and Ataei, M., Mining method selection by multiple criteria decision making tools, The Journal of the South African Institute of Mining and Metallurgy, 108 (10), pp. 493-498, 2004.
[9] Karadogan, A., Kahriman, A. and Ozer, U., Application of fuzzy set theory in the selection of underground mining method, The Journal of the South African Institute of Mining and Metallurgy, 108, pp. 73-79, 2008.

[10] Alpay, S. and Yavuz, M., Underground mining method selection by decision making tools, Tunnelling and Underground Space Technology, 24 (2), pp. 173-184, 2009. DOI: 10.1016/j.tust.2008.07.003
[11] Azadeh, A., Osanloo, M. and Ataei, M., A new approach to mining method selection based on modifying the Nicholas technique, Applied Soft Computing, 10, pp. 1040-1061, 2010. DOI: 10.1016/j.asoc.2009.09.002
[12] Bogdanovic, D., Nikolic, D. and Ivana, I., Mining method selection by integrated AHP and PROMETHEE method, Anais da Academia Brasileira de Ciencias, 84 (1), pp. 219-233, 2012. DOI: 10.1590/S0001-37652012000100023
[13] Greco, S., Multiple criteria decision analysis: State of the art surveys series, in: Greco, Salvatore (Ed.), International Series in Operations Research & Management Science, 2005, 1048 P.
[14] Romero, C., Teoría de la decisión multicriterio: Conceptos, 1993.
[15] Pomerol, J. and Barba-Romero, S., Decisiones multicriterio: fundamentos teóricos y utilización práctica, Colección de Economía, Servicio de Publicaciones, Universidad de Alcalá de Henares, 1997.
[16] Zeleny, M., Multiple criteria decision making: Eight concepts of optimality, Human Systems Management, 17, pp. 97-107, 1998.
[17] Mejia-Umana, L.J. and Pulido-Gonzalez, O., Regiones y zonas con carbón en Colombia, Revista de la Facultad de Ciencias Basicas, 18, 1993.
[18] Cortés-Aldana, F.A., García-Melón, M., Fernández-de-Lucio, I., Aragonés-Beltrán, P. and Poveda-Bautista, R., University objectives and socioeconomic results: A multicriteria measuring of alignment, European Journal of Operational Research, 199 (3), pp. 811-822, 2009. DOI: 10.1016/j.ejor.2009.01.065
[19] Aghajani-Bazzazi, A., Osanloo, M. and Karimi, B., Deriving preference order of open pit mines equipment through MADM methods: Application of modified VIKOR method, Expert Systems with Applications, 38 (3), pp. 2550-2556, 2011. DOI: 10.1016/j.eswa.2010.08.043
[20] Opricovic, S. and Tzeng, G.-H., Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS, European Journal of Operational Research, 156 (2), pp. 445-455, 2004. DOI: 10.1016/S0377-2217(03)00020-1
[21] Quijano-Hurtado, R., Diseño e implementación de una plataforma integrada de modelación para la planificación energética sostenible Modergis, estudio de caso Colombia, Tesis, Universidad Nacional de Colombia, Sede Medellín, Colombia, 2012.
[22] Jingzhu, W. and Xiangyi, L., The multiple attribute decision-making VIKOR method and its application, International Conference on Wireless Communications, Networking and Mobile Computing, WiCOM 2008, 2008.
[23] Vajda, S., Shannon, C.E. and Weaver, W., The mathematical theory of communication, Math. Gaz., 34, 1950, 312 P.
[24] Opricovic, S., Multi-criteria optimization of civil engineering systems, 1998.
[25] Yu, P.L., A class of solutions for group decision problems, Management Science, 19 (8), pp. 936-946, 1973. DOI: 10.1287/mnsc.19.8.936
[26] Bellver, J.A. and Martinez, F.G., Nuevos métodos de valoración: modelos multicriterio, 2013.
[27] Opricovic, S. and Tzeng, G.-H., Extended VIKOR method in comparison with outranking methods, European Journal of Operational Research, 178 (2), pp. 514-529, 2007. DOI: 10.1016/j.ejor.2006.01.020
[28] Aghajani-Bazzazi, A., Osanloo, M. and Karimi, B., Deriving preference order of open pit mines equipment through MADM methods: Application of modified VIKOR method, Expert Systems with Applications, 38 (3), pp. 2550-2556, 2011. DOI: 10.1016/j.eswa.2010.08.043
J.I. Romero-Gélvez, is a PhD student in Engineering - Industry and Organizations at the Universidad Nacional de Colombia. He is an assistant lecturer at the Universidad Sergio Arboleda and the Escuela Colombiana de Carreras Industriales, Bogotá, Colombia. He holds an MSc in Industrial Engineering from the Universidad Nacional de Colombia, Bogotá, a Sp. degree in Business Management from the Universitat Politécnica de Valencia, Spain, a degree in Industrial Engineering from the Universidad de Santander, Colombia, and a diploma in University Teaching from ECCI. He has professional experience in mining consultancy projects and academic experience teaching undergraduate and graduate courses in several areas of engineering. ORCID: 0000-0002-5335-0819

F.A. Cortés-Aldana, has been an associate professor in the Department of Systems and Industrial Engineering of the Universidad Nacional de Colombia, Bogotá, Colombia, since 1997. He received a Dr. degree in Engineering Projects and Innovation in 2006 from the Universidad Politécnica de Valencia, Spain, and holds an MSc in Economic Sciences, a Sp. degree in Administration from the Universidad Santo Tomás, Bogotá, Colombia (1998), and a degree in Systems Engineering from the Universidad Nacional de Colombia, Bogotá. He teaches the Decision Making course in the MSc program in Industrial Engineering at the Universidad Nacional de Colombia. Web page: http://www.docentes.unal.edu.co/facortesa/. Member of the ALGOS research group; web page: http://dis.unal.edu.co/grupos/algos/. ORCID: 0000-0002-4711-0667

G. Franco-Sepúlveda, graduated as a Mining and Metallurgy Engineer in 1998, received an MSc in Economic Sciences in 2006, and is a PhD candidate in Engineering at the Universidad Nacional de Colombia, Medellín, Colombia. He is currently an assistant professor with exclusive dedication in the Department of Materials and Minerals of the Facultad de Minas, Universidad Nacional de Colombia, and director of the Mine Planning Group GIPLAMIN (category C). ORCID: 0000-0003-4579-8389



MGE2: A framework for cradle-to-cradle design

María-Estela Peralta-Álvarez a, Francisco Aguayo-González b, Juan-Ramón Lama-Ruiz c & María Jesús Ávila-Gutiérrez d

a Engineering Design Department, University of Seville, Seville, Spain. mperalta1@us.es
b Engineering Design Department, University of Seville, Seville, Spain. faguayo@us.es
c Engineering Design Department, University of Seville, Seville, Spain. jrlama@us.es
d Engineering Design Department, University of Seville, Seville, Spain. mavila@us.es

Received: May 2nd, 2014. Received in revised form: December 1st, 2014. Accepted: December 17th, 2014.

Abstract Design and ecology are critical issues in the industrial sector. Products are subject to constant review and optimization for survival in the market, and limited by their impact on the planet. Decisions about a new product affect its life cycle, consumers, and especially the environment. In order to achieve quality solutions, eco-effectiveness must be considered, therefore, in the design of a process, its product development and associated system. An orderly methodology is essential to help towards creating products that meet both user needs and current environmental requirements, under paradigms that create environmental value. To date, the industry has developed techniques in an attempt to address these expectations under Cradle-to-Cradle (C2C), which is loosely structured around the conceptual frameworks and design techniques. The present work describes a new framework that encompasses all stages of design, and enables interaction under a set of principles developed for C2C. Under this innovative new paradigm emerges the Genomic Model of Eco-innovation and Eco-design, proposed as a methodology for designing products that meet individual and collective needs, and which enables the design of eco-friendly products, by integrating them into the framework of the ISO standards of Life Cycle Assessment (LCA), eco-design, eco-labeling, and C2C certification. Keywords: Eco-design, Sustainability, Industrial Ecology, Life Cycle Assessment, Eco-effectiveness, Eco-innovation

MGE2: Un marco de referencia para el diseño de la cuna a la cuna Resumen El diseño y la ecología son temas cruciales en el sector industrial. Los productos están sometidos a revisiones constantes para sobrevivir en el mercado y a optimizaciones que los mantienen bajo los límites de impacto sobre el planeta. Las decisiones acerca de un nuevo producto afectan su ciclo de vida, los consumidores, y sobre todo el medio ambiente. Con el fin de lograr soluciones de calidad, Eco eficacia debe considerarse, por lo tanto, en el diseño de un proceso, su desarrollo de productos y el sistema asociado. Una metodología ordenada es esencial para ayudar a la creación de productos que satisfagan tanto las necesidades de los usuarios y los requisitos medioambientales actuales, bajo los paradigmas que crean valor ambiental. Hasta la fecha, la industria ha desarrollado técnicas en un intento de abordar estas expectativas bajo la cuna a la cuna (C2C), que está vagamente estructurada alrededor de los marcos conceptuales y técnicas de diseño. El presente trabajo describe un nuevo marco que abarca todas las etapas de diseño, y permite la interacción bajo un conjunto de principios desarrollados por C2C. En virtud de este nuevo e innovador paradigma surge el Modelo de Genómica de Ecoinnovación y el diseño ecológico, propuesto como una metodología para el diseño de productos que satisfagan las necesidades individuales y colectivas, y que permite el diseño de productos respetuosos del medio ambiente, mediante su integración en el marco de las normas ISO de Análisis de Ciclo de vida (ACV), el ecodiseño, ecoetiquetado y la certificación C2C. Palabras clave: Ecodiseño, Sostenibilidad, Ecología Industrial, Análisis de ciclo de vida, Eco-efectividad, Eco-innovación.

1. Introduction

To meet present needs without compromising the ability of future generations to meet their needs is the most widely used definition of sustainability. This concept has become highly relevant, now recognised as the basis of industrial, business, economic, governmental or social activity. Sustainability is built on three vectors that define and develop

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 137-146. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.43263



the “3E” sustainable strategy: Economy, Equity and Ecology [1]. Over recent decades, these vectors have occupied various positions, both while being aimed at reaching their full interaction, and in industrial activities considered to be sustainable. These dimensions or vectors of sustainability are deployed sequentially in order to attain business goals, forming a pyramid whose economic base needed guaranteeing before taking into account any social and ecological criteria. Through innovation and constant attempts and proposals for change towards sustainable development [2], the industry has built various frameworks (paradigms in the design), all based on a group of principles, techniques and tools, among which the most significant are Natural Capitalism, the Natural Step, Cradle-to-Cradle (C2C), and Permaculture. These approaches all lay out a new distribution of the three vectors of 3E sustainability, arranged at the vertices of a triangular mesh where sustainability is addressed equally from any point of view. By taking the innovative cradle-to-cradle (C2C) paradigm [3] as the origin, and by considering it as the most significant framework for the advancement of sustainability in the context of engineering projects in view of its operational and ecosystematic character, this paper presents a new model of design and development of industrial products (MGE2). Our principal goal is the realization of theoretical contributions, by establishing the foundations and principles of bio-inspired C2C; followed by our goal to develop a specific and practical methodological proposal of bionic design which can be supported by Concurrent Engineering and PLM (Product Life Management) environments [4], whilst taking into account a continuous review based on Life-Cycle Assessment. In this proposal, new ideas of the C2C perspective are coordinated, thereby constituting a way to implement the basic principles of this new paradigm, which reflect the lessons learned from the eco-efficiency and eco-innovation approaches. In this new perspective of eco-efficiency, the model can support all the ISO standards requirements introduced to date, it can extend the range of solutions in order to improve the performance of impact minimization, and it can help to resolve, through the design of sustainable products, the environmental problem that current industry is causing on the planet.

2. Material

Sustainable design involves a strategy that encompasses technological, economic, cultural, social, technical-productive, aesthetic, and environmental factors. The consideration of this set of issues within design implies that those industrial organizations that carry out eco-design projects obtain a set of benefits as a consequence of the introduction of an innovative factor in their business policy. To date, design for sustainability and environmental management in industry have evolved into the following stages:

a) Reactivity: Knowledge and reaction. Mass production in industry has marked the future of the planet through its high consumption of resources and excessive pollution. Faced with this situation, and with the alarm raised by experts and environmental groups, governments began to take measures and to demand certain actions from heads of industry. Society began to become aware of the problem.

b) Gate to Gate: End-of-pipe technologies (1). After this first period of awareness, public administration introduced the first legislation that demanded pollution control, for which they began to implement the end-of-pipe technologies, based on pollution control (once it is triggered) at the end of the production line. This involved an additional demand for energy, materials and specialized equipment.

c) Cradle to Gate: Optimization (2). The eco-efficiency idea was extended when the industry realized that the optimization of activities and processes was the next best option. To this end, the vision of preventing pollution rather than fighting it was incorporated, and focused mainly on the manufacturing phase and on the reduction of raw materials.

d) Cradle to Grave: Eco-efficiency (3). A step beyond optimization, the idea of eco-efficiency as a basis for industrial strategy was expanded, and was developed not only in the manufacturing phase, but also to cover all stages of the life cycle, from the extraction of materials to the end of the life of products. This involved comprehensive supervision and action in all stages of the life cycle of the product based on ISO 14000 standards, the implementation of legislation and BREFs, Best Available Techniques and optimization of technologies, in order to improve the control process and pollution prevention.

e) Cradle to Cradle: Eco-effectiveness (4). As the latest advance, the C2C perspective is under development. Its aim is not only to ensure the efficiency of processes throughout the life-cycle stages, but also involves a thorough study of material flow, essential for the end of the useful life of products, since they can be reused or recycled (without entailing reduced quality). This would eliminate the current system of waste landfill or pollution of the atmosphere by means of the many existing recovery systems.

The first noteworthy mention of C2C was the publication of "Cradle to Cradle: Remaking the Way We Make Things," written by Michael Braungart and William

Figure 1. Evolution of the life cycle of products towards sustainability; Source: The authors


McDonough [1], which introduced the first foundations of a new paradigm for ecological industry. The authors considered this innovative perspective as the start of the "next industrial revolution." Thanks to the consideration of the design of products and systems under the perspective of sustainability during their whole life cycle, an architecture is conceived of the product and associated systems (manufacturing processes, use and disposal), seamlessly integrated with the flows of matter and energy of the natural ecosystem (naturesphere) and technical ecosystem (technosphere). The three aforementioned dimensions of sustainability are available through C2C, simultaneously articulated and under a fractal design [1]. Sustainability is placed in a triangular domain where none of the three concepts (economy, ecology, equity) is trivial, since all three are considered as equal in size, value and interest, thereby constituting the sustainable methodology of welfare economics. When tackling the design of a new product, each vertex of the fractal triangle and the interaction of each concept with the other two vertices are taken into account, without forgetting any of the qualities that will fully satisfy (thanks to the product) the overall needs that society demands (present or future), the environment, or project viability. This approach eliminates the following situations of unsustainability: 1) An economic and financial capital outlook, suitable only for product profit (capitalism): based exclusively on profits without considering environmental and social aspects. 2) A vision of equity and of social capital, with attention paid to the market sectors of disadvantaged groups and to cultural sustainability. This approach fails to consider both economic and environmental aspects. 3) An ecological and natural capital perspective for the sustainable integration of the product into the environment without considering social and economic criteria. This new strand of research is currently under development, with high expectations of its full implementation in the future, whereby it will set a new paradigm for design, and will develop products and industrial systems based on the search for ecoefficient solutions (quality) with a closed life cycle. This situation is supported by the European project Cradle to Cradle Network (C2CN) [23], approved in February 2010 for the development of the C2C paradigm and its distribution in Europe, within the INTERREG IVG (Innovation and Environment Regions of Europe Sharing Solutions). The C2C framework is insufficiently developed for the further incorporation into the product design requirements that the products can also be regenerated at the end of their useful life, and that they generate environmental value within their life cycle. This determines the need for the design and development of products, which, in addition to conceiving sustainability in a systemic manner, also integrate life-cycle analysis (LCA), cleaner production and industrial ecology, and eco-efficiency and eco-effectiveness [5], all within the framework of Best Available Techniques. To this end, we propose and develop the “Genomic Model of EcoInnovation and Eco-design, MGE2.” This is a new process for the design and development of products that converts C2C into a practical and applicable technique.

3. Methodology 3.1. Cradle to Cradle - C2C The core of the C2C paradigm [1] consists of eco-innovative design solutions inspired by nature, in their closed cycles of materials and eco-efficient industrial metabolism (in the absence of xenobiotic substances). The objective is to incorporate ecoefficient solutions that add value in order to help minimize the use of natural resources, by placing value on resources manufactured in successive cycles, which involve a parallel reduction of environmental degradation. To this end, the methodology must be designed in such a way that nature is seen as: a model (through imitation of forms, processes, flows, interactions and systems); units of measure (via comparison of designs with natural references and verification of whether the solutions are as effective, efficient, simple, and sustainable as those found in nature); and mentor (through the acceptance that human activity and industrial practice is part of nature, and acting accordingly). Approaches that arise in relation to the concept of eco-effectiveness in C2C [26], provide the search for quality solutions, by doing “more with less” without slowing down the environmental problem (which does not just minimize resource consumption, emissions and waste, but actually eliminates them), all according to the concept of Biomimicry, otherwise known as Innovation inspired by nature [6].The authors of this idea [3] propose a series of ideas that we have organized into nine practical principles, (Pi), which ensure products are obtained that incorporate closed loops, no waste generation, and recovery of all materials without any loss of quality, thereby keeping the natural capital of the planet in line with the character of a system that is open in energy and closed in materials. P1. Proactive refocus (positive impacts). As opposed to the reactive approach of environmentalists to "reduce, reutilize and reuse" that only slows down degradation without resolving the problem, this principle proposes proactive action before the generation of impacts. It rejects the assumption that the industry must inevitably destroy the environment, and it recognizes the potential for innovation, ingenuity, creativity and prudence, and imagines systems which purify water, air and soil, thereby helping to regenerate the environmental value lost in recent decades. P2. Systemic and integrated conception of product metabolism (systemic approach). This principle includes a systemic perspective of the lifecycle of products by introducing closed metabolic processes for technical and biological nutrients of the naturesphere and technosphere. This setup of closed cycles converts the output into input (waste = food). P3. Fractalization of sustainability (eco-innovation). The 3E strategy transforms the problem-resolution process and obtains solutions based on opportunities of values with a triple calculation of results (economic, social, and ecological). P4. Bio-inspired eco-innovation (biomimicry). The search for quality solutions based on effective bio-inspired innovations[6]. This will rule out ineffective solutions (such as continuous optimization which minimizes but fails to eliminate the problem of eco-efficiency), by endowing products and


industrial systems with added environmental value, and by helping reduce the use of natural resources and reduce environmental degradation, which directly or indirectly contributes towards the minimization of environmental impact. P5. The product as a living being and its system, associated as an ecosystem (environmentally friendly). Products are considered and developed as the metaphor of a living being with its complex relationships with the environment, whose metabolic flows of biological and technical nutrients are included within closed cycles without any loss of value and without damage to the environment. P6. Eco-intelligence (maximization of the natural value). Ecological intelligence is the concept that describes the ability to design and develop sustainable products or services. From the initial conception of the design, the carrying capacity of the ecosystems associated to its life cycle is considered, and hence the network of interactions is rendered environmentally friendly and beneficial to the environment and the agents involved. In this way, the lost environmental value on the planet can be regenerated. P7. Respect for and promotion of diversity (resilient design) Diversity (genes, organisms, populations, ecosystems) promotes resilience and robustness of the product and associated systems, thereby ensuring security in a changing world. Therefore, the variety belonging to the natural environment, and that of the technosphere, which is influenced by the manufacture, use and disposal of products, should not be adversely affected. This implies respecting and enhancing natural and technical diversity, and the avoidance of xenobiotic products and substances [27]. P8. Eco-effectiveness versus eco-efficiency (quality solutions). Eco-effectiveness is associated with quality solutions and directly addresses the concept of maintaining (or improving) the quality of resources and productivity through closed cycles. Rather than eliminating waste, we advocate a cyclic metabolism or a complete closure of material cycles (refuse is non existent) where waste of one system becomes a nutrient for other systems. This idea is taken from nature, where there is no waste and therefore all technical or biological cycles are closed (waste = nutrient). P9. Use of renewable energy (Exergy approach). The energy to sustain the metabolism of the food chains of biologicaland technical nutrients should preferably be obtained

from renewable sources, rather than through the exploitation of resources that provide energy from fossil fuels, which, for millennia, has devastated the areas where these materials have been processed [28]. 3.2. Material Flow Analysis (MFA) and Substance Flow Analysis (SFA) The paragraphs above state that the design of a product within the C2C context must deal with the concept of closed flows of materials and substances. This implies including an eco-systemic perspective in the design and development process where we will define new rules that convert waste materials into nutrients, in such a way that they are allowed to flow within two metabolic cycles, as shown in Fig. 2 (the biological cycle, associated with the naturesphere, and the technical cycle, associated with the technosphere). This is made possible thanks to MFA (material flow analysis) and SFA (substance flow analysis) with which we can simulate the principle of conservation of matter carried out by nature, where the flow of material, substance and energy is constant [7]. With these two methods, all flows within the life cycle of the product can be taken into account through Material Flow Accounting, (MFA). Interest in these two types of analyses due to their different trophic levels and to their associated energy, appears the moment when the aim becomes that of converting the flows generated by the industry into the metaphor of the trophic chain which is followed by natural ecosystems. To this end, materials must coexist exclusively in the two metabolic routes to achieve the closed nature of the cycles of flows (hence the name, from cradle to cradle). This eliminates the concept of waste, and renewable energy resources are chosen for the metabolic reactions. Within the C2C design, the following types of metabolism associated with the product have been established: Metabolism associated to Naturesphere. These are a result of processes linked to biological nutrients or biodegradable materials that could be metabolized and regenerated by nature. In this cycle, the material may return to the biosphere (lithosphere, hydrosphere and atmosphere) only as organic material (neither synthetic nor toxic). Discarding a chair made entirely from wood is one example. This metabolism is represented in Fig. 2, BN.

Figure 2 Cyclical Metabolisms in C2C (adapted from [17,21,24])



Metabolism associated to the Technosphere. This consists of technical materials and processes related with product life cycle, forming the set of technical nutrients to be metabolized by the Technosphere (in Fig. 2, TN), divided into: B1- Downcycling is the path where the materials lose quality and value. With use, their elimination or disposal is only postponed, and their destructive cycle is extended, for example, the manufacture of sheets for thermoformed, plastic lumber, pallets or thermal fillers from PET (polyethylene terephthalate) bottles. These having fulfilled their initial goal are incorporated into a process where they suffer a loss of certain properties, and can be exploited if they are used in the manufacture of only those products that do not require the features offered by the primary material. And B2 - Upcycling transforms the unused material or product, otherwise destined to become waste, into another material or product of equal or greater utility or value. These paths lead to materials of greater value, through being transformed into preferential materials in the eco-design of products and industrial ecology. An example can be found in the car industry [8], where once each vehicle becomes obsolete, many engine parts could easily be reused in other applications. Both the MFA and the SFA are carried out through the study of flow diagrams, material balance, and simulation processes of the values of environmental impact [9]. Their use enables us to also understand the metabolic flows described above and the energy inherent in the processes, the toxic substances and their flows within every phase of the life cycle (extraction, manufacturing, use, withdrawal, and end of life), currently included in the field of study of ecotoxicity. Clearly, the purpose of the design with C2C is to improve flows associated with the product in relation to the metabolic capacity of the planet: the Naturesphere and Technosphere. The C2C paradigm also describes the need to improve the metabolism of the Technosphere, through the implementation of industrial ecology systems within which the effective management of nutrients is carried out. Sets of intelligent materials are formed that enable upcycling, thereby leaving downcycling as obsolete, depending on the technology available and the development of new materials. For the preservation of metabolic routes in closed loop cycles, maintenance from energy flows is needed, which in turn requires a surplus of resource consumption and pollution generation for their production. In order to minimize this impact, the replacement of fossil fuels by renewable energy sources has become a priority. 3.3. Industrial ecology and cleaner production There are three types of industrial systems [10], which coexist today. Type I is the conventional and unsustainable system that was born in the industrial revolution and is about to become extinct. Those companies and organizations that have begun to become aware of sustainable development and environmental care are matched to type II. Finally, type III is not still considered by many systems (although some are running currently, such as the Eco-industrial Park in Kalundborg, or are under development, such as the Eco-industrial Park in Dallas Texas). There are many studies and research projects about industrial ecology’s dynamic

and models through which projects are being proposed to create industrial ecosystems and eco-industrial parks. Adopting this system with its characteristics offers companies the ability to minimize the environment impact and reduce production costs through an energy and resources efficiency plan. Its application prevents contamination, allows resource recovery and reconstruction of damaged resources in degraded ecosystems. Industrial ecosystems are a powerful economic tool, both for industry and close communities that may achieve benefit from park’s clean management . This has been made possible thanks to MGE2, which guides planning and productive industrial activity in order to achieve sustainability in manufacturing processes. MGE2 allows industrial plants to manage inflows and outflows efficiently. Good planning will optimize the manufacturing phase where industrial system quality will be strengthened. 3.4. Application Tools The C2C paradigm and framework have developed a set of techniques and individual tools, which have yet to be integrated into a model of product design and development. From among other survey techniques, possible applications could be found for biodegradable materials, analysis and assessment of material flows (MFA), substance flow analysis (SFA) [25], life cycle analysis (LCA), Sankey diagrams of energy balances, a study of biological and technical metabolic routes, the design of a closed nutrient cycle, exergy analysis, design and development of metabolisms of the product, product disassembly trees, chemical design, triple E strategy (or fractal pyramid), X-list, gray-list, Plist, eco-effectiveness, the rediscovery of environmental concepts, the five stages of redesign (no use of harmful pollutants, monitoring reports, positive passive list, active positive active list, and rediscovery or innovation), and bioinspired design [11]. The stages of product design and development are complex, non-intuitive, and fail to guarantee good environmental performance, all due to this situation of isolated techniques with no specific establishment for the way in which they should be applied. Based on C2C, a new design model (MGE2) is proposed in order to integrate paradigm principles and techniques in the design and development process. 3.5. Genomic Model of Eco-innovation and Eco-design MGE2. The aim of the proposal for the genomic model of design and development is to integrate the C2C paradigm, the material and substance flow analyses together with all aspects present in the analysis of the life cycle of products. To this end and through the achievement of a flow of adequate nutrients (simulating the trophic chains that living beings follow in their ecosystems), the objective is the incorporation into the products of a series of characteristics that designate their sustainability during manufacture and use, and at the end of their useful life, where they repeatedly restart the process as technical nutrients, thereby rendering them autopoietic [12], and self-healing.. The design requirements that MGE2 incorporates into products are defined in order to ensure eco-compatibility, by enabling integration of the nutrients into successive redesigns (new generations of


products), while taking into account the evolution of the associated ecosystem product (market, Technosphere, Naturesphere, etc.). The model reflects the requirements of complexity and flexibility of the new environments of the development known as PLM - Product Life Management [4], and is configurable in response to the complexity of the product, thereby forming an open reference architecture for concurrent, collaborative, and distributed engineering environments. Bearing these aspects in mind, the following dimensions are proposed for the characterization of the products:

Static dimension, which incorporates a sustainable character into the product (self-compatible), defined as:
- Autopoietic (self-regenerating): the product regenerates itself.
- Environmentally friendly: the different solutions are designed based on the capacity of the receiving environment.
- Metabolizable: the flows of substances and materials are designed in response to biological and technical metabolic processes.
- Systemic (holistic): different projective scenarios, cyclic interactions of the product, and metabolic flows generated in its life cycle are all studied, for both biological and technical nutrients.

Dynamic dimension, which determines the variations in the different generations of products, endowing them with an evolutionary character (resilient and robust), differentiated by:
- Natural selection (environmental pressure): resulting from the interaction of the genome (internal characteristics of the product) with the environment (which selects the best adapted), leading to the phenotype. This constitutes the learning factor between generations of the product.
- Recombination and mutation (the combination of two different genotypes): both correspond to the random processes of genetic transmission between generations and to strategies of hybridization between products in the company's portfolio.

In order to correctly analyse and synthesize these dimensions of the sustainable product with the appropriate connotations, a series of steps is defined that guide the

Figure 3 The Genomic Model of eco-innovation and eco-design; Source: The authors

whole design and development process and which render the product sustainable. Taking the states and evolution of natural systems as a reference, the process is divided into two sections: genotype, the stage of gestation of the product (design and development); and phenotype, which defines the associated system of the product (manufacturing, use, withdrawal and end of life, market, policy, legislation, and competence). The terms “genotype” and “phenotype” are characteristic of genetics, where the duality of organisms is represented. These have been chosen as an analogy to describe the internal characteristics of a product, or its “genes” (genotype) and its expression or interaction in a certain environment or "ecosystem" (phenotype). The two processes (genotype and phenotype) need a strategy (sustainable) that determines their evolution, and require constant analysis and interaction management, achieved with a Life-Cycle Analysis [13]. Hence the MGE2 has a five-fold structure: <MGE2>= <<ProductStrategy><Genotype><FoodChain><Phenotype> <LifeCycle Analysis>> < Product Strategy > (1) In this stage, the objectives are defined under C2C principles, which design or redesign a new product and manage its life cycle. These aims establish the product strategy of a systemic, autopoietic, ecoinnovative, eco-friendly, and metabolizable character. <Genotype> (2) Based on product strategy, various techniques and design tools associated with C2C are applied in order to establish the "genes" that define the materials (nutrients), metabolic routes, and the types of energy (possibly renewable) which will sustain the products from cradle to cradle. The genomic design of the product establishes the domains of needs, functions and concepts, and materialization. In these domains there is a series of techniques to ensure eco-innovation, eco-compatibility and metabolism [14]. The main tool for the assessment of design solutions that establish these "genes" consists of the basic strategies of eco-design, supported by biomimetic design strategies oriented towards ecoeffectiveness, all with the intention of obtaining the closed-loop cycle characteristics of the C2C perspective which are to be applied in each domain (view Fig. 5). <Food Chain> (3) After defining the "genes" that characterize the product, the product begins its phase of growth and development, that is to say that it is ready to be manufactured and that it will later become part of its associated system. At this stage, the work of other actors begins that will render the rest of their life cycles ecoeffective, with decisions on logistics or management of the end of life. These stages, considered in the design (genomics phase), can be completed and optimized at this point. Hence, in the design stage, certain decisions are made that ease this work and help to minimize the environmental impact once the network of relationships, which ensures the ecocompatibility and metabolism, are established. Therefore, it is necessary to conduct a study of possible interactions by considering two key elements: Naturesphere (6), which constitutes the environment where the industry extracted the natural resources and where the biological nutrients are returned; and Technosphere (7), as a means of attaining the flow of technical nutrients, which must be taken into account for the material flow analysis (MFA) and substance flow analysis (SFA).


Figure 4 Toolbox MGE2. Definition and Development of MGE2 Strategies and tools; Source: The authors

<Phenotype> (4). This phase constitutes the development of the analysis of the real and potential interactions of the product with the environment as an expression of its genes (genotype + environmental interactions = phenotype). The expected outcome of the product on the market (environment) is determined. This stage takes into account market analysis, legislation, user analysis, material resources available, traditions, forward and reverse logistics, stakeholder analysis, processes of the end of useful life, etc. The required performance of the product is analyzed under the C2C sustainability criteria, based on data obtained from the system into which the product will be integrated and associated.

<Life Cycle Analysis> (5). The objective of LCA is to determine the environmental impact that is associated with the phenotype of a product or with its new design (genotype), so that the impact can be considered within the product strategy. This tool is particularly significant in the model proposed since it pursues the qualitative and quantitative knowledge of the flows of materials, energy, emissions, and effluents and of their impact on the environment. LCA can be applied at various stages (on the phenotype, the product to be redesigned, or on the genomic design of a new product).
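Read as a data model, the five-fold structure <ProductStrategy><Genotype><FoodChain><Phenotype><LifeCycleAnalysis> could be sketched as follows. The sketch is purely illustrative; every field name and type is an assumption, not part of the published model.

```python
from dataclasses import dataclass, field

@dataclass
class MGE2Project:
    """Illustrative container for the five-fold MGE2 structure."""
    product_strategy: list[str]                # C2C objectives: systemic, autopoietic, ...
    genotype: dict[str, str]                   # "genes": materials, metabolic routes, energy
    food_chain: dict[str, list[str]]           # nutrient flows to naturesphere / technosphere
    phenotype: dict[str, str]                  # interactions: market, legislation, users, ...
    lca: dict[str, float] = field(default_factory=dict)  # impact indicators per life-cycle stage
```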

4. Verification and Recognition The MGE2 model is designed to be applied in all those design projects and product development with sustainability goals under C2C. Once the design and development process is completed, then verification and recognition of the work is performed by some of the existing eco-labels, with which the added environmental value is awarded to the products [13]. Thanks to the structure of the proposed model, the criteria demanded by eco-labelling schemes are included in the product once they have undergone the design stages. Together with the current ISO eco-labelling programs [15], a new certification system associated with C2C ecoeffectiveness [16] has been developed, administered by the authors of this new perspective and which differentiates products according to the sustainable objectives achieved, into qualifications of platinum, gold, silver, and basic. The criteria for the achievement of these labels range from the basic level (where the product inventory and strategy is valued), through to the silver level (achieved with a product of at least 50% reusable materials), via the C2C gold level (products consisting of 65% clean materials, production and energy), and finally reaching the Platinum level (which includes all the above requirements and also attains good water management in the life cycle). The structure of MGE2 allows all aspects required for the implementation of the


C2C-certified products to be taken into account, once the necessary tools to enable the estimated requirements to be met are incorporated. 5. Case study: implementation of MGE2 in the redesign of office chair The implementation of MGE2 is flexible, and varies according to the type of project, product complexity, and the proposed objectives. Therefore, the industry can either redesign an existing product or design a new one, adapting to different operating modes in the different stages of the model. That is to say, the MGE2 model offers flexibility of application and flexibility in the choice of techniques and tools destined for the attainment of C2C projects. The exact nature of this methodology is applied by way of an illustration, and defines the strategy and steps to follow in the redesign of an office chair [17,18] STAGE 1: Life cycle analysis of existing products: Life Cycle Analysis is an objective process for the evaluation of the environmental impact associated with a product, process or industrial activity by identifying and quantifying not only the use of material and energy, but also the emissions to the environment, both in order to determine the impact of resource use and emissions, and to evaluate and implement environmental improvement strategies [19]. The study covers the entire life cycle, taking into account all stages from the cradle to the grave of the product involved. For this case study, by following MGE2 and thanks to the study of material and substance flow, this LCA now covers the life cycle from the cradle to the cradle. The main objective of this tool within the model is to determine the environmental impact of product performance in relation to the 3E vectors. From this stage, improvements are established that determine the product strategy. The LCA results obtained are shown in Fig. 8 in the final section. STAGE 2: Establishment of product strategy under C2C: Once the LCA data and possible applicable improvements are known, then a systemic, autopoietic, environmentally

Figure 5 Application of the 3E product pyramid; Source: The authors

compatible and metabolizable strategy of the product is established through the exploration of the value and innovation of the 3E pyramid, which constitutes the basic tool for ecoinnovation. The generation of the set of 3E values enables the establishment of the premise that defines the product strategy; this situation in turn enables the parameterization of the environment of the generic design, through techniques and tools for this particular project. This strategy focuses on: (1) Systemic (or holistic) integration for bio-inspired design; the different scenarios of the chair throughout its life cycle are considered in this phase with the aim of promoting and equitably integrating all three aspects of the 3E pyramid. (2) Sustainable and eco-friendly. Improvement of the metabolism by decreasing the ratios of environmental impact on the Naturesphere, in order to minimize the impact on the environment (or rendering the ecological footprint assimilable). This is achieved by increasing the flows of material on the Technosphere by means of upcycling (minimizing those flows related to infra-recycling as much as possible), eliminating possible toxic or polluting substances with the incorporation of innovations from green or sustainable chemicals. Finally, features that are cooperative with Naturesphere are incorporated, thereby creating environmental value. (3) An autopoietic character is obtained by taking the concept of "genetic intelligence" into account and by supplying it to the product with the aim of facilitating tasks of use, manufacturing, logistics (forward and reverse), and of its regeneration at the end of the life cycle. In a particular way, this intelligence incorporates innovation into phenotypic interactions, thereby enabling regeneration, reuse, and recovery in successive generations of products. STAGE 3: Design and Development of the product genotype. Study of the Phenotype and associated criteria: Once the data of LCA, the phenotype required, and the product strategy are all determined, then the redesign is performed under the principles of C2C. To this end, the genomic design is carried out, with the detailed study of each domain, (need, functional, conceptual, and materialization), and in the fields of Eco-innovation, Eco-compatibility, and Metabolization [20]. In each domain, a series of individual eco-design strategies is applied, as are the tools necessary for the definition of all the requirements that render the product sustainable. STAGE 4: Validation of genotype and phenotype optimization: Concurrent with the previous stage, the verification and validation of the genomic design of the product is performed based on the requirement demanded by the system that it be associated with the life cycle of the product, which in turn determines the procedures for interaction of the genome with the environment. This results in the initial phenotype which develops and optimizes over the life cycle and those of successive generations of products. The stages of manufacture and of the end of useful life hold special interest, since the metabolic pathways and biological and technical nutrients are investigated and defined, and fix the associated processes from the disassembly diagram of the genotype. The final solution is the ECOS office chair [17,18]. STAGE 5: New LCA of the product with the objective of Environmental Product Declaration C2C. The last stage in the implementation of the MGE2 model


corresponds to the completion of a new LCA of the redesign of the product. The objective of this analysis is the confirmation of the proposed improvements in the design process and the knowledge of all the information necessary in order to obtain an eco-label for the product from any of the existing certification programs (including C2C certification). In the case study carried out, the second application of LCA verifies whether the product meets the criteria to qualify for C2C certification [16]. The results awarded a GOLD eco-label to the chair. All the information is collected for the writing of the Environmental Product Declaration in which the environmental information of the chair is presented. It is also quantified throughout its life cycle to allow for its comparison with other products that fulfil the same function, all of which cause a greater impact on the environment. 6. Results and Discussion Fig. 6 presents the results of the MFA, SFA, and LCA, as well as the properties and characteristics of the flows of materials, together with a comparison where the results and impact reduction are described. The final synthesis is summarized in this figure, which reflects the characteristics that render the product sustainable under the C2C paradigm, since they fit the criteria required to achieve the aforementioned certification.The MGE2 model can be used for the design and development of any product and system; data relating to genotype and phenotype, i.e., all that is required are the inputs and outputs of each processes involved in the life cycle stages, properly defined in the system limits of LCA process. The MGE2 model provides an appropriate structure for design and product development from C2C paradigm, unlike other models proposed in the literature which do not contemplate an organization of actuation from the Triple Bottom Line perspective and do not integrate quantitative tools and methodologies that facilitate the application of the C2C principles, such as the Material Flow Analysis (MFA), Substance Flow Analysis (SFA) or life cycle assessment (LCA). Furthermore, the MGE2 model considers the different routes of product certifications. The MGE2 model is currently under validation, conducted by the company ESINOR Instalaciones for energy system design and development. Intellectual exploitation rights were given up in order to verify its application in the industrial sector. The development of the entire life cycle stage is a specific framework to achieve sustainable integration in the design and product development from 3E perspective (economic, ecological and social). For this reason, the scope of this model is open to future updates. We are currently working on the detailed design process of individual product life cycle stages. It is in the manufacturing stage that different methods of design and development of sustainable manufacturing processes are being considered, from new paradigms such as Green manufacturing or Clean Production [29] or the Materialization Domain that is being completed with the study of industrial product metabolism.

Figure 6. Summary of the life cycle analysis and criteria for C2C certification of the product. Source: The authors.

7. Conclusion

The work presented provides previously unexplored joint epistemological ideas of the C2C perspective and a model of design and development that introduces an articulated form of implementing the basics of this new design paradigm. By compiling lessons learned from the approaches of eco-efficiency and eco-innovation, we develop a new bio-inspired architecture for the process of eco-efficient design and development under the C2C principles, known as MGE2. This model can support all regulatory requirements to date, and can extend the range of solutions in order to improve impact-minimization performance and to solve the current environmental problems caused by industry worldwide. Future work is aimed at developing design and development environments with PLM tools and CAD systems that provide computational support for the MGE2 model, by validating and verifying information and offering feedback. This work proposes a methodology to design sustainable products and systems within the C2C paradigm. Its main objective is to optimize the interactions of any system with the environment, taking into account the three sustainable dimensions: economy, ecology and equity (3E). The Genomic Model of Eco-innovation and Eco-design is a methodology for sustainable product design and development. It can help engineers to make their products, systems or activities operate internally in line with their associated system. The model guides decision-making processes with both qualitative and quantitative assessments




within a set of strategies included in the best available techniques, combining new and eco-innovative methodologies with traditional, experienced technologies, and obtaining broad effectiveness in managing product life cycles.

References

[1] McDonough, W. and Braungart, M., Cradle to Cradle: Remaking the way we make things. North Point Press, New York, 2002.
[2] Beskow, C. and Ritzén, S., Performing changes in product development: A framework with keys for industrial application. Research in Engineering Design, 12 (3), pp. 172-190, 2000. DOI: 10.1007/s001630050032
[3] McDonough, W., Braungart, M., Anastas, P.T. and Zimmerman, J.B., Applying the principles of green engineering to Cradle-to-Cradle design. Environmental Science & Technology, 37, pp. 434A-441A, 2003. DOI: 10.1021/es0326322
[4] Sudarsan, R., Fenves, S.J., Sriram, R.D. and Wang, F., A product information modeling framework for product lifecycle management. Computer-Aided Design, 37 (13), pp. 1399-1411, 2005. DOI: 10.1016/j.cad.2005.02.010
[5] Braungart, M., McDonough, W. and Bollinger, A., Cradle-to-cradle design: Creating healthy emissions - a strategy for eco-effective product and system design. Journal of Cleaner Production, 15 (13-14), pp. 1337-1348, 2007. DOI: 10.1016/j.jclepro.2006.08.003
[6] Benyus, J.M., Biomimicry: Innovation inspired by nature. Perennial/HarperCollins, New York, 2002.
[7] Bouman, M., Heijungs, R., Voet, E., Bergh, J. and Huppes, G., Material flows and economic models: An analytical comparison of SFA, LCA and partial equilibrium models. Ecological Economics, 32 (2), pp. 195-216, 2000. DOI: 10.1016/S0921-8009(99)00091-9
[8] Brissaud, D. and Mathieux, F., End-of-life product-specific material flow analysis: Application to aluminum coming from end-of-life commercial vehicles in Europe. Resources, Conservation and Recycling, 55 (2), pp. 92-105, 2010. DOI: 10.1016/j.resconrec.2010.07.006
[9] El-Haggar, S., Sustainable industrial design and waste management: Cradle-to-Cradle for sustainable development. Elsevier Academic Press, London, 2007.
[10] Themelis, N.J., E4001:001 - Industrial Ecology of Earth Resources. Columbia University, 2000.
[11] Byggeth, S. and Hochschorner, E., Handling trade-offs in ecodesign tools for sustainable product development and procurement. Journal of Cleaner Production, 14 (15-16), pp. 1420-1430, 2006. DOI: 10.1016/j.jclepro.2005.03.024
[12] Varela, F.G., Maturana, H.R. and Uribe, R., Autopoiesis: The organization of living systems, its characterization and a model. BioSystems, 5 (4), pp. 187-196, 1974. DOI: 10.1016/0303-2647(74)90031-8
[13] Finster, M., Eagan, P. and Hussey, D., Linking industrial ecology with business strategy: Creating value for green product design. Journal of Industrial Ecology, 5 (3), pp. 107-125, 2002. DOI: 10.1162/108819801760049495
[14] Lofthouse, V., Ecodesign tools for designers: Defining the requirements. Journal of Cleaner Production, 14 (15-16), pp. 1386-1395, 2006. DOI: 10.1016/j.jclepro.2005.11.013
[15] Ball, J., Can ISO 14000 and eco-labelling turn the construction industry green? Building and Environment, 37 (4), pp. 421-428, 2002. DOI: 10.1016/S0360-1323(01)00031-2
[16] MBDC, Certification. [Online], [date of reference: July 2014]. Available at: www.mbdc.com/detail.aspx?linkid=2&sublink=8
[17] Aguayo, F., Peralta, M.E., Lama, J.R. and Soltero, V., Ecodiseño: Ingeniería sostenible de la cuna a la cuna. RC Libros, 2011.
[18] Peralta, M.E., Aguayo, F. and Lama, J.R., Sustainable engineering based on the cradle to cradle model: An open architectural reference for C2C design. Dyna Ingeniería e Industria, 86 (2), pp. 199-211, 2011. DOI: 10.6036/3873
[19] Heijungs, R., Huppes, G. and Guinée, J.B., Life cycle assessment and sustainability analysis of products, materials and technologies: Toward a scientific framework for sustainability life cycle analysis. Polymer Degradation and Stability, 95 (3), pp. 422-428, 2010. DOI: 10.1016/j.polymdegradstab.2009.11.010
[20] Luttropp, C. and Lagerstedt, J., EcoDesign and The Ten Golden Rules: Generic advice for merging environmental aspects into product development. Journal of Cleaner Production, 14 (15-16), pp. 1396-1408, 2006. DOI: 10.1016/j.jclepro.2005.11.022
[21] Laan, T.K., Cradle to Cradle as a sustainable building strategy for Schiphol Real Estate. PhD Thesis, Delft University of Technology, The Netherlands, 2009.
[22] Bollinger, L.A., Growing cradle-to-cradle metal flow systems: An application of agent-based modeling and system dynamics to the study of global flows of metals in mobile phones. PhD Thesis, Delft University of Technology, The Netherlands, 2010.
[23] European Regional Development Fund, [Online], [date of reference: September 2013]. Available at: www.c2cn.eu
[24] Geldermans, R.J., Cradle to Cradibility: Two material cycles and the challenges of closed loops in construction. PhD Thesis, Delft University of Technology, The Netherlands, 2009.
[25] Hansen, E. and Lassen, C., Experience with the use of substance flow analysis in Denmark. Journal of Industrial Ecology, 6 (3-4), pp. 201-219, 2003. DOI: 10.1162/108819802766269601
[26] McDonough, W. and Braungart, M., Design for the triple top line: New tools for sustainable commerce. Corporate Environmental Strategy, 9 (3), pp. 251-258, 2002. DOI: 10.1016/S1066-7938(02)00069-6
[27] Fraguela, J.A., Carral, L., Iglesias, G. and Sánchez, M., The path to excellence: A management strategy based on people. DYNA, 77 (161), pp. 21-29, 2010. ISSN 0012-7353
[28] Espi, J.A. and Alan, S., The scarcity-abundance relationship of mineral resources: Introducing some sustainable aspects. DYNA, 77 (161), pp. 21-29, 2010. ISSN 0012-7353
[29] Peralta, M.E., Marcos, M. and Aguayo, F., Sostenibilidad en la fabricación industrial: Horizonte 2020 para los sistemas de fabricación inteligente. Jornadas Predoctorales, Universidad de Cádiz, España, Diciembre 2013.

M.E. Peralta-Álvarez, holds a BSc. Eng. in Industrial Design and an MSc. in Environmental Engineering, and is a PhD student in Manufacturing and Environmental Engineering. She works as a professor in the Department of Design Engineering and Engineering Projects at the University of Seville, Spain.

F. Aguayo-González, is a BSc. and PhD Industrial Engineer, and holds a BSc. in Psychology, a degree in Computer Science Engineering, and an MSc. in Quality, Environment, Security and Health. He is a professor in the Department of Design Engineering and Engineering Projects at the University of Seville, Spain. His field of knowledge is engineering projects, and he has worked as a project manager of engineering projects.

J.R. Lama-Ruiz, holds a BSc. Eng. and an MSc. in Electronic Engineering, and is a PhD student in Manufacturing Engineering. He worked as a project manager in industrial automation and intelligent systems. He is a professor in the Department of Design Engineering and Engineering Projects at the University of Seville, Spain.

M.J. Ávila-Gutiérrez, holds a BSc. Eng. in Industrial Design and an MSc. in Design and Development of Products and Industrial Installations, and is a PhD student in holonic architectures for manufacturing systems. She worked for consulting companies in the aeronautical sector. She works as a professor in the Department of Design Engineering and Engineering Projects at the University of Seville, Spain.



CrN coatings deposited by magnetron sputtering: Mechanical and tribological properties

Alexander Ruden-Muñoz a, Elisabeth Restrepo-Parra b & Federico Sequeda c

a Departamento de Matemáticas, Universidad Tecnológica de Pereira, Pereira, Colombia. arudenm@gmail.co
b Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Colombia, Manizales, Colombia. erestrepopa@unal.edu.co
c Laboratorio RDAI, Escuela de Ingeniería de Materiales, Universidad del Valle, Cali, Colombia. fsequeda@yahoo.com

Received: May 3rd, 2014. Received in revised form: January 16th, 2015. Accepted: March 18th, 2015.

Abstract
Mechanical and tribological properties of CrN coatings grown on AISI 304 and AISI 4140 steel substrates using the magnetron sputtering technique were analyzed. Coatings were grown at two working pressures, 0.4 and 4.0 Pa. The films grown on AISI 304 at a working pressure of 0.4 Pa showed the highest hardness because they presented a smaller grain size and lower roughness. For CrN synthesized at 0.4 Pa, the surface damage was lower during the tribological test. Adherence studies were also carried out, obtaining Lc1 and Lc2 for coatings produced at both pressures and on both substrates. Better adherence behavior was observed for films grown at low pressure because these films were thicker (~890 nm).
Keywords: hardness, tribology, critical load, adherence.

Recubrimientos de CrN depositados por pulverización catódica con magnetrón: Propiedades mecánicas y tribológicas

Resumen
Se analizaron las propiedades mecánicas y tribológicas de recubrimientos de CrN crecidos sobre sustratos de aceros AISI 304 y AISI 4140 usando la técnica de pulverización catódica con magnetrón. Los recubrimientos fueron crecidos a dos presiones de trabajo, 0.4 y 4.0 Pa. Las películas crecidas sobre acero AISI 304 a 0.4 Pa mostraron la dureza más alta debido a que presentan menor tamaño de grano y baja rugosidad. Para los recubrimientos sintetizados a 0.4 Pa, el daño superficial fue bajo durante la prueba tribológica. Se realizaron estudios de adherencia, obteniéndose Lc1 y Lc2 para los recubrimientos producidos con ambas presiones y en ambos sustratos. Se observó una mejor adherencia en las películas crecidas a baja presión debido a su mayor espesor (~890 nm).
Palabras clave: dureza, tribología, carga crítica, adherencia.

1. Introduction Over the past decades, hard coatings produced by physical vapor deposition have been successfully applied to molds, punches, cutting tools and other machine parts to increase their lifetime. The current interest in hard coatings has focused on nanostructured coatings with high hardness [1-3]. One of the most interesting materials is chromium nitride (CrNx) produced as thin films. CrN has been used extensively in industry due to its excellent mechanical properties [4,5], corrosion resistance [6,7] and excellent wear

behavior [8,9]. CrNx coatings are found in cutlery and in parts used in the aeronautical and textile industries, as well as in cutting tools and dies. In recent years, CrNx films have also replaced hard chromium in specific applications in the automotive industry [10]. Among the advantages of CrNx coatings are a small coefficient of friction, low internal stresses and the possibility of producing thick coatings. Due to its exceptional abrasion resistance, chromium nitride is used as a coating in cutting, milling and screw-threading tools for titanium and its alloys, brass, copper and other non-ferrous metals, as well as in molds, punches and machine parts. Chromium nitride shows

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 147-155. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.43292



high chemical resistance and exceptionally low affinity to machined non-ferrous metals [11,12]. Regarding their tribological behavior, CrN coatings have been reported to exhibit improved wear resistance under dry and lubricated conditions when compared to TiN films [13]. Moreover, CrN coatings exhibit sufficient thermal stability and wear and corrosion resistance, and are considered to be a promising hard coating material [14,15] in addition to conventionally used TiN. CrN hard coatings are widely used due to their excellent mechanical properties, which make them useful in a wide variety of industrial applications. However, in spite of their excellent mechanical and tribological properties, their corrosion resistance has always been conditioned by the presence of structural defects such as pores, pinholes and cracks that appear during application [16-18]. The presence of these defects is a key factor that influences the integrity of such coatings, not only in terms of corrosion resistance but also of tribological properties [19]. Moreover, when prepared as a nanocomposite combined with other materials, such as WS2, CrN can be used as a solid lubricant coating [20]. In this work, a study on the mechanical and tribological properties of CrN coatings grown at two pressures on AISI 304 and AISI 4140 substrates was performed. Coatings were produced by the magnetron sputtering technique. Mechanical properties were analyzed by using the nanoindentation technique, and tribological behavior was studied by the ball-on-disk method.

2. Experimental setup

A DC magnetron sputtering system located in a clean room was employed, and the base vacuum pressure was 5x10-7 Pa. Coatings were grown by using a Cr target (99.99%) at room temperature, a bias voltage of -300 V, an inter-electrode distance of 10 cm and two working pressures (P1 = 0.4 Pa and P2 = 4 Pa), using a gas mixture of N2-Ar (1.5:10 sccm) in both cases; the pressure and the flow ratios were monitored and remained constant during the entire experiment. The power of the system was 8 W/m2, applied for 90 min. Two substrates were chosen to produce the coatings. Austenitic AISI 304 steel has one of the highest corrosion resistances among Cr-Ni (carbon-stabilized) based steels when exposed to harmful environments [21]. AISI-SAE 4140 is a Cr-Mo alloyed steel with high stability up to 400°C, suitable for resisting stress and torsion [22]. The sample dimensions were a diameter of 1.25 cm and a thickness of 4 mm. Samples were polished using silicon carbide abrasive paper. The samples were then deep-cleaned in an ultrasonic bath in acetone for 15 min to eliminate oil, dust and any other contaminants. XRD analysis was carried out using a D8 Bruker AXS diffractometer with the Rigaku Ultima III software and Cu Kα radiation (λ = 0.1540 nm). The hardness and elastic moduli were obtained using a NANOVEA nanoindenter (module IBIS - Technology), employing the traditional Oliver and Pharr method [23] to fit the unloading curve, with a load resolution of 0.08 µN. Nanoindentations were carried out at low (Lo), medium (Me) and high (Hi) loads (Lo: 0.01 - 0.4

mN; Me: 0.41 - 1 mN and Hi: 1.1 - 10 mN), obtaining hardness and elastic modulus profiles as a function of depth. An ideal load of 1 mN was determined so that the maximum indentation depth did not exceed 10% of the coating thickness. The phenomenon known as the indentation size effect (ISE) usually involves a decrease in the measured apparent hardness with increasing applied test load, i.e., with increasing indentation size. The existence of the ISE suggests that, if hardness is used as a material selection criterion, it is clearly insufficient to quote a single hardness number [24]. Nanoindentation tests were performed using a pyramidal Berkovich indenter coupled to a Fischer-Cripps Laboratories IBIS Nanoindentation Tester and a displacement control with a compliance of 0.00035 µm/mN. The IBIS software was used to control the indentation and for correction and analysis of the results. A ball-on-disk (BOD) CSEM tribometer was used to determine the wear and coefficient of friction (COF) of the coatings. A spherical counterpart of alumina (Al2O3) with a diameter of 6 mm, a normal load of 1 N, a sliding distance of 100 m, a velocity of 10 m/s and a data acquisition rate of 2 Hz was used. The COF was obtained by using the XTribo 2.5 software program. To calculate the thickness, roughness and wear of the samples, an XP-2 AMBIOS profilometer was employed. The wear rate measurements were carried out using the cross section of the wear track after the BOD test and the Archard model, which proposes that the wear coefficient is directly proportional to the wear volume and inversely proportional to the normal applied load and the sliding distance. A scratch test for measuring the adherence was carried out with a Micro Test instrument using a Rockwell C indenter with a radius of 200 μm, a variable load between 0 and 100 N, an applied load velocity of 1 N/s, a distance of 6 mm and a displacement velocity of 4.5 mm/min. The software of the equipment allows for the calculation of COF versus load and distance, which is used to obtain the critical load (Lc) for coating failure and thus to determine the failure type (adhesive or cohesive).

3. Results and Discussion

3.1. Structure and Morphology

Figure 1. X-ray diffraction patterns of coatings grown at 0.4 Pa, presenting crystallographic structure of CrN - FCC (α-CrN). Source: The authors.



Table 1. Thickness and roughness of CrN coatings grown on AISI 304 and AISI 4140 at 0.4 and 4 Pa.

System | Thickness at 4 Pa (nm) | Thickness at 0.4 Pa (nm) | Roughness Ra at 4 Pa (nm) | Roughness Ra at 0.4 Pa (nm)
CrN/304 | 549±19 | 896±2 | 65±3 | 41±0.3
CrN/4140 | 583±8 | 890±1 | 67±2 | 43±0.8

Source: The authors.

An XRD study of the CrN films was carried out at an Ar-N2 flux ratio of 10:1.5 sccm. Fig. 1 shows the XRD results for the CrN coatings grown at 0.4 Pa. The diffraction results show peaks corresponding to the (111), (220), (200), (311) and (222) planes of the FCC phase. These orientations are in agreement with ICDD JCPDS card 00-011-0065 for CrN, and the results also agree with studies carried out by Hones et al. [25]. The diffraction patterns show a preferential orientation in the (111) direction, characteristic of the FCC CrN phase [26,27]. Peak identification was carried out by using the crystalline structure database of the ICSD [28]. Using profilometry analysis, the thicknesses of the coatings deposited on AISI 304 and AISI 4140 at 0.4 and 4 Pa were obtained, and the values are listed in Table 1. The coating thickness was observed to depend not only on the substrate but also on the working pressure. Thinner coatings were obtained when grown at a pressure of 4 Pa, because of the increase in the number of collisions between molecules during deposition. This increase forces atoms to lose energy before arriving at the substrate, decreasing the island coalescence process and generating high porosity [29]. At lower pressures, nucleation is enhanced, producing densified and uniform coatings [30]. No strong influence of the substrate material on the coating thickness was observed. Table 1 also shows the coatings' roughness. At a higher pressure, the roughness increases due to a greater number of collisions in the plasma, which reduces the atomic energy and the mobility of surface adatoms; again, the coalescence process is not favored. Moreover, the ionization degree (also enhanced by a great number of collisions) [31] allows for a greater range of atomic energies, increasing the probability of forming several structures such as Cr, Cr2N and CrN (nanocomposite), with some regions denser than others [32]. At a lower pressure, atoms arrive with higher energy, enhancing the coalescence process. According to Zhao et al. [33], as the pressure increases, the deposition rate decreases; thus, the energy and the number of negative particles decrease because of the increased dispersion of atoms. Conversely, at lower pressures (less than 10 mTorr), the deposition rate is affected by the strong bombardment of negatively charged particles, producing denser and low-roughness films, as was the case for the films grown at 0.4 Pa. At intermediate pressures (between 10 and 30 mTorr), the bombardment of negatively charged particles is not sufficiently high to reduce the surface roughness, due to the deposition rate. Finally, at extremely high pressures (greater than 80 mTorr) and low deposition rates, new species arriving at the surface have more time to relax into configurations with lower energy, allowing for the formation of low-roughness films.
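As a quick cross-check of the peak assignment discussed above, Bragg's law reproduces the expected FCC CrN peak positions; the sketch below assumes the JCPDS 00-011-0065 lattice constant (a ~ 0.414 nm) and the Cu Kα wavelength given in section 2:

```python
import math

# Bragg-law peak positions for FCC CrN (assumed a = 0.414 nm, Cu K-alpha).
a, lam = 0.414, 0.1540                               # nm
for h, k, l in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1), (2, 2, 2)]:
    d = a / math.sqrt(h**2 + k**2 + l**2)            # interplanar spacing (nm)
    two_theta = 2 * math.degrees(math.asin(lam / (2 * d)))
    print(f"({h}{k}{l}): 2-theta ~ {two_theta:.1f} deg")  # (111) ~ 37.6 deg
```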

3.2. Mechanical properties

Mechanical tests were carried out on AISI 304 and AISI 4140 substrates with and without CrN coatings deposited at both working pressures. The Oliver and Pharr method was employed [34,35], using load-unload curves to determine the materials' hardness (H) and Young's moduli (E). Fig. 2 shows the load-unload curves for the substrates with and without CrN coatings deposited at 0.4 and 4 Pa. The substrates presented a strong elastic behavior, according to the wide range of deformation with poor contact stiffness they experienced [36]. Conversely, when the substrates were coated with CrN at the two different pressures, the nanoindentation curves indicated elasto-plastic behavior, and there was no piling-up around the indenter [37]. Table 2 shows the H, E and shear resistance (H3/E2) values for the coatings grown on AISI 304 and AISI 4140 at both working pressures. The H value of the coatings grown at 0.4 Pa is on the order of 26 GPa, demonstrating that the hardness is independent of the substrate type if the penetration depth is small enough to avoid the indentation size effect [24], which is considered to be the case at 10% of the film thickness [38]. Conversely, the coatings deposited at 4 Pa exhibited lower hardness (21 GPa) than those grown at 0.4 Pa. By invoking the Hall-Petch effect [39,40], the coating hardness was related to the grain size and thereby to the roughness. Coatings presenting lower grain size and roughness (the sample grown at 0.4 Pa) presented higher hardness because they exhibited better surface properties. According to Ward and

Figure 2. Load and unload curves for uncoated steel substrates and those coated with CrN grown at 0.4 and 4 Pa. Source: The authors.

Table 2. Values of hardness, Young's moduli and plasticity index of coatings grown on AISI 304 and AISI 4140 at 0.4 and 4 Pa.

Material | H (GPa) | E (GPa) | H3/E2 (GPa)
AISI 304 | 5.6±0.5 | 243.5±0.7 | 0.00302
AISI 4140 | 5.8±0.3 | 283.9±0.4 | 0.00242
0.4 Pa:
AISI 4140/CrN | 26.7±0.3 | 343.1±0.3 | 0.16145
AISI 304/CrN | 26.6±0.4 | 358.70±0.9 | 0.14595
4 Pa:
AISI 4140/CrN | 21.8±0.1 | 325.8±0.8 | 0.09773
AISI 304/CrN | 21.1±0.8 | 331.7±0.6 | 0.08538

Source: The authors.
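For readers who want to reproduce the kind of numbers reported in Table 2, a minimal sketch of the Oliver-Pharr evaluation follows; it assumes an ideal Berkovich area function (A = 24.5*hc^2) and ε = 0.75, and the load, stiffness and depth inputs are illustrative rather than the measured data of this work:

```python
import math

# Minimal Oliver-Pharr sketch; inputs: P in mN, stiffness S in mN/um, depth in um.
def oliver_pharr(p_max, s, h_max):
    h_c = h_max - 0.75 * p_max / s        # contact depth (um), eps = 0.75
    area = 24.5 * h_c ** 2                # projected Berkovich contact area (um^2)
    hardness = p_max / area               # mN/um^2 is numerically GPa
    e_reduced = 0.5 * math.sqrt(math.pi) * s / math.sqrt(area)
    return hardness, e_reduced

h, er = oliver_pharr(p_max=1.0, s=50.0, h_max=0.055)
print(f"H = {h:.1f} GPa, Er = {er:.0f} GPa")     # H ~ 25.5 GPa for these inputs

# Cross-check of the shear-resistance index using Table 2 (AISI 4140/CrN, 0.4 Pa):
print(round(26.7 ** 3 / 343.1 ** 2, 4))          # ~0.1617 vs. 0.16145 reported
```

The small difference in the last check comes from rounding H and E to the figures shown in the table.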




Datta [41], the hardness of Nb coatings is strongly related to transitions in failure modes when such films are submitted to scratch tests and to morphological modifications. In this study, high-density coatings were produced, with the coatings grown at 0.4 Pa exhibiting smaller grains and lower roughness along with increased hardness. By contrast, the coatings grown at 4 Pa presented high porosity with large grains and high roughness. Regarding the elasticity, several changes were observed as a function of the substrate type and the pressure. An increase in the shear resistance at the low working pressure (0.4 Pa) occurred because the probability of island coalescence was greater; thus, the coatings presented a preferential orientation along the (111) crystallographic direction, which is the direction of the most densely packed planes, thereby increasing the surface hardness under plastic deformation [42]. However, in the coatings deposited at 4 Pa, the lower degree of coalescence led to low crystallographic texture and higher polycrystallinity, increasing the number of grain boundaries due to the different formation energies [43].

3.3. Tribological analysis

Figs. 3 and 4 present the results of the coefficient of friction (COF) analysis of the CrN films. From this analysis, the coefficient of wear was obtained by the profilometry

Figure 3. COF vs. distance for AISI 304/CrN grown at 0.4 and 4 Pa. Micrographs (50x) of the wear tracks caused by the ball-on-disk test. Source: The authors.

Figure 4. COF vs. distance for AISI 4140/CrN grown at 0.4 and 4 Pa. Micrographs (50x) of the wear tracks caused by the ball-on-disk test. Source: The authors.

Table 3. Values of COF, wear rate, Lc1 and Lc2 of coatings grown on AISI 304 and AISI 4140 at 0.4 and 4 Pa.

Material | Average COF µ (adim.) | Wear rate k (mm3/N-m) | Cohesive Lc1 (N) | Adhesive Lc2 (N)
AISI 4140 | 0.987±0.009 | 1.9814x10-15 | - | -
AISI 304 | 0.955±0.005 | 1.0293x10-15 | - | -
Pressure 0.4 Pa:
4140/CrN | 0.653±0.004 | 2.4419x10-18 | 8 | 19
304/CrN | 0.639±0.005 | 1.0919x10-18 | 16 | 35
Pressure 4 Pa:
4140/CrN | 0.892±0.006 | 3.6886x10-16 | 5 | 11
304/CrN | 0.813±0.002 | 2.9270x10-16 | 10 | 15

Source: The authors.

technique. Both substrates, AISI 304 and AISI 4140, showed high COFs because of the formation of abrasive wear particles; moreover, micro-welding was observed during the ball-on-disk process [44]. In most of the cases presented here, the friction fluctuated during the initial ramping. This was due to the interaction of the coatings and the ball material. Initially, µ increased to a peak value and then gradually decreased to a lower, stable value. The high initial COF was due to the adhesive wear that took place between the ball and the sample, which led to the transfer of material from the ball to the coating surface. The friction decreased as the material transfer from the ball stabilized, after a large area of the coated sample was covered with the transferred material. After a certain period, this transferred material covered the entire test area, and abrasive wear took place. The stabilized coefficient of friction increased after some time as the test progressed. In the steady state, the transferred material was worn out slowly by abrasive removal as the wear test progressed [45]. In Table 3, the COFs of the coatings studied in this work are presented. The CrN/304 and CrN/4140 systems grown at 4.0 Pa showed a rapid increase in the COF. This increase occurred because the particles could not anchor themselves, generating an abrasive system composed of particles that were hardened and plastically deformed. Moreover, rapid delamination occurred due to poor island coalescence. Conversely, the coatings produced at 0.4 Pa on both substrates presented stable COFs with values on the order of 0.646. Similar results have been reported by Cai et al. [46] for α-CrN. Micrographs of the wear tracks (50x) for each tribological process show the surface damage following the wear analysis. The steel substrates exhibited abrasive damage and micro-welding, while the coatings deposited at 4 Pa showed tribological damage with surface fatigue. This damage was caused by poor island coalescence, producing high porosity, delamination and the fracture of asperities, which produced plowing and greater surface damage due to plastic deformation. For the CrN synthesized at 0.4 Pa, the surface damage was lower because of the high densification of the film, caused by high adatom mobility and lower roughness. The surface damage produced by the




tribological test consisted of surface plowing [47]. In Table 3, the coefficient of wear, or wear rate k, measured by using the cross section of the wear track (profilometry analysis) obtained by the ball-on-disk test, is shown. The coefficients were calculated by considering the wear volume V, obtained using the theorem of Pappus with the track treated as a solid of revolution (eq. 1), and the formulation of Archard (eq. 2).

$V = A\,\bar{h}$    (1)

where A is the apparent contact area of the wear track and h̄ is the average height corresponding to the material removed.

$k = \dfrac{V\,H}{P\,L}$    (2)

with L being the sliding distance, P the applied load and H the hardness. All systems presented abrasive characteristics. When CrN coatings were produced at 0.4 Pa, the wear rate decreased, presenting damage up to 10000 times lower than that observed for the uncoated substrates (AISI 304 and AISI 4140). However, for the coatings grown at 4 Pa, the wear rate decreased by approximately 1000 times.

3.4. Adherence analysis - Scratch test

Figure 6. Scratch test results for AISI 4140/CrN coatings grown at 0.4 and 4 Pa. Source: The authors.
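A small numerical sketch of eqs. (1)-(2) may be useful; the track geometry below is an assumed example, not the measurements behind Table 3:

```python
# Illustrative wear-rate calculation following eqs. (1)-(2); the area and
# height values are assumptions, not the measurements of this work.
A = 1.2e-2      # apparent contact area of the wear track (mm^2), assumed
h_bar = 2.5e-5  # average removal height from profilometry (mm), assumed
P = 1.0         # normal BOD load (N)
L = 100.0       # sliding distance (m)

V = A * h_bar             # eq. (1): wear volume (mm^3)
k = V / (P * L)           # specific wear rate in mm^3/(N*m), as tabulated
print(f"V = {V:.2e} mm^3, k = {k:.2e} mm^3/N-m")
# eq. (2) additionally scales by the hardness H, giving the dimensionless
# Archard coefficient K = V*H/(P*L).
```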

Figure 5. Scratch test results for AISI 304/CrN coatings grown at 0.4 and 4 Pa. Source: The authors.

Figs. 5 and 6 show the results of the scratch test analysis of all of the samples studied in this work. In these figures, the critical loads (in N) of the Lc1 and Lc2 failures, respectively, are presented.
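Although the Micro Test software computes the critical loads internally, one simple way to automate the same reading (a sketch, not the instrument's actual algorithm) is to flag the load at which the COF trace first departs abruptly from its running mean:

```python
# Sketch: detect a critical load from scratch-test data as the first abrupt
# departure of the COF from its recent running mean. Threshold is arbitrary.
def critical_load(loads, cof, window=10, jump=0.15):
    for i in range(window, len(cof)):
        baseline = sum(cof[i - window:i]) / window
        if abs(cof[i] - baseline) > jump:   # abrupt COF change => failure event
            return loads[i]
    return None
```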

Figure 7. Micrographs of the failure zones of AISI 304/CrN coatings grown at 0.4 Pa (a) before and (b) after exceeding the coating thickness. Source: The authors.




Figure 8. Micrographs of the failure zones of AISI 304/CrN coatings grown at 4 Pa (a) before and (b) after exceeding the coating thickness. Source: The authors.

Figure 9. Micrographs of the failure zones of AISI 4140/CrN coatings grown at 0.4 Pa (a) before and (b) after exceeding the coating thickness. Source: The authors.

These values were obtained from the zone where the load became independent of the COF. These two critical loads Lc1 and Lc2 were defined for the failure of the coatings. Lc1, the first critical load, corresponds to the initial cohesive failure

Figure 10. Micrographs of the failure zones of AISI 4140/CrN coatings grown at 4 Pa (a) before and (b) after exceeding the coating thickness. Source: The authors.

of the coatings, characterized by the appearance of initial cracks or pores within the coatings. Lc2, the second critical load, corresponds to the adhesive failure of the coatings, i.e., the first observation of adhesive failure features such as chipping, partial delamination, pores or other such phenomena, through which the substrate beneath the coating becomes exposed [45,48]. The values of such loads are listed in Table 3. Better adherence behavior was observed for films grown at low pressure, especially those grown on AISI 304, because they were thicker (~890 nm) and exhibited higher volume displacement and thus improved surface adherence. During the dynamic scratching test, some effects on the coatings could be observed. Figs. 7 and 8 show that higher damage caused by delamination was sustained by the coatings deposited on AISI 304, while the CrN coatings grown on AISI 4140 underwent cracking (Figs. 9 and 10); thus, three zones can be identified in the scratch test process: (i) the plastic deformation zone, (ii) cracking at Lc1 and (iii) delamination and abrasive failure of the coating/substrate system (Lc2). The relationship between wear and adhesion has been studied. This correlation is poor when focusing on a small group of data individually. Because many factors will affect the test results, especially those of wear tests, such as deformation and temperature, variation in the data is expected and acceptable [49]. Because changes in the coatings or any other experimental factors will affect the results, the range of variation may be different in other cases. For instance, in our case, we did not have sufficiently high variation in the parameters to obtain a real relationship. However, some conclusions can be drawn.




To explain why the wear rate decreased with the critical adhesion load, the wear track images may be considered. Figs. 7 and 8 show the wear tracks of the films grown on AISI 304 at 0.4 and 4 Pa, respectively. When we compare Figs. 7 to 10, a few different failure modes of the wear tracks can be observed, such as spallation and some cracking. For instance, Fig. 7 (AISI 304/CrN grown at 0.4 Pa) shows an image of the coating that sustained the lowest degree of failure and thereby presented the highest adhesion loads. Conversely, Fig. 10 (AISI 4140/CrN grown at 4 Pa) shows an image of the coating that sustained the highest degree of failure and thereby presented the lowest adhesion loads. In all of the wear tracks, similar failure modes were detected; the only difference was that the AISI 4140/CrN coatings grown at 4 Pa exhibited more serious damage. Therefore, it can be concluded that the failure modes due to wear were mainly caused by weak adhesion strength, which accelerated the wear of the films. The adhesion results indicate that the AISI 304/CrN coating grown at 0.4 Pa (Fig. 7) exhibited the best adhesion. In addition, failure due to weak adhesion increased the amount of wear debris generated (AISI 4140/CrN coatings grown at 4 Pa, Fig. 10), which increased the probability of adhesive wear between the coatings and the wear debris. On the other hand, the tribological behavior of the coatings was strongly dependent on the coating thickness, especially at relatively low loads; changes in the microstructure and mechanical behavior caused by variations in coating thickness strongly influence the tribological properties [50]. In our case, the coatings produced at a pressure of 0.4 Pa exhibited a higher thickness compared with those produced at 4 Pa. Furthermore, the coatings produced at the lower pressure presented better tribological behavior (lower COF and wear rate). According to the literature, if coatings can be produced with relatively high thickness, they can exhibit a uniform microstructure and a low residual stress state, and their tribological performance can be enhanced [50].

4. Conclusions

The mechanical and tribological properties of CrN coatings grown on steel substrates AISI 304 and AISI 4140 at 0.4 and 4 Pa using the magnetron sputtering technique were analyzed. XRD analysis shows peaks corresponding to the (111), (220), (200), (311) and (222) planes of the FCC phase. Thinner films were obtained when the coatings were grown at a pressure of 4 Pa; however, no strong influence of the substrate material on the coating thickness was observed. Conversely, at the higher pressure, the roughness increased due to the low mobility of the surface adatoms. Moreover, the coatings deposited at 4 Pa exhibited lower hardness (21 GPa) than the coatings grown at 0.4 Pa. Regarding the wear behavior, both substrates, AISI 304 and AISI 4140, showed high COFs. Moreover, micro-welding was observed during ball-on-disk testing. The CrN/304 and CrN/4140 coatings grown at 4.0 Pa showed a fast increase in the COF. Conversely, the coatings grown at 0.4 Pa on both substrates presented a stable COF on the order of 0.646. The steel substrates presented abrasive damage and micro-welding,

while the coatings deposited at 4 Pa showed tribological damage with surface fatigue. For the CrN coating synthesized at 0.4 Pa, the surface damage was lower due to the higher densification of the film. Better adherence behavior was observed for films grown at low pressure, especially those grown on AISI 304, because they were thicker (~890 nm), exhibited higher volume displacement and had improved surface adherence.

Acknowledgments

The authors gratefully acknowledge the financial support of the Dirección Nacional de Investigaciones of the Universidad Nacional de Colombia during the course of this research, under project 10709 "Implementación de técnicas de Modelamiento, Procesamiento Digital y simulación para el estudio de sistemas físicos".

References

[1] Aperador-Chaparro, W., Ramírez-Martín, C. and Vera-López, E., Synergy between erosion-corrosion of steel AISI 4140 covered by a multilayer TiCN/TiNbCN, at an impact angle of 90°. DYNA, 80 (178), pp. 101-108, 2013.
[2] Holubar, P., Jilek, M. and Sima, M., Present and possible future applications of superhard nanocomposite coatings. Surface and Coatings Technology, 133 (1), pp. 145-151, 2000. DOI: 10.1016/S0257-8972(00)00956-7
[3] Chen, X., Kirsch, B.L., Senter, R., Tolbert, S.H. and Gupta, V., Tensile testing of thin films supported on compliant substrates. Mechanics of Materials, 41, pp. 839-848, 2009. DOI: 10.1016/j.mechmat.2009.02.003
[4] Escobar-Galindo, R., van Veen, A., Schut, H., Janssen, G.C.A.M., Hoy, R. and de Hosson, J.Th.M., Adhesion behaviour of CrNx coatings on pre-treated metal substrates studied in situ by PBA and ESEM after annealing. Surface and Coatings Technology, 199 (1), pp. 57-65, 2005. DOI: 10.1016/j.surfcoat.2005.04.018
[5] Choi, E.Y., Kang, M.Ch., Kwon, D.H., Shin, D.W. and Kim, K.H., Comparative studies on microstructure and mechanical properties of CrN, Cr-C-N and Cr-Mo-N coatings. Journal of Materials Processing Technology, 187, pp. 566-570, 2007. DOI: 10.1016/j.jmatprotec.2006.11.090
[6] Ürgen, M. and Cakir, A.F., The effect of heating on corrosion behavior of TiN- and CrN-coated steels. Surface and Coatings Technology, 96 (2-3), pp. 236-244, 1997. DOI: 10.1016/S0257-8972(97)00123-0
[7] Lai, F.D. and Wu, J.K., High temperature and corrosion properties of cathodic-arc-plasma-deposited CrN coatings. Surface and Coatings Technology, 64 (1), pp. 53-57, 1994.
[8] Dobrzański, L.A. and Lukaszkowicz, K., Erosion resistance and tribological properties of coatings deposited by reactive magnetron sputtering method onto the brass substrate. Journal of Materials Processing Technology, 157, pp. 317-323, 2004. DOI: 10.1016/j.jmatprotec.2004.09.050
[9] Gahlin, R., Bromark, M., Hedenqvist, P., Hogmark, S. and Hakansson, G., Properties of TiN and CrN coatings deposited at low temperature using reactive arc-evaporation. Surface and Coatings Technology, 76, pp. 174-180, 1995. DOI: 10.1016/0257-8972(95)02597-9
[10] Friedrich, C., Berg, G., Broszeit, E., Rick, F. and Holland, J., PVD CrxN coatings for tribological application on piston rings. Surface and Coatings Technology, 97, pp. 661-668, 1997. DOI: 10.1016/S0257-8972(97)00335-6
[11] Aouadi, S.M., Schultze, D.M., Rohde, S.L., Wong, K.C. and Mitchell, K.A.R., Growth and characterization of Cr2N/CrN multilayer coatings. Surface and Coatings Technology, 140, pp. 269-277, 2001. DOI: 10.1016/S0257-8972(01)01121-5
[12] Warcholinski, B. and Gilewicz, A., Tribological properties of CrNx coatings. Journal of Achievements in Materials and Manufacturing Engineering, 37, pp. 498-504, 2009.
[13] Huang, Z.P., Sun, Y. and Bell, T., Friction behaviour of TiN, CrN and (TiAl)N coatings. Wear, 173, pp. 13-20, 1994. DOI: 10.1016/0043-1648(94)90252-6
[14] Chen, J.-S. and Duh, J.-G., Indentation behavior and Young's modulus evaluation in electroless Ni modified CrN coating on mild steel. Surface and Coatings Technology, 139, pp. 6-13, 2001. DOI: 10.1016/S0257-8972(01)00987-2
[15] Warcholinski, B., Gilewicz, A. and Ratajski, J., Cr2N/CrN multilayer coatings for wood machining tools. Tribology International, 44, pp. 1076-1082, 2011. DOI: 10.1016/j.triboint.2011.05.004
[16] Ibrahim, M.A.M., Korablov, S.F. and Yoshima, M., Corrosion of stainless steel coated with TiN, (TiAl)N and CrN in aqueous environments. Corrosion Science, 44, pp. 815-828, 2002. DOI: 10.1016/S0010-938X(01)00102-0
[17] Jehn, H.A., Improvement of the corrosion resistance of PVD hard coating-substrate systems. Surface and Coatings Technology, 125, pp. 212-217, 2000. DOI: 10.1016/S0257-8972(99)00551-4
[18] Conde, A., Navas, C., Cristóbal, A.B., Housden, J. and de Damborenea, J., Characterisation of corrosion and wear behaviour of nanoscaled e-beam PVD CrN coatings. Surface and Coatings Technology, 201, pp. 2690-2695, 2006. DOI: 10.1016/j.surfcoat.2006.05.013
[19] Cunha, L. and Andritschky, M., Residual stress, surface defects and corrosion resistance of CrN hard coatings. Surface and Coatings Technology, 111, pp. 158-162, 1999. DOI: 10.1016/S0257-8972(98)00731-2
[20] Deepthi, B., Barshilia, H.C., Rajam, K.S., Konchady, M.S., Pai, D.M. and Sankar, J., Structural, mechanical and tribological investigations of sputter deposited CrN-WS2 nanocomposite solid lubricant coatings. Tribology International, 44, pp. 1844-1851, 2011. DOI: 10.1016/j.triboint.2011.07.007
[21] Nalbant, M. and Yildiz, Y., Effect of cryogenic cooling in milling process of AISI 304 stainless steel. Transactions of Nonferrous Metals Society of China, 21, pp. 72-79, 2011. DOI: 10.1016/S1003-6326(11)60680-8
[22] Huyett, G.L., Engineering handbook: Technical information. Industrial Press Inc., New York, 2000.
[23] Oliver, W.C. and Pharr, G.M., An improved technique for determining hardness and elastic modulus using load and displacement sensing indentation experiments. Journal of Materials Research, 7, pp. 1564-1583, 1992. DOI: 10.1557/JMR.1992.1564
[24] Peng, Z., Gong, J. and Miao, H., On the description of indentation size effect in hardness testing for ceramics: Analysis of the nanoindentation data. Journal of the European Ceramic Society, 24, pp. 2193-2201, 2004. DOI: 10.1016/S0955-2219(03)00641-1
[25] Hones, P., Sanjines, R. and Lévy, F., Characterization of sputter-deposited chromium nitride thin films for hard coatings. Surface and Coatings Technology, 94, pp. 398-402, 1997. DOI: 10.1016/S0257-8972(97)00443-X
[26] Barata, A., Cunha, L. and Moura, C., Characterisation of chromium nitride films produced by PVD techniques. Thin Solid Films, 398, pp. 501-506, 2001. DOI: 10.1016/S0040-6090(01)01498-5
[27] Han, Z., Tian, J., Lai, Q., Yu, X. and Li, G., Effect of N2 partial pressure on the microstructure and mechanical properties of magnetron sputtered CrNx films. Surface and Coatings Technology, 162, pp. 189-193, 2003. DOI: 10.1016/S0257-8972(02)00667-9
[28] Xu, J., Umehara, H. and Kojima, I., Effect of deposition parameters on composition, structures, density and topography of CrN films deposited by r.f. magnetron sputtering. Applied Surface Science, 201, pp. 208-218, 2002.
[29] Petrov, I., Barna, P.B., Hultman, L. and Greene, J.E., Microstructural evolution during film growth. Journal of Vacuum Science and Technology A, 21 (5), pp. S117-S128, 2003. DOI: 10.1116/1.1601610
[30] Lou, J., Mechanical characterization of thin film materials, in Soboyejo, W. (Ed.), Advanced Structural Materials, pp. 35-63, 2006.
[31] Park, H.-S., Kappl, H., Lee, K.H., Lee, J.-J., Jehn, H.A. and Fenker, M., Structure modification of magnetron-sputtered CrN coatings by intermediate plasma etching steps. Surface and Coatings Technology, 133-134, pp. 176-180, 2000. DOI: 10.1016/S0257-8972(00)00960-9
[32] McGuire, G.E. and Rossnagel, S.M., Vacuum evaporation, in Handbook of Thin Film Technology, McGraw-Hill, pp. 1-26, 1970.
[33] Zhao, Y., Qian, Y., Yu, W. and Chen, Z., Surface roughness of alumina films deposited by reactive r.f. sputtering. Thin Solid Films, 286, pp. 45-48, 1996.
[34] Meza, J.M., Chávez, C. and Vélez, J.M., Indentation techniques: Mechanical properties measurement of ceramics. DYNA, 73 (149), pp. 81-93, 2006.
[35] Faghihi, D. and Voyiadjis, G.Z., Determination of nanoindentation size effects and variable material intrinsic length scale for body-centered cubic metals. Mechanics of Materials, 44, pp. 189-211, 2012. DOI: 10.1016/j.mechmat.2011.07.002
[36] Doerner, M.F. and Nix, W.D., A method for interpreting the data from depth-sensing indentation instruments. Journal of Materials Research, 1 (4), pp. 601-609, 1986. DOI: 10.1557/JMR.1986.0601
[37] Bolshakov, A. and Pharr, G.M., Influences of pile-up on the measurement of mechanical properties by load and depth sensing indentation techniques. Journal of Materials Research, 13 (4), pp. 1049-1058, 1998. DOI: 10.1557/JMR.1998.0146
[38] Korsunsky, A.M., McGurk, M.R., Bull, S.J. and Page, T., On the hardness of coated systems. Surface and Coatings Technology, 99 (1-2), pp. 171-183, 1998. DOI: 10.1016/S0257-8972(97)00522-7
[39] Li, G., Chen, J. and Guan, D., Friction and wear behaviors of nanocrystalline surface layer of medium carbon steel. Tribology International, 43, pp. 2216-2221, 2010.
[40] Harvey, E., Ladani, L. and Weaver, M., Complete mechanical characterization of nanocrystalline Al-Mg alloy using nanoindentation. Mechanics of Materials, 52, pp. 1-11, 2012. DOI: 10.1016/j.mechmat.2012.04.005
[41] Ward, L.P. and Datta, P.K., Scratch test adhesion and hardness evaluation of sputtered Nb coatings as a function of the argon gas pressure. Thin Solid Films, 271, pp. 101-107, 1995. DOI: 10.1016/0040-6090(96)80086-1
[42] Scheerer, H., Slomski, E.M., Trossmann, T. and Berger, C., Characterization of CrN coatings concerning the potential to cover surface imperfections. Surface and Coatings Technology, 205, pp. S47-S50, 2010. DOI: 10.1016/j.surfcoat.2010.03.051
[43] Thompson, C.V., Structure evolution during processing of polycrystalline films. Annual Review of Materials Science, 30, pp. 159-190, 2000. DOI: 10.1146/annurev.matsci.30.1.159
[44] Bressan, J.D., Daros, D.P., Sokolowski, A., Mesquita, R.A. and Barbosa, C.A., Influence of hardness on the wear resistance of 17-4 PH stainless steel evaluated by the pin-on-disc testing. Journal of Materials Processing Technology, 205, pp. 353-359, 2008. DOI: 10.1016/j.jmatprotec.2007.11.251
[45] Singh, K., Krishnamurthy, N. and Suri, A.K., Adhesion and wear studies of magnetron sputtered NbN films. Tribology International, 50, pp. 16-25, 2012. DOI: 10.1016/j.triboint.2011.12.023
[46] Cai, F., Huang, X., Yang, Q., Wei, R. and Nagy, D., Microstructure and tribological properties of CrN and CrSiCN coatings. Surface and Coatings Technology, 205, pp. 182-188, 2010.
[47] Holmberg, K. and Matthews, A., Coatings tribology: A concept, critical aspects and future directions. Thin Solid Films, 253, pp. 173-178, 1994. DOI: 10.1016/0040-6090(94)90315-8
[48] Good, R.J., Theory of cohesive vs adhesive separation in an adhering system. Journal of Adhesion, 4 (2), pp. 133-154, 1972. DOI: 10.1080/00218467208072218
[49] Lau, K.H. and Li, K.Y., Correlation between adhesion and wear behaviour of commercial carbon based coating. Tribology International, 39 (2), pp. 115-123, 2006. DOI: 10.1016/j.triboint.2005.04.008
[50] Wei, R., Langa, E., Rincon, C. and Arps, J.H., Deposition of thick nitrides and carbonitrides for sand erosion protection. Surface and Coatings Technology, 201 (7), pp. 4453-4459, 2006. DOI: 10.1016/j.surfcoat.2006.08.091



A. Ruden-Muñoz, received his BSc. Eng. in Physical Engineering in 2005 and his MSc. in Physics in 2007, both from the Universidad Nacional de Colombia, and his PhD in Materials Engineering in 2009 from the Universidad del Valle, Colombia. His research interests include the production and characterization of materials by plasma-assisted techniques for technological applications.

E. Restrepo-Parra, received her BSc. Eng. in Electrical Engineering in 1990 from the Universidad Tecnológica de Pereira, Colombia, her MSc. in Physics in 2000, and her PhD in Engineering - Automatics in 2009, the last two from the Universidad Nacional de Colombia, Manizales, Colombia. From 1991 to 1995 she worked in the Colombian electrical sector, and since 1996 for the Universidad Nacional de Colombia. Currently, she is a senior professor in the Physics and Chemistry Department, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Colombia - Sede Manizales, Colombia. Her research interests include the simulation and modeling of materials properties by several methods, materials processing by plasma-assisted techniques, and materials characterization. She is currently director of the Laboratories at Sede Manizales, Universidad Nacional de Colombia.

F. Sequeda, received his BSc. in Physics in 1970 from the Universidad Industrial de Santander, Colombia, his MSc. in Metallurgical Engineering in 1972 from the University of Missouri, and his PhD in Materials Science from the University of Illinois, USA. Currently, he is a full professor in the Physics Department, Facultad de Ciencias, Universidad del Valle, Cali, Colombia. His research interests include materials processing by plasma-assisted techniques and materials characterization.




Weibull accelerated life testing analysis with several variables using multiple linear regression

Manuel R. Piña-Monarrez a, Carlos A. Ávila-Chávez b & Carlos D. Márquez-Luévano c

a Industrial and Manufacturing Department of the IIT Institute, Universidad Autónoma de Ciudad Juárez, Chihuahua, México. manuel.pina@uacj.mx
b Industrial and Manufacturing Department of the IIT Institute, Universidad Autónoma de Ciudad Juárez, Chihuahua, México. carlos.avila@uacj.mx
c Reliability Engineering Department at Stoneridge Electronics North America. carlos.marquez@stoneridge.com

Received: May 16th, 2014. Received in revised form: February 24th, 2015. Accepted: March 4th, 2015.

Abstract
In Weibull accelerated life test analysis (ALT) with two or more variables X1, X2, ..., Xn, we estimate, in joint form, the parameters of the life-stress model L(X) and one shape parameter β. These are then used to extrapolate the conclusions to the operational level. However, these conclusions are biased because, in the experiment design (DOE) used, each combination of the variables presents its own Weibull family (βi, ηi). Thus the estimated β is not representative. On the other hand, since β is determined by the variance of the logarithm of the lifetime data σ²lnt, the response variance σ²y and the correlation coefficient R², which increases when variables are added to the analysis, β is always overestimated. In this paper, the problem is statistically addressed and, based on the Weibull families (βi, ηi), a vector η is estimated and used to determine the parameters of L(X). Finally, based on the variance of each level, the variance of the operational level is estimated and used to determine the operational shape parameter β. The efficiency of the proposed method is shown by numerical applications and by comparing its results with those of the maximum likelihood method (ML).
Keywords: ALT analysis; Weibull analysis; multiple linear regression; experiment design.

Análisis de pruebas de vida acelerada Weibull con varias variables utilizando regresión lineal múltiple

Resumen
En el análisis de pruebas de vida acelerada Weibull con dos o más variables aceleradas X1, X2, ..., Xn, estimamos en forma conjunta los parámetros del modelo de relación vida-esfuerzo L(X) y un parámetro de forma β. Después estos parámetros son utilizados para extrapolar las conclusiones al nivel operacional. Como sea, estas conclusiones están sesgadas debido a que, dentro del diseño de experimentos (DOE) utilizado, cada combinación de las variables presenta su propia familia Weibull (βi, ηi). De esa forma la β estimada no es representativa. Por otro lado, dado que β está determinada por la varianza del logaritmo de los tiempos de vida σ²lnt, por la varianza de la respuesta σ²y y por el coeficiente de correlación R², el cual crece cuando se agregan variables al análisis, β es siempre sobreestimada. En este artículo, el problema es estadísticamente identificado y, basado en las familias Weibull (βi, ηi), un vector η es estimado y utilizado para determinar los parámetros de L(X). Finalmente, basado en la varianza de cada nivel, la varianza del nivel operacional es estimada y utilizada para determinar el parámetro de forma del nivel operacional. La eficiencia del método propuesto es mostrada a través de aplicaciones numéricas y por la comparación de sus resultados con los del método de máxima verosimilitud (ML).
Palabras clave: análisis ALT; análisis Weibull; regresión lineal múltiple; diseño de experimentos.

1. Introduction

In Accelerated Life Testing analysis (ALT) with constant over time and interval-valued variables X1, X2, ..., Xn, the standard approach of the analysis consists in using higher levels of the stress variables and a life-stress model L(X), so that the lifetime data are obtained as quickly as possible [6]. In this approach, the function L(X), which relates the lifetime

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 156-162. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.43533



data to the stress variables, is parametrized as

$\ln L(X) = \sum_{j=0}^{k} \gamma_j Z_j(X)$    (1)

where γ = (γ0, γ1, ..., γk) is a vector of unknown parameters and Z = (Z0, Z1, ..., Zk) is a vector of specified functions of X1, X2, ..., Xn, with Z0 ≡ 1. Among the most common models of L(X) we have the generalized Eyring model, the temperature-humidity (T-H) model, the temperature-non-thermal model, the proportional hazard model and the generalized log-linear model [9] and [13]. On the other hand, in ALT Weibull analysis, no matter which model we use, all of them are used to estimate the scale parameter η under different levels of the significant variables. For example, if the (T-H) model is used, then in the Weibull probability density function (pdf) ([19] and [16] Chapter 1), given by

$f(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}\exp\left[-\left(\frac{t}{\eta}\right)^{\beta}\right]$    (2)

by replacing η with the (T-H) model, in its usual parametrization η(V, U) = A exp(φ/V + b/U), the Weibull/(T-H) pdf is given by

$f(t \mid V, U) = \frac{\beta}{A e^{\phi/V + b/U}}\left(\frac{t}{A e^{\phi/V + b/U}}\right)^{\beta-1}\exp\left[-\left(\frac{t}{A e^{\phi/V + b/U}}\right)^{\beta}\right]$    (3)
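A brief sketch of the T-H model in this parametrization, with V in kelvin and U in percent relative humidity, may clarify how the acceleration factor between a stressed level and the operational level arises; the parameter values below are illustrative assumptions, not fitted results:

```python
import math

# T-H life-stress model eta(V, U) = A*exp(phi/V + b/U); A, phi, b are assumed.
def eta_th(V, U, A=1e-3, phi=4000.0, b=300.0):
    return A * math.exp(phi / V + b / U)

eta_use = eta_th(V=298.0, U=50.0)      # operational level
eta_stress = eta_th(V=358.0, U=90.0)   # accelerated level
print(f"AF = {eta_use / eta_stress:.0f}")   # acceleration factor ~ 136 here
```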

Unfortunately, since in (3) only one shape parameter β is estimated and used to represent all the level combinations of the variables, and because the maximum likelihood (ML) or multiple linear regression (MLR) methods perform the estimation as a linear combination of Z with the coefficients of the vector γ, β is always overestimated. As a consequence, the related reliability is overestimated too. In order to show this problem, in section 2 the generalities of ALT analysis are given. Section 3 presents the problem statement. In section 4 the problem is statistically addressed. Section 5 details the proposed method and, finally, in section 6 the conclusions are presented.

2. Generalities of Weibull ALT analysis

In ALT analysis, the objective is to obtain life data as quickly as possible. Data are obtained by observing a set of n units functioning under various levels of the explanatory variables $X = (X_1, X_2, \ldots, X_k)$. These levels are chosen to be higher than the normal one. With these life data, we draw conclusions and then the conclusions are extrapolated to the normal level. Models used to perform the extrapolation are known as life-stress models $\eta(Z)$. Among the most common models we have the parametric accelerated failure time models (AFT) (e.g. the Arrhenius model) [8], and the proportional hazard model (PH) (e.g. the Weibull proportional hazard model) [5] and [1]. On the other hand, in the analysis, the lifetime data T is a nonnegative and absolutely continuous random variable. Thus, the survival function is $S(t) = P(T > t)$, $t \geq 0$; the corresponding probability density function is $f(t) = -dS(t)/dt$, and the hazard rate function is $\lambda(t) = f(t)/S(t)$. Additionally, it is important to note that in ALT, the effect that the covariates $X = (X_1, X_2, \ldots, X_k)$ have over T, which as in (3) is included in $\eta$, is modeled by $\eta(Z)$, whose linear form is

$$\ln\eta(Z) = \gamma_0 + \gamma_1 Z_1 + \cdots + \gamma_k Z_k \qquad (4)$$

On the other hand, for constant over time and interval-valued variables, the cumulative risk function is $\Lambda(t) = -\ln S(t)$, with $\eta$ parametrized as in (1), where $\gamma$ and Z are as they were defined in (1), and $\exp(\gamma_0)$ represents the base risk when all the covariates are zero $(X_1 = \cdots = X_k = 0)$. For example, based on this formulation and on the physical principle of Sedyakin ([1], pp. 20), the survival function for two different levels $x_1$, $x_2$ of the variables is related by $S(t \mid x_2) = S(\rho t \mid x_1)$, which in terms of $\eta$ means that $T = \eta\,\varepsilon$; and since the survival function of $\varepsilon$ does not depend on $\eta$, the random variable $\varepsilon$ does not depend on X either. In particular observe that, since the expected value of T is $E(T) = \eta\,\Gamma(1+1/\beta)$ and its variance is $Var(T) = \eta^2[\Gamma(1+2/\beta) - \Gamma^2(1+1/\beta)]$, its variation coefficient $CV = \sqrt{Var(T)}/E(T)$ does not depend on $\eta$ either. Thus, for any two stress levels (or variable combinations), the standardized distribution is the same, implying that only the scale changes (see [1] sec. 2.3). Observe that the fact that only the scale changes in the Weibull analysis implies that $\hat\beta_1 = \hat\beta_2 = \hat\beta$.

On the other hand, R(t) under any two levels $x_1$, $x_2$ is related by $R(t \mid x_2) = R(\rho t \mid x_1)$, where $\rho = \eta(x_1)/\eta(x_2)$, and it is called the acceleration factor. Finally, it is important to note that by setting $\ln T = \ln\eta + (1/\beta)\ln(-\ln R(t))$, and because $\beta$ and $\ln(-\ln R(t))$ do not depend on $\eta$, the variance of the logarithm of the lifetime data

$$\hat\sigma^2_{\ln t} = \frac{\sum_{i=1}^{n}\left(\ln t_i - \overline{\ln t}\right)^2}{n-1} \qquad (5)$$

does not depend on $\eta$ either.
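The scale-only-change argument above can be checked numerically; the short sketch below (an illustration under the stated Weibull assumptions, using the shape value of the running example) shows that the coefficient of variation of T depends only on $\beta$.

```python
# The CV of T ~ Weibull(beta, eta) is sqrt(Gamma(1+2/b) - Gamma(1+1/b)^2) /
# Gamma(1+1/b): eta cancels, so changing the stress level only rescales T.
import math

def weibull_cv(beta):
    g1 = math.gamma(1 + 1 / beta)
    g2 = math.gamma(1 + 2 / beta)
    return math.sqrt(g2 - g1 ** 2) / g1

beta = 5.874
for eta in (100.0, 1000.0, 10000.0):       # three hypothetical stress levels
    mean = eta * math.gamma(1 + 1 / beta)  # E[T] scales with eta...
    print(f"eta={eta:8.1f}  E[T]={mean:10.2f}  CV={weibull_cv(beta):.4f}")
    # ...but the printed CV is identical for every level.
```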

Despite this, because in the estimation of the Weibull life-stress relationship as in (3) we estimate only one $\beta$ value as a linear combination of the variables $X = (X_1, X_2, \ldots, X_k)$, $\beta$ is always overestimated, as shown in the following section.

3. Problem statement

Statement 1: Because in ALT the failure time data is obtained by increasing the level of the variables $X = (X_1, X_2, \ldots, X_k)$, the variance of the logarithm of the lifetimes defined in (5) is diminished (the time to failure is shortened) and, as a consequence, $\beta$ is overestimated.

Statement 2: In multivariate ALT analysis, when significant variables are added into $X = (X_1, X_2, \ldots, X_k)$, the correlation between the logarithm of the scale parameter and $(X_1, X_2, \ldots, X_k)$ tends to one; thus the corresponding


sum of squares of the error is diminished, increasing $R^2$, and, as a consequence, $\beta$ is overestimated.

Considering these statements, firstly note that $\beta$ is intrinsically related to the strength characteristics of the product; thus, if the levels of the significant variables $X = (X_1, X_2, \ldots, X_k)$ are selected in such a way that, for their highest-effect combination, they do not generate a spurious failure mode (the effect of the level combinations is lower than the effect that reaches the destructive limits), $\beta$ must be constant, and its value must represent the variance of the strength characteristic, which in the estimation process with constant over time and interval-valued variables is not used (or measured). And second, we can observe that in the estimation process the shape parameter is determined by the variance of the logarithm of the lifetime data, the response variance and the correlation coefficient $R$, and that none of them represents $\beta$. Thus, due to this dependence, adding significant variables (which increases $R^2$) and/or overstressing their levels (which diminishes the variance of the log-lifetimes) always overestimates $\beta$, as shown in the following section.

4. Statistical analysis

Let us first show that $\beta$ is completely determined by $S_{XX}$, $S_{YY}$ and $R$. To see this, let us use the Weibull reliability function given by

$$R(t) = \exp\left[-\left(\frac{t}{\eta}\right)^{\beta}\right] \qquad (6)$$

which in linear form is given by

$$Y_i = \beta X_i - \beta\ln\eta \qquad (7a)$$

where $Y_i = \ln[-\ln(1-F(t_i))]$, $X_i = \ln t_i$, and $F(t)$ is the cumulative failure function of t given by $F(t) = 1 - R(t)$. Based on the median rank approach, $F(t_i)$ is estimated as [14]

$$F(t_i) \approx \frac{i - 0.3}{n + 0.4} \qquad (7b)$$

For another possible approximation of F(t), see [3], [4] and [21]. On the other hand, observe from (7a) that $\beta$ is a critical parameter (see [10] and [16] sec. 2.3) and thus, the analysis depends on the accuracy by which it is estimated. Also, observe that (7b) is a function of the sample size n and that for $n > 6$ the highest F(t) percentile is greater than 90% [2]. Regardless of this, note that $\eta$ represents the 0.367879 reliability percentile, which corresponds to $\ln[-\ln(\exp(-1))] = 0$, implying from (7a) that for the centered response Y

$$\ln\hat\eta = \overline{X} = \frac{1}{n}\sum_{i=1}^{n}\ln t_i \qquad (8)$$

Thus, under multiple linear regression the coefficients of (7a) are estimated as

$$\ln\hat\eta = \overline{X} - \overline{Y}/\hat\beta \qquad (9a)$$

$$\hat\beta = \frac{S_{XY}}{S_{XX}} = \frac{\sum_{i=1}^{n}(X_i-\overline{X})(Y_i-\overline{Y})}{\sum_{i=1}^{n}\left(\ln t_i - \overline{\ln t}\right)^2} \qquad (9b)$$

with the variance of the log-lifetimes of a level given by

$$\hat\sigma^2 = \frac{S_{XX}}{n-1} \qquad (9c)$$

From (9b) observe that its denominator is the variance of the logarithm of the lifetime data defined in (5), which in terms of the covariates is given by

$$S_{XX} = \sum_{i=1}^{n}\left[\sum_{j=1}^{k}\gamma_j\left(Z_{ij}-\overline{Z}_j\right)\right]^2 \qquad (10)$$

On the other hand, the goodness of fit of the polynomial given in (7a) is assessed by the ANOVA analysis, whose sources of variation are

$$S_{YY} = \sum_{i=1}^{n}\left(Y_i - \overline{Y}\right)^2 \qquad (11)$$

$$SSR = \hat\beta^2 S_{XX}, \qquad SSE = S_{YY} - SSR \qquad (12)$$

The goodness of fit index is given by

$$R^2 = \frac{SSR}{S_{YY}} \qquad (13)$$

Finally, from (9c), (10), (11) and (13), $\beta$ is given by

$$\hat\beta = R\,\sqrt{\frac{S_{YY}}{S_{XX}}} \qquad (14)$$
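The identity in (14) can be verified numerically. The sketch below (ours; it assumes Benard's approximation for the median ranks in (7b)) computes $\hat\beta$ both as $S_{XY}/S_{XX}$ and as $R\sqrt{S_{YY}/S_{XX}}$ for sub-set 1 of Table 1, recovering the $\beta \approx 4.84$ reported later in Table 3.

```python
# Median-rank regression of eqs. (7a)-(9b): both expressions of beta_hat
# coincide, which is exactly eq. (14).
import math

def mrr_beta(times):
    n = len(times)
    x = [math.log(t) for t in sorted(times)]                  # X_i = ln t_i
    y = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4)))       # Benard ranks
         for i in range(1, n + 1)]
    xb, yb = sum(x) / n, sum(y) / n
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    r = sxy / math.sqrt(sxx * syy)
    return sxy / sxx, r * math.sqrt(syy / sxx)                # eq. (9b), eq. (14)

print(mrr_beta([190, 208, 230, 298]))   # sub-set 1 of Table 1 -> (~4.84, ~4.84)
```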

On the other hand, to see that increasing (or decreasing) the stress levels affects $\beta$ as in Statement 1, note from (8) that increasing the stress shortens the times at which the failures occur and decreases $\sum(\ln t_i - \overline{\ln t})^2$ in (9b). That is to say, shortening the times at which the lifetimes occur decreases their variance and thus, according to (14), $\beta$ is overestimated. In the case of Statement 2, from (11) to (14) it is clear that, although the levels of the variables are not stressed, by adding significant variables $R^2$ is increased while $S_{YY}$ and $S_{XX}$ are fixed as in (5); then, as a consequence, in (14) $\beta$ is always overestimated.

5. Proposed Method

To see numerically that $\beta$ is overestimated as in Section 4, first note that each combination of the variables, as in Fig. 1, presents its own Weibull family, and that data are gathered by using a replicated design of experiments (DOE) as presented in Fig. 2 (see [15] and [20] Chapter 13).



[Figure 1 shows the n level combinations (1, 2, 3, ..., n) of the stress variables, each with its own Weibull family W(β1, η1), W(β2, η2), W(β3, η3), ..., W(βi, ηi), ..., W(βn, ηn).]

Figure 1: Orthogonal array levels. Source: The authors

Inner array (L9(3^4) design):
Run number   x1  x2  x3  x4
1            1   1   1   1
2            1   2   2   2
3            1   3   3   3
4            2   1   2   3
5            2   2   3   1
6            2   3   1   2
7            3   1   3   2
8            3   2   1   3
9            3   3   2   1
Outer array: Z1 = (1, 1, 2, 2), Z2 = (1, 2, 1, 2) (lifetime data).

Figure 2: Orthogonal array levels, L9(3^4). Source: The authors

Second, to illustrate this, let us use the DOE data from Table 1, which correspond to twelve electronic devices. Data were published by [17] p. 11. From these data, the Weibull/(T-H) parameters defined in (3), estimated by ML, are $\beta$ = 5.874, A = 0.0000597, b = 0.281 and $\phi$ = 5630.330 (the ALTA PRO software was used). In addition, observe that although in Table 1 there are three level combinations among the variables, which consequently lead to three Weibull families in this DOE, in the standard approach [eq. (3)] only one shape parameter $\beta$ = 5.874 was estimated. Thus, it is not representative of the whole set of data. To see this, in Table 2 the scale and shape parameters ($\eta$, $\beta$) estimated by ML, and their associated reliability for t = 150, are given.

Table 1. Weibull (T-H) data.
Sub-set   Time   Temperature   Humidity
1         190    378           0.8
          208    378           0.8
          230    378           0.8
          298    378           0.8
2         310    378           0.4
          316    378           0.4
          329    378           0.4
          411    378           0.4
3         108    398           0.4
          123    398           0.4
          166    398           0.4
          200    398           0.4
Source: Vassiliou and Metas, 2003.

Table 2. Weibull parameters and reliability; ML approach.
Sub-set   Time   Temp   Hum   Eta (η)   Beta (β)   R(t)
1         150    378    0.8   249.666   5.874      0.951
2         150    378    0.4   354.736   5.874      0.994
3         150    398    0.4   167.818   5.874      0.596
Source: The authors

In order to compare the standard results of Table 2 with those found in the DOE, Table 3 presents the Weibull family and R(t) for each DOE combination, using (8) and (9b) with the centered response (Y). By comparing these results, we observe that $\eta$ as estimated in Table 2, in contrast to $\eta$ as estimated in Table 3, does not represent the expected 0.367879 percentile defined in (8), and that $\beta$ = 5.874 does not represent the shape parameter of the levels found in the DOE. The proposed method to avoid this issue, using MLR, is given in the following section.

Table 3. Weibull parameters and reliability; MLR approach.
Sub-set   Time   Temp   Hum   Eta (η)    Beta (β)   R(t)
1         150    378    0.8   228.134    4.84061    0.877
2         150    378    0.4   339.251    6.42755    0.995
3         150    398    0.4   144.916    3.49029    0.324
Source: The authors

5.1. Regression approach for statement 1

In ALT with one interval-valued and constant over time variable, as is the case of Weibull/Arrhenius, Weibull/inverse power law and Weibull/Eyring, it is possible to estimate their parameters by applying (8), (9a) and (9b), following the next steps (a numerical sketch of the complete pipeline is given after the steps).

Step 1. For each replicated level of the stress variable (we must have at least 4 replicates, although 10 are recommended), determine the corresponding $\beta$ and $\eta$ parameters by using (7a), (8), (9a) and (9b). (In this one-variable approach, $\beta$ is generally constant.) If $\beta$ is not constant, proceed as in Section 5.2.

Step 2. Take as $X_i$ the effect of the corresponding linear transformation (see next section) of the time/stress model defined in (1) (e.g., in Arrhenius, 1/T), and take as $Y_i$ the logarithm of the scale parameter of the i-th level of the variable estimated in Step 1.

Step 3. Using (9a) and (9b), estimate by regression between the variables $X_i$ and $Y_i$ defined in Step 2 the parameters of the life/stress model $\eta(Z)$. Note: in the Eyring case, do not forget to subtract the logarithm of the reciprocal of the temperature, ln(1/T), from the logarithm of $\eta$ before you perform the regression.

Step 4. Using the regression parameters of $\eta(Z)$ estimated in Step 3, estimate the logarithm of $\eta$ for the operational (or desired) level (see next section). Finally, form the Weibull family of the operational (or desired) level W(β, η) with the shape parameter estimated in Step 1 and the scale parameter estimated in this step. With these Weibull parameters, determine the desired reliability indexes.
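As a sketch of Steps 1-4 (ours, under the assumption of Benard's median ranks for (7b)), the pipeline below runs the one-variable Weibull/Arrhenius case on the Table 4 data of the next subsection and recovers, up to rounding, the Table 5a fits and the W(4.092, 16,556.67) family of Step 4.

```python
import math

def mrr_fit(times):
    """Step 1: median-rank regression; returns (beta, eta) via (7a)-(9b)."""
    n = len(times)
    x = [math.log(t) for t in sorted(times)]
    y = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    xb, yb = sum(x) / n, sum(y) / n
    beta = sum((a - xb) * (b - yb) for a, b in zip(x, y)) / \
           sum((a - xb) ** 2 for a in x)
    return beta, math.exp(xb)            # eq. (8): ln(eta) = mean of ln(t)

data = {393.0: [3850, 4340, 4760, 5320, 5740, 6160, 6580, 7140, 7980, 8960],
        408.0: [3300, 3720, 4080, 4560, 4920, 5280, 5640, 6120, 6840, 7680],
        423.0: [2750, 3100, 3400, 3800, 4100, 4400, 4700, 5100, 5700, 6400]}

fits = {T: mrr_fit(ts) for T, ts in data.items()}        # step 1 (Table 5a)
X = [1.0 / T for T in fits]                              # step 2: effect 1/T
Y = [math.log(eta) for _, eta in fits.values()]          # step 2: ln(eta_i)

m = len(X)
xb, yb = sum(X) / m, sum(Y) / m
phi = sum((a - xb) * (b - yb) for a, b in zip(X, Y)) / \
      sum((a - xb) ** 2 for a in X)                      # step 3: slope of (15b)
ln_A = yb - phi * xb                                     # step 3: intercept

beta = sum(b for b, _ in fits.values()) / m              # common shape, ~4.092
eta_op = math.exp(ln_A + phi / 323.0)                    # step 4: 323 K level
print(beta, phi, eta_op)                                 # ~4.09, ~1862, ~16,560
```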


5.1.1. Let us exemplify the above methodology through the Weibull/Arrhenius and Weibull/Eyring relationships, which are parametrized as in (1). In the case of Arrhenius, the infinitesimal characteristic (see [18]) is given by $k(T) = c/T^2$; thus, the primitive (integral) $k_1$ of $k$ is given by $k_1(T) = -c(1/T)$. Since $k_1$ shows the form in which the variable affects the time, in the Arrhenius model the effect is 1/T (see Step 2 of Section 5.1). Thus, from (1) and (4), the Arrhenius model is given by (for details see [1], Chapter 5):

$$\eta(T) = A\exp(\phi/T) \qquad (15a)$$

In (15a), A and $\phi$ are the parameters to be estimated, and T is the absolute temperature (Kelvin). The linear form of (15a) is given by

$$\ln\eta = \ln A + \phi/T \qquad (15b)$$

Using (15a), the Weibull/Arrhenius pdf is given by

$$f(t\mid T) = \frac{\beta}{Ae^{\phi/T}}\left(\frac{t}{Ae^{\phi/T}}\right)^{\beta-1}\exp\left[-\left(\frac{t}{Ae^{\phi/T}}\right)^{\beta}\right] \qquad (16)$$

As a numerical application, consider the data in Table 4. Data were published by [17]. The Weibull parameters of Step 1 are given in Table 5a. The effect for Step 2, X = 1/T, and the response Y are given in Table 5b.

Table 4. Weibull/Arrhenius data.
Stress   393K   408K   423K
Time     3850   3300   2750
         4340   3720   3100
         4760   4080   3400
         5320   4560   3800
         5740   4920   4100
         6160   5280   4400
         6580   5640   4700
         7140   6120   5100
         7980   6840   5700
         8960   7680   6400
Source: Vassiliou and Metas, 2003.

Table 5a. Data step 1.
Stress   Eta        Beta
393      5890.059   4.092
408      5048.622   4.092
423      4207.185   4.092
Source: The authors

Table 5b. Data step 2.
b0   X          Y
1    0.002545   8.68102134
1    0.002451   8.52687066
1    0.002364   8.3445491
Source: The authors

The Weibull/Arrhenius parameters of Step 3, using Minitab® and the data of Table 5b, are $\phi$ = 1862.4 and A = exp(3.9483) = 51.86271, with $R^2$ = 99.5%. Finally, by using these parameters, the Weibull family mentioned in Step 4, for a level of 323 K, is W(4.092, 16,556.67).

5.1.2. In the case of the Weibull/Eyring relationship, the infinitesimal characteristic is given by $k(T) = 1/T + c/T^2$, with primitive $k_1$ of $k$ given by $k_1(T) = \ln(1/T) + c(1/T)$; this formulation with $k_1$ is used in the Eyring model when the temperature is used. The Eyring model is given by:

$$\eta(T) = \frac{1}{T}\exp\left(A + B/T\right) \qquad (17)$$

In (17), A and B are the parameters to be estimated and T is the absolute temperature. The linear relationship of (17) is

$$\ln\eta = \ln(1/T) + A + B/T \qquad (18a)$$

And the linear relationship used to estimate the parameters is given by

$$\ln\eta - \ln(1/T) = A + B/T \qquad (18b)$$

The Weibull/Eyring pdf using (17) is

$$f(t\mid T) = \beta\,Te^{-(A+B/T)}\left(t\,Te^{-(A+B/T)}\right)^{\beta-1}\exp\left[-\left(t\,Te^{-(A+B/T)}\right)^{\beta}\right] \qquad (19)$$

Table 6. Data for the Weibull/Eyring model.
ln (1/T)    ln (Eta)     ln (Eta) - ln (1/T)
-5.97381    8.68102      14.654831
-6.01127    8.52687      14.538138
-6.04737    8.34455      14.391921
Source: The authors

Using the data of Table 4, the Weibull/Eyring parameters of Step 3, using Minitab® with the data of Table 6, are B = 1454.2 and A = 10.9606, with $R^2$ = 99.3%. By using these parameters, the Weibull family mentioned in Step 4, for a level of 323 K, is W(4.092, 16,076.52). Finally, for the one-variable case, when the shape parameter is not constant for all the stress levels, proceed as in the multivariate case of the following section.
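The Eyring variant of Step 3 (including the note about subtracting ln(1/T)) can be reproduced in the same way; the sketch below (ours) regresses the Table 6 data and recovers, up to the rounding of the inputs, B = 1454.2, A = 10.9606 and the W(4.092, 16,076.52) family.

```python
# Eyring step 3: move ln(1/T) to the left-hand side as in (18b) and regress
# ln(eta) - ln(1/T) = A + B/T on 1/T.
import math

T = [393.0, 408.0, 423.0]
ln_eta = [8.68102, 8.52687, 8.34455]                    # Table 6

y = [le + math.log(t) for le, t in zip(ln_eta, T)]      # ln(eta) - ln(1/T)
x = [1.0 / t for t in T]

n = len(x)
xb, yb = sum(x) / n, sum(y) / n
B = sum((a - xb) * (b - yb) for a, b in zip(x, y)) / \
    sum((a - xb) ** 2 for a in x)
A = yb - B * xb

eta_op = math.exp(A + B / 323.0) / 323.0                # eq. (17) at 323 K
print(B, A, eta_op)                                     # ~1454, ~10.96, ~16,080
```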



5.2. Regression approach for statement 2

For the multivariate ALT analysis, as in Fig. 1, each covariate combination presents its own Weibull family. Thus, because in the standard ALT analysis the estimated $\beta$ value does not represent the whole set of data, in MLR we propose to estimate the Weibull life-stress parameters through the following steps.

Step 1. For each replicated combination level of the stress variables (we must have at least 4 replicates; 10 are recommended; see the comment below eq. (7b)), determine the corresponding Weibull family W($\beta_i$, $\eta_i$). This could be performed by ML, but MLR is recommended (ML is a biased estimator and n is small).

Step 2. Take as independent variables the effects of the corresponding linear transformations of the variables $(Z_1, \ldots, Z_k)$, and take as dependent variable $Y_{\eta,i}$ the logarithm of the scale parameter of the i-th Weibull family of Step 1.

Step 3. Estimate the parameters of the life/stress model $\eta(Z)$ by regression between the set of variables $(Z_1, \ldots, Z_k)$ and $Y_\eta$ defined in Step 2. If there are not enough degrees of freedom to perform the analysis, proceed as follows.

a) Estimate a vector $Y_\eta$ by reordering (7a) as

$$Y_{\eta,i} = \ln t_i - Y_i/\beta_i \qquad (20)$$

Table 7. Data for the (T-H) model.
Level   Time   Temp   Hum   b0   X=ln(t)   Y           W(β, η)
1       190    378    0.8   1    5.2470    -1.275132   β = 4.84061
        208    378    0.8   1    5.3375    -0.238955   η = 228.13413
        230    378    0.8   1    5.4381     0.427496   3*σ²L1 = 0.11343
        298    378    0.8   1    5.6971     1.086592
2       310    378    0.4   1    5.7366    -1.275132   β = 6.42755
        316    378    0.4   1    5.7557    -0.238955   η = 339.25138
        329    378    0.4   1    5.7961     0.427496   3*σ²L2 = 0.05092
        411    378    0.4   1    6.0186     1.086592
3       108    398    0.4   1    4.6821    -1.275132   β = 3.49029
        123    398    0.4   1    4.8122    -0.238955   η = 144.91613
        166    398    0.4   1    5.1120     0.427496   3*σ²L3 = 0.23558
        200    398    0.4   1    5.2983     1.086592
Source: The authors

Estimate the parameters of $\eta(Z)$ by performing a regression between $Y_\eta$ and $(Z_1, \ldots, Z_k)$. In (20), $Y_i$ is as in (7b), and $\beta_i$ and $\ln t_i$ are the shape parameter and the logarithm of the lifetime data of the i-th Weibull family of Step 1.

b) Based on (5) and on the fact that $\ln\hat\sigma^2 \sim N\left(\ln\sigma^2,\, 2/(n-1)\right)$, where $\hat\sigma^2$ is the sample variance of the logarithm of the lifetime data, form the logarithm vector $\ln\hat\sigma = (\ln\hat\sigma_1, \ldots, \ln\hat\sigma_k)$, where $\hat\sigma_i^2$ is the variance of the i-th level defined in (9c) and n is the number of replicates of the i-th level of Step 1.

c) Take the inverses of the effects of the covariates of Step 2 as the independent variables $(Z_1^{-1}, \ldots, Z_k^{-1})$ and $\ln\hat\sigma$ as the response variable, and perform a regression between $\ln\hat\sigma$ and $(Z_1^{-1}, \ldots, Z_k^{-1})$. Observe that $Y_\eta$ and $\ln\hat\sigma$ are vectors for the complete DOE data (or families).

Step 4. Using the regression parameters of $\eta(Z)$ estimated in Step 3-a), estimate the scale parameter for the operational level by applying (4). By using the regression parameters of Step 3-b), estimate the value of $\hat\sigma$ for the operational level, and by applying (14), with $S_{YY}$ of Step 1 and a desired $R^2$ index, estimate the corresponding $\beta$ value. W(β, η) are the parameters of the Weibull family of the desired stress level, and they could be used to determine any desired reliability index. Observe that the estimation of $\beta$ using (14) is robust (almost insensitive) to the selected $R^2$ index.

As a numerical application, consider the data in Table 7. Data were published in [17]. The data of Step 3 using (20) are given in Table 8. By using Minitab®, the parameters of the W(T-H) model, obtained by regression between $Y_\eta$ and Z, are A = exp(-11.894) = 6.831E-06, $\phi$ = 6398.3, and b = 0.31745, with $R^2$ = 96.4%. The parameters of the regression between (T, H) and $\ln\hat\sigma$ are $\gamma_0$ = -16.3645, $\gamma_1$ = 0.038294 and $\gamma_2$ = 1.001203, with $R^2$ = 100%. To show the method, suppose that the operational level is 358 K with a humidity of 0.2; then, by using the above parameters as in Step 4, $\eta$ = 1930.62, and by taking 3σ²L = 0.08587, $S_{YY}$ = 3.046497 and a desired $R^2$ = 95.0%, $\beta$ = 5.8054. Thus, the operational Weibull family is W(5.8054, 1930.62). (A numerical sketch reproducing these estimates is given after Table 8.)

Table 8. Data for step 3 of the (T-H) model.
Sub-set   b0   X1=1/T      X2=1/H   Yη
1         1    0.0026455   1.25     5.51045
          1    0.0026455   1.25     5.38690
          1    0.0026455   1.25     5.34976
          1    0.0026455   1.25     5.47262
2         1    0.0026455   2.5      5.93496
          1    0.0026455   2.5      5.79292
          1    0.0026455   2.5      5.72955
          1    0.0026455   2.5      5.84954
3         1    0.0025126   2.5      5.04747
          1    0.0025126   2.5      4.88065
          1    0.0025126   2.5      4.98951
          1    0.0025126   2.5      4.98700
Source: The authors
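The Step 3-a) regression of Table 8 can be reproduced with ordinary least squares; the sketch below (ours; it assumes NumPy is available) recovers ln A ≈ -11.9, φ ≈ 6398 and b ≈ 0.317, and extrapolates to the operational level of Step 4.

```python
import numpy as np

Z1 = [0.0026455] * 8 + [0.0025126] * 4        # X1 = 1/T, Table 8
Z2 = [1.25] * 4 + [2.5] * 8                   # X2 = 1/H, Table 8
Y = [5.51045, 5.38690, 5.34976, 5.47262,      # Y_eta of eq. (20), Table 8
     5.93496, 5.79292, 5.72955, 5.84954,
     5.04747, 4.88065, 4.98951, 4.98700]

X = np.column_stack([np.ones(len(Y)), Z1, Z2])
coef, *_ = np.linalg.lstsq(X, np.array(Y), rcond=None)
ln_A, phi, b = coef                            # ~ -11.9, ~6398, ~0.317

eta_op = np.exp(ln_A + phi / 358.0 + b / 0.2)  # step 4: operational level
print(ln_A, phi, b, eta_op)                    # eta_op ~ 1930, as in the text
```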

On the other hand, the ML parameters using the ALTA routine are A = 5.9701E-05, $\phi$ = 5630.3299 and b = 0.2806, with $\beta$ = 5.8744 and $\eta$ = 1642.4765, and with the operational Weibull family given by W(5.8744, 1642.48). A comparison of the Weibull parameters and reliability indexes of ML and the proposed method is given in Table 9. In Table 9, we can see that the ML shape parameter is not representative of the observed Weibull families, as it is in the proposed method. The same occurs with the estimated reliability. In particular, it is important to note that the proposed method is based on the observed variance, and thus it is directly related to the operational factors of the process.

6. Conclusions

In Weibull multivariate ALT analysis, each combination of the significant variables presents its own behavior; thus the standard approach of estimating only one shape parameter to represent all the Weibull families is suboptimal. Since $\beta$ depends on $R^2$, which increases when variables are added to the analysis, in the multivariate case $\beta$ is always overestimated. Clearly, since the change in the scale parameter is reflected in $\sum(\ln t_i - \overline{\ln t})^2$, the proposed method could easily be generalized to the right-censored case by reflecting the censored data on $\sum(\ln t_i - \overline{\ln t})^2$ and by substituting the number of failures for n in (9b). Although the proposed method depends greatly on the


accuracy with which $\hat\sigma$ is estimated, because $\ln\hat\sigma^2 \sim N\left(\ln\sigma^2,\, 2/(n-1)\right)$ stabilizes the variance, as defined in Step 3-b), the proposed method could be considered robust to this issue. It is important to mention that $\beta$ in (14) is not highly sensitive to the selected $R^2$ index. Knowing (14), it seems possible to generalize the proposed method to the ML approach by formulating a log-likelihood function based on the $\beta$ values of the Weibull families, but more research must be undertaken. Since the shape parameter is inversely related to $\hat\sigma$, and because $\hat\sigma$ is the standard deviation of the lognormal distribution, which presents a flexible behavior and an analysis similar to that of the Weibull distribution [11], it seems possible to extend the present method to the lognormal analysis. On the other hand, although the proposed method is practical and its application could easily be performed by using a standard software routine, as Minitab does, a more detailed method could be proposed by using a copula to model in joint form the behavior of the Weibull families; but because the Weibull distribution is determined by a non-homogeneous Poisson process [7] and its convolutions do not have a closed form [12], more research must be undertaken.

Table 9. Comparison between ML and the proposed method (PM).
        ML                     PM
Level   Eta         Beta      Eta          Beta     3*σ²L       σ²Y        R²         R(ti)(ML)   R(ti)(PM)
1       249.5454    5.8744    228.13413    4.8406   0.1134336   3.046497   0.872453   0.7615      0.5893
2       354.3875    5.8744    339.25138    6.4276   0.0509200   3.046497   0.690524   0.9936      0.9947
3       167.6529    5.8744    144.91613    3.4903   0.2355755   3.046497   0.942006   0.9531      0.7604
Op      1642.4765   5.8744    1930.61980   5.8054   0.0858736   3.046497   0.950000   0.5561      0.7937
(T-H) parameters (ML): A = 0.000059701, φ = 5630.3299, b = 0.28060.
Source: The authors

References

[1] Bagdonavičius, V. and Nikulin, M., Accelerated life models, modeling and statistical analysis, Florida: Chapman and Hall/CRC, 2002.
[2] Bertsche, B., Reliability in automotive and mechanical engineering, Berlin: Springer, 2008.
[3] Cook, N.J., Comments on plotting positions in extreme value analysis. J. Appl. Meteor., 50 (1), pp. 255-266, 2011. DOI:10.1175/2010JAMC2316.1
[4] Cook, N.J., Rebuttal of problems in the extreme value analysis. Structural Safety, 34 (1), pp. 418-423, 2012. DOI:10.1016/j.strusafe.2011.08.002
[5] Cox, D.R. and Oakes, D., Analysis of survival data, Florida: Chapman and Hall/CRC, 1984.
[6] Escobar, L.A. and Meeker, W.Q., A review of accelerated test models. Statistical Science, 21 (4), pp. 552-577, 2006. DOI:10.1214/088342306000000321
[7] Jun-Wu, Y., Guo-Liang, T. and Man-Lai, T., Predictive analysis for nonhomogeneous Poisson process with power law using Bayesian approach. Computational Statistics and Data Analysis, 51, pp. 4254-4268, 2007. DOI:10.1016/j.csda.2006.05.010
[8] Nelson, W.B., Applied life data analysis, New York: John Wiley & Sons, 1985.
[9] Nelson, W.B., Accelerated testing: statistical models, test plans and data analysis, New York: John Wiley & Sons, 2004.
[10] Nicholls, D. and Lein, P., Weibayes testing: What is the impact if assumed beta is incorrect? Reliability and Maintainability Symposium, RAMS Annual, 2009, pp. 37-42. DOI:10.1109/RAMS.2009.4914646
[11] Manotas, E., Yañez, S., Lopera, C. and Jaramillo, M., Estudio del efecto de la dependencia en la estimación de la confiabilidad de un sistema con dos modos de falla concurrentes. DYNA, 75 (154), pp. 29-38, 2007.
[12] McShane, B., Adrian, M., Bradlow, E. and Fader, P., Count models based on Weibull interarrival times. Journal of Business and Economic Statistics, 26 (3), pp. 369-378, 2008. DOI:10.1198/073500107000000278
[13] Meeker, W.Q. and Escobar, L.A., Statistical methods for reliability data. New York: John Wiley & Sons, 2014.
[14] Mischke, C.R., A distribution-independent plotting rule for ordered failures. Journal of Mechanical Design, 104 (3), pp. 593-597, 1979. DOI:10.1115/1.3256391
[15] Montgomery, D.C., Diseño y análisis de experimentos. México, D.F.: Limusa Wiley, 2004.
[16] Rinne, H., The Weibull distribution: a handbook. Florida: CRC Press, 2009.
[17] Vassiliou, P. and Metas, A., Application of quantitative accelerated life models on load sharing redundancy. Reliability and Maintainability, 2004 Annual Symposium - RAMS, pp. 293-296. DOI:10.1109/RAMS.2004.1285463
[18] Viertl, R., Statistical methods in accelerated life testing. Göttingen: Vandenhoeck & Ruprecht, 1988.
[19] Weibull, W., A statistical theory of the strength of materials. Stockholm: Generalstabens litografiska anstalts förlag, 1939.
[20] Yang, K. and El-Haik, B., Design for six sigma: a roadmap for product development. New York: McGraw-Hill, 2003.
[21] Yu, G.H. and Huang, C.C., A distribution free plotting position. Stochastic Environmental Research and Risk Assessment, 15 (6), pp. 462-476, 2001. DOI:10.1007/s004770100083

M.R. Piña-Monarrez, is a Researcher-Professor at the Autonomous University of Ciudad Juarez, Mexico. He completed his PhD degree in Science in Industrial Engineering in 2006 at the Technological Institute of Ciudad Juarez, Mexico. He has conducted research on system design methods including robust design, design of experiments, linear regression, reliability and multivariate process control. He is a member of the National Research System (SNI-1) of the National Council of Science and Technology (CONACYT) in Mexico.

C.A. Ávila-Chavez, is a PhD student on the Science in Engineering Doctoral Program (DOCI) at the Autonomous University of Ciudad Juarez, Mexico. He completed his MSc degree in Science in Industrial Engineering in 2011 at the Technological Institute of Ciudad Juarez, Mexico. His research is based on accelerated lifetime and Weibull analysis.

C.D. Márquez-Luevano, is a reliability engineer at Stoneridge Electronics North America, El Paso, Texas, USA. He completed his MSc degree in Industrial Engineering in 2013 at the Autonomous University of Ciudad Juarez, Mexico. His research studies on reliability focus on random vibration and Weibull analysis.



State of the art of ergonomic costs as a criterion for evaluating and improving organizational performance in industry

Silvana Duarte-dos Santos a, Antonio Renato Pereira-Moro b & Leonardo Ensslin c

a Federal University of Santa Catarina, Florianópolis, Brazil. Silvana.duarte@ufms.br
b Federal University of Santa Catarina, Florianópolis, Brazil. renato.moro@ufsc.br
c Federal University of Santa Catarina, Florianópolis, Brazil. sensslin@gmail.com

Received: May 28th, 2014. Received in revised form: February 24th, 2015. Accepted: March 9th, 2015

Abstract This study selects, in a structured manner, relevant articles with scientific recognition and simultaneously identifies these publications’ characteristics that may scientifically enrich the proposed topic. The topic involves ergonomic costs as a criterion for evaluating and improving organizational performance in industry. This study uses Proknow-C as a theoretical instrument for intervention. The following results are obtained: a) a bibliographic portfolio of 16 items, aligned with the view adopted by researchers who served as this research’s theoretical framework; b) the Applied Ergonomics journal shows the highest number of scientific articles in the bibliographic portfolio; and c) Ergonomics, Costs, and Evaluation are the most frequent keywords. The studies selected using the methodology indicate that successful ergonomic projects result in substantially reduced production costs and associated economic and financial gain for the industry. Keywords: Performance Evaluation; Costs; Ergonomics

Estado del arte de los costos ergonómicos como un criterio para la evaluación y mejora del desempeño organizacional en la industria

Resumen
Esta investigación tuvo como objetivo seleccionar, de una manera estructurada, artículos relevantes con reconocimiento científico y, al mismo tiempo, identificar las características de las publicaciones que puedan contribuir científicamente al enriquecimiento del tema propuesto. El tema objeto de estudio se dirige a los costos ergonómicos como criterio de evaluación y mejora del desempeño organizacional en la industria. El trabajo se caracteriza como exploratorio-descriptivo, de naturaleza teórico-ilustrativa, y tiene como instrumento teórico de intervención el Knowledge Development Process-Constructivist (ProKnow-C). Como resultado de este proceso se encontró: a) un portafolio de 16 artículos, alineados con la opinión adoptada por los investigadores, que sirvió como marco teórico básico de esta investigación; b) la revista Applied Ergonomics como aquella que tiene el mayor número de trabajos en el portafolio bibliográfico; y c) Ergonomics, Costs y Evaluation como las palabras clave más frecuentes. Los estudios seleccionados por la metodología indican que los proyectos de ergonomía exitosos resultaron en una reducción sustancial de los costos de producción, con la consecuente ganancia económica y financiera para la industria.
Palabras Clave: Evaluación del Desempeño; Costos; Ergonomía

1. Introduction One of the biggest mistakes of managers is to link ergonomics exclusively to health and safety of workers when it should also be added to the organization’s planning cycles to ensure good business performance. According to Dul and Neumann [1] and Falck et al. [2], ergonomics should be an

integral part of organizational strategy—combining employee health and safety objectives and economic goals, thus generating value for shareholders and employees. Working conditions in the industrial sector have motivated numerous studies on the impact of ergonomics on the financial performance of these organizations. Good ergonomic conditions are intrinsically linked to employee

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 163-170. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.43733



satisfaction, high productivity, and reduction of the costs of compensating accidents or occupational diseases. In the specialized literature, there is a consistent body of research that demonstrates the benefits of ergonomics for organizational performance [3-5]. The financial benefits resulting from the implementation of ergonomic programs are visible both in industrially developed countries and in those that are still industrializing, and they are most visible in the latter group [6]. Most of the literature recognizes that ergonomics should be included as a criterion for evaluating and improving organizational performance, considering factors such as the health and safety of workers and the increase or maintenance of productivity and quality levels.

The subject of ergonomic costs comprises comprehensive and complex lines of research, which highlights the difficulties of undertaking a study on this subject. There is an additional difficulty related to the amplitude and dispersion of knowledge across various publications, editors, and databases. In this respect, Tasca et al. [7] reported the difficulty that many researchers face in justifying the theoretical framework selected to support their research activities. This situation indicates the importance of using a structured method that provides a consistent theoretical framework. The researcher is thus faced with the following question: how can the required knowledge be built when beginning a search on the subject of ergonomic costs, so as to later allow the researcher to seek opportunities to contribute to this subject?

In response to this question, this paper's main objective is to guide the researcher in the search for opportunities (gaps) on the chosen topic. The specific objectives of this study are the selection of a relevant literature portfolio on the topic; the bibliometric analysis of the selected bibliographic portfolio and its references; and the identification of the important journals, articles, authors, and keywords. Thus, through a structured process known as ProKnow-C (Knowledge Development Process-Constructivist), this study selects relevant articles with scientific recognition and identifies aspects of these publications that contribute to the topic.

2. Performance Evaluation in Industry

In business management, the importance of performance evaluation has long been recognized as an integral part of the planning and control cycle of organizations. Performance evaluation is fundamental to the effective management of human resources, assisting in the development of individuals, improving organizational performance, and contributing to business planning [8]. Although several authors have undertaken efforts to develop more efficient methods to measure performance, there is still much criticism of traditional means of assessment. The incentive for short-term approaches, the narrow strategic focus on quality, the insufficient stimulus for adopting measures that encourage continuous improvement, and the scarcity of information about customer needs and the performance of the competition are some of the major identified deficiencies [9-12].

The assessment of organizational performance can be conceptualized as the management process used to build, establish, and disseminate knowledge through the identification, organization, measurement, and integration of the aspects that are necessary and sufficient to measure and manage the performance of the strategic objectives of a particular context of the organization [13]. There are several tools that can assess the performance of organizations; the Performance Pyramid System [14] and the Performance Prism [15] are well-known and widely disseminated methodologies in the academic and business environments. Although these methodologies have advantages in meeting aspects of the new decision-making context, they fail to meet the requirements regarding the identification, organization, measurement, and integration of criteria, as well as the generation of improvement actions [13].

According to the specialized literature, performance evaluation systems should include the use of financial and non-financial indicators, especially because many factors that can be evaluated are currently considered intangible. Furthermore, evaluation systems of organizational performance should consider the particularities of the context in which the assessment will occur, through the values and preferences of the decision-maker, and thus permit the connection between operational and strategic objectives. The historical use of performance evaluation criteria based solely on financial ratios is giving way to the use of parameters more appropriate to the new reality of organizations. In many situations, non-financial indicators such as the quality of products and services, customer satisfaction, cycle time, innovation, psychosocial environment, musculoskeletal health, and employee effectiveness are now being considered [16-19].

In many cases, however, ergonomics is still neglected by industry in the planning and implementation of actions aimed at measuring and improving business performance. The poor knowledge of managers about ergonomics and the difficulty in measuring its financial benefits discourage its inclusion as a criterion for evaluating and improving performance [20-22]. Ergonomics is found at the base of the performance improvement process, enabling increased productivity, improved product quality, reduced failures and, as a consequence, cost reduction. Generally, improvements in ergonomics bring measurable benefits for production processes, such as improvements in productivity, which can be translated into financial results. In a study on the drilling of steel plates, comparing the productivity of an ergonomically redesigned workstation with a conventional workstation showed increases of 22% in volume and 50% in the quality of the holes performed by the workers [23,24].

Considering advanced manufacturing technology (AMT), a widely used feature in modern industry, Maldonado et al. [25] proposed a new methodology for evaluating AMT that incorporates human factors and work ergonomics. Similarly, Craig et al. [26] emphasized that harm reduction programs in industry must go beyond the traditional work-related ergonomic risk factors and should include personal factors such as smoking, weight control, and alcohol abuse.




2.1. Measurement of Ergonomic Costs in Industry

In the literature, several studies were identified that measured ergonomic costs and their importance for the evaluation and improvement of organizational performance in various industry sectors. Sen and Yeow [27] conducted a study to ascertain whether ergonomic improvements can have their financial feasibility proven in developing countries. The study identified improvements that would have saved over US$ 500,000 in the first year, with improvement costs of less than 2% of that amount; thus, the ergonomic proposal was quite profitable for the organization. Also, in a study conducted in an office environment, outcome measures collected after a macroergonomic intervention showed significant positive effects on employees, such as improved communication and collaboration and greater efficiency (time and cost) of business processes [18]. Furthermore, Yeow and Sen [28] reported that ergonomic interventions performed on the production line of an electronic components factory solved problems such as delays in the search for materials, unproductive manual counting of components, obstructions during insertions, and the falling of components while the plate is transported on a conveyor belt. The results revealed the effectiveness of ergonomics applied to the manual component insertion (MCI) process, generating a considerable increase in productivity and annual revenue (US$ 4,223,736) and a reduction in defects and annual waste costs (US$ 956,136).

Several authors indicate that organizations are reluctant to adopt ergonomic interventions, particularly due to the difficulty in measuring their financial impacts [29]. Thus, the search by researchers for the economic measurement of ergonomic costs in industry is noticeable. Hendrick [20] argues that managers only financially support an ergonomic design when it is grounded in a cost-benefit analysis. To this end, the author reported 250 case studies of the benefits of ergonomics programs, including the reduction of musculoskeletal disorders, missed work days and worker compensation costs, and increases in productivity, quality, and business volume. The author concluded that the payback period of the ergonomic interventions was less than one year. Similarly, studies conducted with workers in the newspaper industry reported that the implementation of ergonomic designs to reduce musculoskeletal injuries had low average costs of US$376 and US$25. In the clothing industry, the economic evaluation of a participatory ergonomic process showed that ergonomic interventions can be economically advantageous, even when the changes are of low financial cost and low technological complexity [30].

Trask et al. [31] innovated by proposing the measurement of the costs related to the data-collection phase of studies involving ergonomics costs. The results showed that the collection method based on self-reporting by workers is the least costly. Similarly, the use of a virtual simulation tool for the simultaneous application of methods for evaluating ergonomic hazards in static and repetitive activities in the metal industry favored business decision-making, with a consequent reduction of costs and investments [32].

There are, therefore, serious difficulties in quantifying the financial loss to industry due to problems caused by the absence of appropriate ergonomic conditions. However, the economic benefits of ergonomics can easily outweigh its implementation costs, and the financial losses relate mainly to the reduction in the productivity of workers in the manufacturing process [33]. The main difficulties in measuring these losses are related to the cost and lack of information, the multifactorial nature of the problems, and the measurement methods [34,35]. Rickards and Putnam [22] provide an accounting-based methodology to identify potential productivity gains from the adoption of ergonomic improvements in a call center. The analyzed factors were absenteeism, overtime, costs of training new workers, processing time of missed calls, and lost productivity.

It is also possible to find studies related to the measurement of ergonomic costs in industry jobs in the Brazilian literature. A study performed in the shoe industry on the cost-benefit of an ergonomic intervention demonstrated that the gains achieved were higher than the intervention costs. The study also showed that, after the intervention, there was an 80% reduction in accidents, a 45.65% reduction in absenteeism, and improved production, with a 3% increase in productivity and a reduction of manufacturing wastes (reworking and wastes) to less than 1% [36].

3. Materials and Methods

3.1. Intervention instrument: ProKnow-C

The knowledge construction process critical to conducting a survey is unique in relation to the researcher and the boundaries imposed on the research. The context in which the researcher is inserted, and the availability of access to the means of dissemination of research, also influence this knowledge construction process [37,38]. As an intervention tool, the survey used a literature review process called ProKnow-C, proposed by Ensslin et al. [7], which, from a constructivist perspective, provides a structured process to build in the researcher the knowledge needed to begin research on the subject they want to investigate (Fig. 1).

ProKnow-C was conceived in the Laboratory of Multicriteria Methodologies to Support Decision Making (LabMCDA) of the Department of Production and Systems Engineering, Federal University of Santa Catarina (UFSC), Brazil, which since 1994 has investigated the subject of organizational performance evaluation as a tool for supporting decision-making, using the Multicriteria Decision Aiding-Constructivist methodology (MCDA-C). LabMCDA found, however, that the materials that informed the review of the state of the art in its publications could be questioned in relation to the alignment and relevance of their content with respect to the purpose of the research, as well as the completeness of the search for these materials.




Thus, in 2005, LabMCDA launched a line of research to fill this gap by developing a process that performs the search with bounded amplitude. The framework provided by the researchers oriented the structured process and its focus. Currently, ProKnow-C has several publications in journals, establishing itself as an important process for mapping knowledge, depending on the boundaries, perceptions, and motivations of the researcher [39-42]. Thus, the main attribute of ProKnow-C is its ability to become a tool to support the construction of knowledge in a particular research field, providing a structured and rigorous procedure that minimizes the use of randomness and subjectivity in the literature review process [43,44].

The entire process consists of four steps: Step 1, selection of a portfolio of articles on the research subject; Step 2, bibliometric analysis of the portfolio; Step 3, systemic analysis; and Step 4, definition of the research question and research objective. In this study, the first two steps of the process were developed: the selection of a portfolio of articles on the research subject and the bibliometric analysis of the portfolio. Hence, part of the necessary knowledge regarding the research topic has been built.

[Figure 1 (flowchart) summarizes the selection of the bibliographic portfolio; its boxes read:
- Definition of keywords to be used
- Definition of databases where the searches will be carried out
- Adherence test of the keywords by searching the databases and reading some articles aligned with the topic
- Exclusion of repeated articles
- Identification of alignment with the topic by reading the title
- Identification of the number of citations of each article as a measure of scientific relevance
- Identifying how current the article is
- Articles with proven scientific relevance / current articles without proven scientific relevance
- Setting of the cutoff point from which the currency of the article is sufficient to justify its remaining in the portfolio, even without proven scientific relevance
- Non-current articles without proven scientific relevance, but from authors listed in the group of articles with proven scientific relevance / other articles (discarded)
- Identification of alignment with the topic by reading the abstract
- Is the complete article available for reading?
- Identification of alignment with the topic by reading the complete article
- Setting of the representativeness in the portfolio, from which the article remains or is discarded
- Bibliographic portfolio on the research topic
(Legend: steps in which articles are discarded from the current portfolio.)]

Figure 1. Summary of the selection process of the bibliographic portfolio in the ProKnow-C knowledge construction methodology.
Source: Adapted from Tasca et al. [7].

4. Results and Discussion

4.1. Selection of the Bibliographic Portfolio

The sub-process of selecting the portfolio of articles is the initial step of the process and allows the researcher to select the articles related to the research topic, aligned according to his or her perception and the imposed boundaries. This selection is performed through three sub-steps: a) selection of articles in databases, making up the Gross Article Bank; b) filtration of the selected articles based on their alignment with the research; and c) the representativeness test of the bibliographic portfolio. As a result, there is a set of papers considered relevant by the researcher and aligned with the research topic, called the bibliographic portfolio [45].

The selection step gathered 759 publications, which were included in the initial portfolio called the Gross Article Bank. To gather the studies and make up the article bank, the EndNote X3 application was used as a bibliographic manager. Three lines of research were defined for the method's application. The first line is related to the central theme of the study, i.e., Performance Evaluation. The second and third lines are directly related to the study topic, i.e., Costs and Ergonomics. Keywords were also defined in that step for each research line, together with adherence tests for the keywords. For research line 1, six keywords were defined: Performance, Evaluation, Appraisal, Assessment, Management, and Measurement. For research line 2, one keyword was defined: Costs. For research line 3, Ergonomics was defined as the keyword.

Next, among the databases included in the Portal of Journals of the Coordination of Improvement of Higher Education Personnel (CAPES), we searched for databases aligned with the knowledge areas considered relevant to the research, namely Health Sciences, Engineering, and Multidisciplinary areas. Initially, 12 databases were identified: SciVerse SCOPUS, WILEY, Web of Knowledge-ISI, Pubmed, PROQUEST, EBSCO, Science Direct, Cambridge Journals Online, Emerald, IEEE Xplore, SAGE Journals Online, and Scielo. Out of these, six databases were selected for this study: SciVerse SCOPUS, WILEY, Web of Knowledge-ISI, Pubmed, PROQUEST, and EBSCO, which index the set of scientific journals most aligned with the research topic (Fig. 2). Research in these databases was conducted from March 15, 2013, to May 4, 2013.
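As an illustration only (the exact query syntax used in each database is not reported in the text), the combinations of the three keyword lines can be enumerated as follows; the boolean format shown is an assumption.

```python
# Hypothetical sketch: one query per combination of the three research lines.
from itertools import product

line1 = ["Performance", "Evaluation", "Appraisal",
         "Assessment", "Management", "Measurement"]
line2 = ["Costs"]
line3 = ["Ergonomics"]

queries = [f'"{a}" AND "{b}" AND "{c}"' for a, b, c in product(line1, line2, line3)]
print(len(queries), queries[0])   # 6 combinations; e.g. "Performance" AND ...
```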



[Figure 2 (bar chart): number of scientific articles aligned with the research topic found in the databases; the individual bars show 388, 188, 122, 29, 17 and 2 articles, and the axis labels list SciVerse SCOPUS, Web of Knowledge-ISI, Pubmed, PROQUEST, WILEY and EBSCO.]

Figure 2. Number of scientific articles aligned with the research topic found in the databases.
Source: The Authors

In the second step, using the bibliographic manager, the filtration of the Gross Article Bank, as identified in the databases, was conducted. We analyzed 759 articles, considering the following aspects: a) presence of repeated/redundant articles; b) alignment of the article titles with the topic; c) scientific recognition of the articles; d) alignment of the abstracts with the topic; and e) availability of the complete articles in the databases. After examining the articles, 14 were considered to be aligned with the research topic, and the filtering process was concluded. Next, the representativeness test of the bibliographic portfolio was performed to analyze the references cited in the articles of the gross portfolio. To achieve this, all references in the articles were surveyed, restricting the temporal space of the study from 2000 to 2013 and to articles published in journals. Again, the bibliographic manager EndNote X3 was used for the composition of the references of the bibliographic portfolio. Then, the cited articles were exported to a spreadsheet to determine the filtering and representativeness of the citations. After the worksheet was assembled and organized, a new query was performed in Google Scholar to identify the number of citations of each article. After this step, the spreadsheet was reorganized and its content classified by number of citations in decreasing order, thus establishing the degree of representativeness of each article, in %, compared with the total number of references (a minimal sketch of this ranking is given after Table 1). Two articles by well-known authors were identified, with high numbers of citations (91 and 43, respectively) and aligned with the topic, and these were incorporated into the bibliographic portfolio, totaling 16 articles: 14 primary articles from the bibliographic portfolio and 2 selected from the representativeness test, as shown in Table 1.

Table 1. Selected articles for the bibliographic portfolio of the ProKnow-C methodology.
Authors | Article Title | No. of citations | Year
Robertson MM, Huang YH, O'Neill MJ, Schleifer LM | Flexible workspace design and ergonomics training: Impacts on the psychosocial work environment, musculoskeletal health, and work effectiveness among knowledge workers | 163 | 2008
Hendrick HW | Determining the cost-benefits of ergonomics projects and factors that lead to their success | 91 | 2003
Yeow PHP, Nath Sen R | Quality, productivity, occupational health and safety, and cost effectiveness of ergonomic improvements in the test workstations of an electronic factory | 43 | 2003
Rosecrance JC, Cook TM | The use of participatory action research and ergonomics in the prevention of work-related musculoskeletal disorders in the newspaper industry | 41 | 2000
Christmansson M, Medbo L, Hansson GÅ, Ohlsson K, Unge Byström J, Möller T, Forsman M | A case study of a principally new way of materials kitting - An evaluation of time consumption and physical workload | 36 | 2002
Yeow PHP, Nath Sen R | Productivity and quality improvements, revenue increment, and rejection cost reduction in the manual component insertion lines through the application of ergonomics | 29 | 2006
Driessen MT, Anema JR, Proper KI, Bongers PM, Beek AJVD | Participatory Ergonomics to prevent low back and neck pain among workers: Design of a randomized controlled trial to evaluate the (cost-)effectiveness | 2 | 2008
Craig BN, Congleton JJ, Kerk CJ, Amendola AA, Gaines WG | Personal and non-occupational risk factors and occupational injury/illness | 23 | 2006
Herbert R, Dropkin J, Warren N, Sivin D, Doucette J, Kellogg L, Bardin J, Kass D, Zoloth S | Impact of a joint labor-management ergonomics program on upper extremity musculoskeletal symptoms among garment workers | 22 | 2001
Lindegård A, Karlberg C, Wigaeus Tornqvist E, Toomingas A, Hagberg M | Concordance between VDU-users' ratings of comfort and perceived exertion with experts' observations of workplace layout and working postures | 19 | 2005
Spielholz P, Griffith J, Davis G | Physical risk factors and controls for musculoskeletal disorders in construction trades | 13 | 2006
Abdel-Malek K, Yu W, Yang J, Nebel K | A mathematical method for ergonomic-based design: Placement | 12 | 2004
Guimarães LM, Ribeiro JLD, Renner JS | Cost-benefit analysis of a socio-technical intervention in a Brazilian footwear company | 02 | 2012
Trask C, Mathiassen SE, Wahlstrom J, Heiden M, Rezagholi M | Modeling costs of exposure assessment methods in industrial environments | 01 | 2012
Maldonado A, García JL, Alvarado A, Balderrama CO | A hierarchical fuzzy axiomatic design methodology for ergonomic compatibility evaluation of advanced manufacturing technology | 00 | 2013
Rickards J, Putnam C | A pre-intervention benefit-cost methodology to justify investments in workplace health | 00 | 2012
Source: The Authors
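The citation ranking described above can be sketched as follows (illustrative code, ours; the two sample entries are the citation counts of the articles incorporated through the representativeness test).

```python
# Rank cited references by Google Scholar citations and express each one's
# representativeness as a percentage of the total.
def representativeness(citations):
    total = sum(citations.values())
    ranked = sorted(citations.items(), key=lambda kv: kv[1], reverse=True)
    return [(ref, c, 100.0 * c / total) for ref, c in ranked]

sample = {"Hendrick (2003)": 91, "Yeow and Sen (2003)": 43}   # from the text
for ref, c, pct in representativeness(sample):
    print(f"{ref}: {c} citations ({pct:.1f}%)")
```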

The next step, the bibliometric analysis of the bibliographic portfolio, consists of applying statistical methods to the selected articles to quantify the existing information and map the structure of knowledge of a particular scientific field [46]. Five aspects were considered in this analysis step: a) relevance of the journals; b) scientific recognition of the articles; c) most prominent authors; d) most used keywords; and e) analysis of the impact factor of the journals in the bibliographic portfolio [47,42]. Among the 16 publications of the bibliographic portfolio, the journal Applied Ergonomics is highlighted, with five publications, followed by the International Journal of Industrial Ergonomics, with four publications. Both journals are geared toward ergonomics: the first is focused on the use of ergonomics in various sectors, including the industrial sector, and the second has its editorial content geared toward ergonomics and its use in industry.




[Figure 3 (bar chart): articles per journal in the bibliographic portfolio:
Applied Ergonomics - 5
International Journal of Industrial Ergonomics - 4
American Journal of Industrial Medicine - 1
Applied Occupational and Environmental Hygiene - 1
BMC Musculoskeletal Disorders - 1
International Journal of Advanced Manufacturing Technology - 1
International Journal of Workplace Health Management - 1
Journal of Construction Engineering and Management - 1
Work: A Journal of Prevention, Assessment & Rehabilitation - 1]

Figure 3. Number of selected articles per journal for the bibliographic portfolio of the ProKnow-C knowledge construction methodology.
Source: The Authors

economically measure the costs and benefits of such interventions. As for the authors listed in the articles’ references, 334 scholars were identified who contribute to the scientific community in some way. Of these, P. Vink is highlighted, with 8 papers in the references, though none is exactly aligned with the research topic. Furthermore, W. Karwowski is highlighted with 6 studies in the references. The most relevant articles selected by this methodology emphasize aspects related to the ergonomic design of workstations and their adequacy for workers, the psychosocial work environment, musculoskeletal health, and work-related accidents. After the application of ergonomic concepts, these aspects can evolve significantly, often leading to benefits that far outweigh their implementation costs. The cited studies are characterized by the common goal of providing managers with information for comparison of the implementation costs of ergonomic measures with their financial impacts and the enhanced quality of life of the employees. In the industrial sector, measuring the ergonomic costs and benefits derived from improved ergonomic conditions in the workplace is a difficult task. Manager resistance and the difficulty in gathering data and economic arguments to justify a financial investment in more adequate working conditions pose major obstacles to the implementation of ergonomic projects. However, the results obtained by studies prove that improved ergonomics can result in reduced musculoskeletal discomfort, tighter control of work activity, a greater sense of community, improved communication and collaboration, and efficiency in business processes. Consequently, successful ergonomic projects result in cost reduction for industrial organizations. Conclusion Our methodology allowed for the identification that scientific production in the field of ergonomic costs, as a criterion for evaluating and improving organizational performance in industry, is a study field that has recently received increasing attention from scholars. In agreement with the finding that it is a fairly unexplored area in terms of scientific publications, it was found that the articles in the bibliographic portfolio do not show such a high number of citations, with 68% of articles from the bibliographic portfolio in the 12–91 citation range. Recently, the scientific production has shown a great increase, and signs point to further increases in the coming years. At the same time, when there is a larger amount and better quality of information available to the researchers, they are unable to make use of all of the available information. Therefore, it is necessary to selectively choose the content to be considered in their research. The work of establishing a selection criterion and following a rigorous process in the search for relevant information can be difficult. Thus, there is a need to use an objective methodology for selecting bibliographic references for scientific research. This study conducted an investigation and disclosure of relevant publications about ergonomic costs as a criterion for




evaluating and improving organizational performance in industry, supported by the ProKnow-C intervention instrument. Considering the limitations of the study (restricted to articles published in scientific journals available in full in the databases of the CAPES Portal), and based on the knowledge generated during the investigative process, we believe that the results shown may contribute to the scientific community through the presentation of a structured process to identify and select relevant articles aligned with the theme of ergonomic costs as a criterion for evaluating and improving organizational performance in industry, as well as through the disclosure of the characteristics of the publications selected by the operationalization of the ProKnow-C intervention instrument.

Therefore, the following points are suggested for future research: a) replication of the process in other contexts, including conference proceedings, theses, dissertations, and books, as well as searches in other databases available on the CAPES Portal; and b) continuation of this research through the development of the two missing steps of ProKnow-C: the systemic analysis (content analysis of the bibliographic portfolio) and the identification of opportunities for scientific research, with the suggestion of research questions and objectives.

References

References

[1] Dul, J. and Neumann, W.P., Ergonomics contributions to company strategies. Applied Ergonomics, 40 (4), pp. 745-752, 2009. DOI: 10.1016/j.apergo.2008.07.001
[2] Falck, A.C. and Rosenqvist, M., What are the obstacles and needs of proactive ergonomics measures at early product development stages? An interview study in five Swedish companies. International Journal of Industrial Ergonomics, 42 (5), pp. 406-415, 2012. DOI: 10.1016/j.ergon.2012.05.002
[3] Thun, J.H., Lehr, C.B. and Bierwirth, M., Feel free to feel comfortable: An empirical analysis of ergonomics in the German automotive industry. International Journal of Production Economics, 133 (2), pp. 551-561, 2011. DOI: 10.1016/j.ijpe.2010.12.017
[4] Asadzadeh, S.M., Azadeh, A., Negahban, A. and Sotoudeh, A., Assessment and improvement of integrated HSE and macro-ergonomics factors by fuzzy cognitive maps: The case of a large gas refinery. Journal of Loss Prevention in the Process Industries, 26 (6), pp. 1015-1026, 2013. DOI: 10.1016/j.jlp.2013.03.007
[5] Guimarães, L.D.M., Ribeiro, J.L.D., Renner, J.S. and Oliveira, P.A.B., Worker evaluation of a macroergonomic intervention in a Brazilian footwear company. Applied Ergonomics, 45 (4), pp. 923-935, 2014. DOI: 10.1016/j.apergo.2013.11.007
[6] Scott, P.A., Global inequality, and the challenge for ergonomics to take a more dynamic role to redress the situation. Applied Ergonomics, 39 (4), pp. 495-499, 2008. DOI: 10.1016/j.apergo.2008.02.014
[7] Tasca, J.E., Ensslin, L., Ensslin, S.R. and Alves, M.B.M., An approach for selecting a theoretical framework for the evaluation of training programs. Journal of European Industrial Training, 34 (7), pp. 631-655, 2010. DOI: 10.1108/03090591011070761
[8] Koufteros, X., Verghese, A.J. and Lucianetti, L., The effect of performance measurement systems on firm performance: A cross-sectional and a longitudinal study. Journal of Operations Management, 32 (6), pp. 313-336, 2014. DOI: 10.1016/j.jom.2014.06.003
[9] Schmenner, R.W., Escaping the black holes of cost accounting. Business Horizons, 31 (1), pp. 66-72, 1988. DOI: 10.1016/0007-6813(88)90043-2
[10] Gunasekaran, A., Patel, C. and McGaughey, R.E., A framework for supply chain performance measurement. International Journal of Production Economics, 87 (3), pp. 333-347, 2004. DOI: 10.1016/j.ijpe.2003.08.003
[11] Narasimhan, R., Narayanan, S. and Srinivasan, R., An investigation of justice in supply chain relationships and their performance impact. Journal of Operations Management, 31 (5), pp. 236-247, 2013. DOI: 10.1016/j.jom.2013.05.001
[12] Ahmed, I., Sultana, I., Paul, S.K. and Azeem, A., Employee performance evaluation: A fuzzy approach. International Journal of Productivity and Performance Management, 62 (7), pp. 718-734, 2013. DOI: 10.1108/IJPPM-01-2013-0013
[13] Ensslin, L. and Ensslin, S.R., Construction process of indicators for performance assessment (Conference). In: V Discussion Cycle: Evaluation of Public Policies, Department of Planning (SEPLAN/SC), Florianópolis, Brazil, 2009.
[14] Lynch, R.L. and Cross, K.F., Measure Up! Yardsticks for Continuous Improvement. Cambridge: Blackwell, 1991.
[15] Neely, A., Adams, C. and Kennerley, M., The Performance Prism: The Scorecard for Measuring and Managing Stakeholder Relationships. Financial Times Prentice Hall, London, 2002.
[16] Yeo, R., The tangibles and intangibles of organizational performance. Team Performance Management, 9 (8), pp. 199-204, 2003. DOI: 10.1108/13527590310507453
[17] Garengo, P., Biazzo, S. and Bititci, U.S., Performance measurement systems in SMEs: A review for a research agenda. International Journal of Management Reviews, 7, pp. 25-47, 2005. DOI: 10.1111/j.1468-2370.2005.00105.x
[18] Robertson, M.M., Huang, Y.H., O'Neill, M.J. and Schleifer, L.M., Flexible workspace design and ergonomics training: Impacts on the psychosocial work environment, musculoskeletal health, and work effectiveness among knowledge workers. Applied Ergonomics, 39, pp. 482-494, 2008. DOI: 10.1016/j.apergo.2008.02.022
[19] Edwards, K. and Jensen, P.L., Design of systems for productivity and well-being. Applied Ergonomics, 45 (1), pp. 26-32, 2014. DOI: 10.1016/j.apergo.2013.03.022
[20] Hendrick, H.W., Determining the cost-benefits of ergonomics projects and factors that lead to their success. Applied Ergonomics, 34, pp. 419-427, 2003. DOI: 10.1016/S0003-6870(03)00062-0
[21] Silva, M.P., Amaral, F.G., Mandagara, H. and Leso, B.H., Difficulties in quantifying financial losses that could be reduced by ergonomic solutions. Human Factors and Ergonomics in Manufacturing & Service Industries, 2012. DOI: 10.1002/hfm.20393
[22] Rickards, J. and Putnam, C., A pre-intervention benefit-cost methodology to justify investments in workplace health. International Journal of Workplace Health Management, 5 (3), pp. 210-219, 2012. DOI: 10.1108/17538351211268863
[23] Mafra, J.R.D., Metodologia de custeio para a ergonomia. Revista Contabilidade & Finanças, 17 (42), pp. 77-91, 2006.
[24] Das, B., Shikdar, A.A. and Winters, T., Workstation redesign for a repetitive drill press operation: A combined work design and ergonomics approach. Human Factors and Ergonomics in Manufacturing, 17, pp. 395-410, 2007. DOI: 10.1002/hfm.20060
[25] Maldonado, A., García, J.L., Alvarado, A. and Balderrama, C.O., A hierarchical fuzzy axiomatic design methodology for ergonomic compatibility evaluation of advanced manufacturing technology. The International Journal of Advanced Manufacturing Technology, 66 (1-4), pp. 171-186, 2013. DOI: 10.1007/s00170-012-4316-8
[26] Craig, B.N., Congleton, J.J., Kerk, C.J., Amendola, A.A. and Gaines, W.G., Personal and non-occupational risk factors and occupational injury/illness. American Journal of Industrial Medicine, 49 (4), pp. 249-260, 2006. DOI: 10.1002/ajim.20290
[27] Sen, R.N. and Yeow, P.H., Cost effectiveness of ergonomic redesign of electronic motherboard. Applied Ergonomics, 34 (5), pp. 453-463, 2003. DOI: 10.1016/S0003-6870(03)00065-6
[28] Yeow, P.H.P. and Sen, R.N., Productivity and quality improvements, revenue increment, and rejection cost reduction in the manual component insertion lines through the application of ergonomics. International Journal of Industrial Ergonomics, 36, pp. 367-377, 2006. DOI: 10.1016/j.ergon.2005.12.008
[29] Karsh, B.T., Newenhouse, A.C. and Chapman, L.J., Barriers to the adoption of ergonomic innovations to control musculoskeletal disorders and improve performance. Applied Ergonomics, 44 (1), pp. 161-167, 2013. DOI: 10.1016/j.apergo.2012.06.007
[30] Tompa, E., Dolinschi, R. and Natale, J., Economic evaluation of a participatory ergonomics intervention in a textile plant. Applied Ergonomics, 44 (3), pp. 480-487, 2013. DOI: 10.1016/j.apergo.2012.10.019
[31] Trask, C., Mathiassen, S.E., Wahlström, J., Heiden, M. and Rezagholi, M., Modeling costs of exposure assessment methods in industrial environments. Work: A Journal of Prevention, Assessment and Rehabilitation, 41, pp. 6079-6086, 2012. DOI: 10.1080/00140139408963711
[32] Garcia-Garcia, M., Sanchez-Lite, A., Camacho, A. and Domingo, R., Analysis of postural assessment methods and virtual simulation tools into manufacturing engineering. DYNA, 80 (181), pp. 5-15, 2013.
[33] Bolis, I., Brunoro, C.M. and Sznelwar, L.I., Mapping the relationships between work and sustainability and the opportunities for ergonomic action. Applied Ergonomics, 45 (4), pp. 1225-1239, 2014. DOI: 10.1016/j.apergo.2014.02.011
[34] Pereira-da Silva, M., Amaral, F.G., Mandagara, H. and Leso, B.H., Difficulties in quantifying financial losses that could be reduced by ergonomic solutions. Human Factors and Ergonomics in Manufacturing & Service Industries, 2012. DOI: 10.1002/hfm.20393
[35] Looze, M.P., Vink, P., Koningsveld, E.A., Kuijt-Evers, L. and Van Rhijn, G.J., Cost-effectiveness of ergonomic interventions in production. Human Factors and Ergonomics in Manufacturing & Service Industries, 20 (4), pp. 316-323, 2010. DOI: 10.1002/hfm.20223
[36] Guimarães, L.D.M., Ribeiro, J.L.D. and Renner, J.S., Cost-benefit analysis of a socio-technical intervention in a Brazilian footwear company. Applied Ergonomics, 43, pp. 948-957, 2012. DOI: 10.1016/j.apergo.2012.01.003
[37] Ensslin, L., ProKnow-C, knowledge development process-constructivist: Technical process with patent-pending registration with the INPI. Brazil, 2010.
[38] Ensslin, L., Teaching material presented in the discipline: Performance evaluation. Graduate Program in Production Engineering, Federal University of Santa Catarina, Florianópolis, Brazil, 2011.
[39] Bortoluzzi, S.C., Ensslin, S.R., Ensslin, L. and Valmorbida, S.M.I., Performance assessment in networks of small and medium enterprises: State of the art for the boundaries imposed by the researcher. Electronic Journal of Business & Strategy, 4, pp. 202-222, 2011.
[40] Rosa, F.S., Ensslin, L. and Lunkes, R.J., Management of environmental disclosure: A study about the potential and opportunity of the subject. Engenharia Sanitária e Ambiental, 16, pp. 157-166, 2011.
[41] Bruna, E.D., Ensslin, L. and Ensslin, S.R., Selection and analysis of a portfolio of articles about performance evaluation in the supply chain. Production Management, Operations and Systems, 7, pp. 113-125, 2012.
[42] Ensslin, L., Ensslin, S.R. and Pacheco, G.C., A study about safety in soccer stadiums based on the analysis of the international literature. Perspectives in Information Science, 17, pp. 71-91, 2012.
[43] Afonso, M.H.F., de Souza, J.V., Ensslin, S.R. and Ensslin, L., How to build knowledge about the research topic? Application of the ProKnow-C process in the literature search about the evaluation of sustainable development. Journal of Environmental and Social Management, 5, pp. 47-62, 2012.
[44] Rosa, F.S., Ensslin, S.R., Ensslin, L. and Lunkes, R.J., Management environmental disclosure: A constructivist case. Management Decision, 50, pp. 1117-1136, 2012. DOI: 10.1108/00251741211238364
[45] Lacerda, R.T.O., Ensslin, L. and Ensslin, S.R., A bibliometric analysis of the literature about strategy and performance evaluation. Management and Production, 19, pp. 59-78, 2012.
[46] Caldas, M.P. and Tinoco, T., Research in human resource management in the 1990s: A bibliometric study. Journal of Business Administration, 44, pp. 12-18, 2004.
[47] Bortoluzzi, S.C., Evaluation of the financial and economic performance of the Marel Furniture Industry Company S.A.: The contribution of the multicriteria methodology for constructivist decision aid (MCDA-C). MSc. Dissertation, UFSC, Florianópolis, Brazil, 2009.

S.D. dos Santos, is a professor in the accounting course at the Federal University of Mato Grosso do Sul, Brazil, received an MSc. in Management and Agroindustrial Production in 2012 and is currently a doctoral student in Industrial Engineering at the Federal University of Santa Catarina (UFSC), Brazil.
A.R.P. Moro, holds a PhD in Human Movement Science from the Federal University of Santa Maria (UFSM), Brazil, and is currently an associate professor in the Department of Physical Education at UFSC. He is a founding partner of the Brazilian Society of Biomechanics (SBB) and a Senior Ergonomist of the Brazilian Association of Ergonomics (ABERGO).
L. Ensslin, holds a PhD in Production Engineering from Lancaster University, UK, and a PhD in Industrial and Systems Engineering from the University of Southern California, USA.




Study of the behavior of sugarcane bagasse submitted to cutting

Nelson Arzola a & Joyner García b

a Department of Mechanical and Mechatronics Engineering, Universidad Nacional de Colombia, Bogotá, Colombia. narzola@unal.edu.co
b Universidad Nacional de Colombia, Bogotá, Colombia. jdgarciava@unal.edu.co

Received: June 2nd, 2014. Received in revised form: January 27th, 2015. Accepted: February 24th, 2015.

Abstract
The aim of this work was to study the behavior of sugarcane bagasse submitted to cutting, as a function of its moisture content, the angle of the blade edge and the cutting speed. The specific cutting energy and the peak cutting force were measured using an experimental facility developed for this series of experiments. The results of the full factorial experimental design were analyzed by means of a statistical analysis of variance (ANOVA). The response surfaces and empirical models for the specific cutting energy and the peak cutting force were obtained using statistical analysis software. A low angle of the blade edge and a low moisture content are, in this order, the most important experimental factors in determining a low specific cutting energy and a low peak cutting force, respectively. The best cutting conditions are achieved for an angle of the blade edge of 20.8° and a moisture content of 10% (w.b.). The results of this work could contribute to the optimal design of sugarcane bagasse pre-treatment systems. Keywords: sugarcane bagasse; specific cutting energy; experimental model; biomass; pre-treatment.

Estudio del comportamiento del bagazo de caña de azúcar sometido a corte Resumen El objetivo de este trabajo fue estudiar el comportamiento del bagazo de caña de azúcar sometido a corte, como función de su contenido de humedad, ángulo de filo de la cuchilla y la velocidad de corte. La energía de corte específica y la fuerza de corte pico fueron medidas empleando un dispositivo experimental desarrollado para estos experimentos. Se realizó el análisis de los resultados del diseño factorial completo utilizando un análisis de varianza (ANOVA). Las superficies de respuestas y los modelos empíricos para la energía de corte específica y la fuerza de corte pico se obtuvieron mediante software estadístico. Un ángulo de filo pequeño junto con una baja humedad son, en ese orden, los factores experimentales más importantes para lograr bajos valores de energía de corte específica y de fuerza de corte pico respectivamente. Las mejores condiciones de corte se alcanzan para un ángulo de filo de la cuchilla de 20.8° y un contenido de humedad de 10% b. h. Los resultados de este trabajo pueden contribuir al diseño óptimo de sistemas para el pre-tratamiento de bagazo. Palabras clave: bagazo de caña de azúcar; energía de corte especifica; modelo experimental, biomasa; pre-tratamiento.

1. Introduction

According to Larsson et al. [1], pelletized biomass is rapidly becoming an important renewable source of energy. Several researchers have found that the mechanical integrity of densified biomass is generally better for moisture contents below 15% (w.b.) and fiber sizes below 19 mm [2]. Densification has attracted significant interest around the world as a technique for the utilization of agricultural and forest residues as an energy source [3],

and pellets/briquettes production has grown rapidly in Europe, North America and China in the last few years. Mechanical densification of biomass into fuel pellets/briquettes has been shown to significantly reduce storage and transportation costs [4]. The cutting process is one of the most important steps in biomass preparation prior to densification. This stage helps to homogenize the raw material and therefore facilitates handling, feeding and filling in the briquetting equipment. Nowadays, for the by-products industry, a better knowledge of material behaviour during

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 171-175. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.43816



cutting is needed for economic and productive reasons. Biomass machinery and tool manufacturers require reliable information on the main factors influencing biomass cutting, i.e., the specific cutting energy and the peak cutting force [5-7]. The sugarcane industry generates large amounts of biomass, such as bagasse and straw, which can be used for power or thermal energy generation and for other engineering applications [8]. Processing the bagasse into a densified fuel through pelletisation would be an economically attractive option for this by-product. It is common practice to apply pre-treatments to the biomass before pelletising, such as cutting and drying. From an engineering viewpoint, few papers on the biomass cutting process have been published, especially with respect to sugarcane biomass [9]. Likewise, the research carried out on the pre-treatment of sugarcane bagasse and on cutting machinery for bio-briquette production is very limited at this time [10-12]. Habib et al. [13] categorized the different parameters affecting the performance of the cutting process. They showed that the main parameter of the cutting tool is the knife-edge angle and that of the biomass is the moisture content, whereas, for the machine operating conditions, the main parameter is the cutting speed.

The scope of this work was to assess the suitability of the cutting process for sugarcane bagasse. Extensive cutting tests were carried out in order to assess the physical variables involved: the moisture content, the angle of the blade edge and the cutting speed. The specific cutting energy and the peak cutting force are both important parameters describing the cutting process; for that reason, the response surfaces and empirical expressions for these parameters were obtained.

2. Material and method

2.1. Sugarcane bagasse biomass

The sugarcane bagasse was obtained from agricultural lands in the Cundinamarca Department, Colombia, in January 2013 and stored for two weeks in the laboratory. The collection process was conducted carefully in order to avoid contamination of the samples with other process materials. The bagasse came from 18-month-old plantations cultivated using traditional Colombian processes.

2.2. Sample preparation

The samples were completely dehydrated in a muffle furnace at a temperature of 80°C for 10 hours. Once completely dried, they were weighed and hydrated to moisture percentages on a wet basis of 10%, 20% and 30% (the required water masses can be computed as in the sketch below), and left in a sealed container for three days. The determination of moisture content was carried out according to the EN 14774-3 Technical Standard [14]. Simultaneously, a mechanical mixing process was carried out at regular time intervals to guarantee the even distribution of the water. Each experimental treatment is labelled in the following way, W##A##V#, where: W [moisture content, %], A [angle of the blade edge, °] and V [cutting speed, m s-1].
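As a small worked example of the rehydration step in Section 2.2, the helper below computes the water mass needed to bring oven-dry fiber to a target wet-basis moisture content; the function name and the sample masses are illustrative assumptions, not values reported in the paper.

# Sketch: water mass needed to rehydrate oven-dry bagasse to a target
# moisture content on a wet basis (w.b.). Names and values are illustrative.
def water_to_add(dry_mass_g: float, target_wb: float) -> float:
    """Moisture (w.b.) = m_water / (m_dry + m_water); solve for m_water."""
    if not 0 <= target_wb < 1:
        raise ValueError("target_wb must be a fraction in [0, 1)")
    return dry_mass_g * target_wb / (1 - target_wb)

# e.g. 100 g of dry fiber requires about 11.1, 25.0 and 42.9 g of water
# for the 10%, 20% and 30% w.b. treatments used in the experiment:
for wb in (0.10, 0.20, 0.30):
    print(f"{wb:.0%}: {water_to_add(100, wb):.1f} g")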

Table 1. Experimental design arrangement.
N | Code | Moisture content W (%) | Angle of the blade edge α (°) | Cutting speed V (m s-1) | Number of replicates
1 | W10A20V1 | 10 | 20 | 2.3 | 7
2 | W10A20V2 | 10 | 20 | 3.4 | 7
3 | W10A20V3 | 10 | 20 | 4.5 | 7
4 | W10A40V1 | 10 | 40 | 2.3 | 7
5 | W10A40V2 | 10 | 40 | 3.4 | 7
6 | W10A40V3 | 10 | 40 | 4.5 | 7
7 | W10A60V1 | 10 | 60 | 2.3 | 7
8 | W10A60V2 | 10 | 60 | 3.4 | 7
9 | W10A60V3 | 10 | 60 | 4.5 | 7
10 | W20A20V1 | 20 | 20 | 2.3 | 7
11 | W20A20V2 | 20 | 20 | 3.4 | 7
12 | W20A20V3 | 20 | 20 | 4.5 | 7
13 | W20A40V1 | 20 | 40 | 2.3 | 7
14 | W20A40V2 | 20 | 40 | 3.4 | 7
15 | W20A40V3 | 20 | 40 | 4.5 | 7
16 | W20A60V1 | 20 | 60 | 2.3 | 7
17 | W20A60V2 | 20 | 60 | 3.4 | 7
18 | W20A60V3 | 20 | 60 | 4.5 | 7
19 | W30A20V1 | 30 | 20 | 2.3 | 7
20 | W30A20V2 | 30 | 20 | 3.4 | 7
21 | W30A20V3 | 30 | 20 | 4.5 | 7
22 | W30A40V1 | 30 | 40 | 2.3 | 7
23 | W30A40V2 | 30 | 40 | 3.4 | 7
24 | W30A40V3 | 30 | 40 | 4.5 | 7
25 | W30A60V1 | 30 | 60 | 2.3 | 7
26 | W30A60V2 | 30 | 60 | 3.4 | 7
27 | W30A60V3 | 30 | 60 | 4.5 | 7
Source: Self-elaboration

2.3. Design of experiment

A systematic study was conducted in which three factors were varied according to a three-level full factorial design. The factors were the angle of the blade edge, the cutting speed and the bagasse moisture content, giving 27 experimental treatments. Each experiment was replicated seven times, for a total of 189 experimental units. Table 1 shows the experimental design arrangement. An analysis of the results of the full factorial experimental design using a statistical analysis of variance (ANOVA) was performed; an F test was conducted and the results were evaluated at a 95% confidence level [15]. The response surfaces and empirical models for the specific cutting energy and the peak cutting force were obtained using a statistical analysis program developed in Matlab (outlined in the sketch after Section 2.4).

2.4. Cutting test

The energy and force required for cutting may vary depending on the raw material moisture content, the blade geometry and the speed of the cutting process. Laboratory experiments were conducted to evaluate the performance of a single-edge knife with different edge angles (20°, 40° and 60°) and cutting speeds (2.3 m/s, 3.4 m/s and 4.5 m/s) under controlled conditions. Most of the agricultural machinery employed for cutting uses blades with edge angles in this range. On the other hand, the cutting speed range was limited by the operation of the experimental facility; nevertheless, the cutting speed range of the experiments matches the low range of speeds found in several agricultural cutting machines.
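A minimal sketch of the arrangement of Section 2.3 follows: it generates the 27 treatments of Table 1 with their seven replicates and outlines the second-order model behind the ANOVA. File and column names are assumptions, and the original analysis was implemented in Matlab rather than Python.

# Sketch: build the 3x3x3 full factorial design (Table 1) and outline the
# quadratic model behind the ANOVA of Tables 2-3. Names are illustrative.
import itertools
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

W_LEVELS = [10, 20, 30]       # moisture content, % w.b.
A_LEVELS = [20, 40, 60]       # angle of the blade edge, degrees
V_LEVELS = [2.3, 3.4, 4.5]    # cutting speed, m/s
REPLICATES = 7

design = pd.DataFrame(
    [(w, a, v) for w, a, v in itertools.product(W_LEVELS, A_LEVELS, V_LEVELS)
     for _ in range(REPLICATES)],
    columns=["Wt", "alpha", "Vc"])
design["code"] = [f"W{w}A{a}V{V_LEVELS.index(v) + 1}"
                  for w, a, v in design.itertuples(index=False)]
assert len(design) == 189     # 27 treatments x 7 replicates

# With the measured response merged in (hypothetical column 'ec'), the
# second-order model mirrors the ANOVA terms A, B, C, AA, AB, AC, BB, BC, CC:
# data = design.assign(ec=...)
# fit = smf.ols("ec ~ Vc + alpha + Wt + I(Vc**2) + I(alpha**2) + I(Wt**2)"
#               " + Vc:alpha + Vc:Wt + alpha:Wt", data=data).fit()
# print(sm.stats.anova_lm(fit, typ=2))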




The cuts were made by a lab-scale cutting unit, shown in Fig. 1, able to control the process parameters. The facility consists of a structure, a biomass sample holder and a pendulum with a blade holder. The bagasse samples are fixed as cantilevers in the biomass sample holder and a mobile single blade executes the cut. An encoder fixed on the pendulum rotation axis allows the measurement of the initial and final angular positions; a data acquisition system then calculates the energy consumed during cutting. The energy consumed in shearing a unit area of biomass is called the specific cutting energy. This response variable is calculated as the mechanical energy consumed during cutting divided by the cross-section of each biomass specimen. The mechanical energy was measured by means of the difference between the angular positions of the pendulum, while the cross-section of the biomass specimen was measured by digital image processing (CAD software). Furthermore, the force involved in the cutting process is measured with a precision load cell located in the knife holder. The maximum force reached in the plane of cut is called the peak cutting force. Fig. 2 shows a biomass specimen secured in the sample holder. The biomass specimen is an array of circularly shaped bagasse fibers, which are joined at the base with adhesive tape. The cut is performed in the direction transverse to the fibers. The length of the specimens was 100 mm and the diameter was 10 ± 2 mm.

Figure 1. Experimental facility for the determination of specific cutting energy and peak cutting force: a) rotary encoder; b) data acquisition system; c) shaft; d) pendulum; e) load cell; f) blade holder; g) biomass sample holder; h) blade. Source: self-elaboration.
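The pendulum energy balance just described can be made concrete with a short sketch: the cutting energy is the potential-energy drop between the release and follow-through angles read by the encoder, and dividing by the specimen cross-section gives the specific cutting energy. All names and numbers below are illustrative assumptions, not values from the experiment.

# Sketch: energy bookkeeping for the pendulum rig of Fig. 1.
import math

G = 9.81  # gravitational acceleration, m/s^2

def cutting_energy(m_kg, arm_m, theta_release, theta_followthrough):
    """Energy absorbed by the cut (J); angles in degrees from the vertical."""
    h0 = arm_m * (1 - math.cos(math.radians(theta_release)))
    h1 = arm_m * (1 - math.cos(math.radians(theta_followthrough)))
    return m_kg * G * (h0 - h1)

def specific_cutting_energy(energy_j, cross_section_mm2):
    """Specific cutting energy in J/mm^2, as in eq. (1)."""
    return energy_j / cross_section_mm2

# Made-up example: 4 kg pendulum, 0.5 m arm, released at 90 deg and
# swinging through to 72 deg after cutting a 10 mm diameter specimen.
E = cutting_energy(4.0, 0.5, 90.0, 72.0)
print(specific_cutting_energy(E, math.pi * (10 / 2) ** 2))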

Figure 2. Biomass specimen secured in the sample holder. Source: self-elaboration.

3. Results and discussion

3.1. Specific cutting energy results

An analysis of the results of the full factorial experimental design using a statistical analysis of variance (ANOVA) was performed. Only the angle of the blade edge and the moisture content are statistically significant for the specific cutting energy achieved in the cutting process at the 95% confidence level. Table 2 shows the degree of significance of the coefficients for the factors and their interactions. The empirical model for the behaviour of the specific cutting energy as a function of the angle of the blade edge and the moisture content of the sugarcane biomass, obtained by applying the statistical software to the experimental data, is:

ec = 0.242135 + 0.00287758 Wt - 0.00901554 α + 0.000119362 α² + 0.000150395 α Wt, R² = 68.6%   (1)

Where: ec: specific cutting energy (J mm-2); α: angle of the blade edge (°); Wt: moisture content of the sugarcane bagasse (%).

Fig. 3 shows the main effects graph for the specific cutting energy. The main effects graph is used to examine differences between level means for the three experimental factors. There is a main effect when different levels of an experimental factor affect the response differently. The main effects test is suitable for inspecting differences amongst the levels of a single experimental factor, averaging over the other factors; the steeper the slope of the curve, the greater the magnitude of the main effect. Fig. 4 shows the response surface for the specific cutting energy. It can be concluded that a low angle of the blade edge is the most important factor in determining a reduction in the specific cutting energy. Additionally, the cutting speed is not statistically significant for the specific cutting energy response.

Table 2. ANOVA for specific cutting energy.
Source | Mean square | F-ratio | P-value
A:Vc | 0.00109155 | 0.62 | 0.4312
B:Alpha | 0.449191 | 256.17 | 0.0000
C:Wt | 0.0323553 | 18.45 | 0.0000
AA | 0.00140461 | 0.80 | 0.3720
AB | 0.00108699 | 0.62 | 0.4322
AC | 0.00190095 | 1.08 | 0.2992
BB | 0.0957422 | 54.60 | 0.0000
BC | 0.0759982 | 43.34 | 0.0000
CC | 0.00237216 | 1.35 | 0.2464
Blocks | 0.000134424 | 0.08 | 0.9982
Source: self-elaboration

Figure 3. Main effects graph for specific cutting energy. Source: self-elaboration.

Figure 4. Response surface for the specific cutting energy (Vc = 3.4 m/s). Source: self-elaboration.

3.2. Peak cutting force results

The analysis of the results of the full factorial experimental design was performed using a statistical analysis of variance (ANOVA). The three experimental factors (considering the interactions between factors) proved to be statistically significant for the peak cutting force achieved in the cutting process at the 95% confidence level. Table 3 shows the degree of significance of the coefficients for the factors and their interactions. The empirical model for the behaviour of the peak cutting force as a function of the experimental factors is:

Fp = 0.0474551 + 0.0011553 α + 0.00116605 Wt + 0.000191839 Vc + 0.0000226658 α Wt - 0.0000126339 α², R² = 63.9%   (2)

Where: Fp: peak cutting force (N); Vc: cutting speed (m s-1).

Fig. 5 shows the main effects graph for the peak cutting force, and Fig. 6 shows the corresponding response surface. It can be concluded that both a low angle of the blade edge and a low moisture content are the most important factors in determining a decrease in the peak cutting force. Additionally, the cutting speed as a single factor is not statistically significant for the peak cutting force response.

Table 3. ANOVA for peak cutting force.
Source | Mean square | F-ratio | P-value
A:Vc | 0.0000317415 | 1.00 | 0.3198
B:Alpha | 0.00441394 | 138.44 | 0.0000
C:Wt | 0.00210402 | 65.99 | 0.0000
AA | 0.0000329728 | 1.03 | 0.3106
AB | 5.75361E-7 | 0.02 | 0.8933
AC | 0.000374057 | 11.73 | 0.0008
BB | 0.00107262 | 33.64 | 0.0000
BC | 0.00172617 | 54.14 | 0.0000
CC | 6.55832E-8 | 0.00 | 0.9639
Blocks | 0.00000709776 | 0.22 | 0.9691
Source: self-elaboration

Figure 5. Main effects graph for peak cutting force. Source: self-elaboration.

Figure 6. Response surface for the peak cutting force (Vc = 3.4 m/s). Source: self-elaboration.
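For screening purposes, the fitted surfaces can be evaluated on a grid. The sketch below encodes eqs. (1) and (2) as written above; note that the coefficient signs follow the reconstruction given there, so results should be checked against the original fit before use.

# Sketch: grid evaluation of the empirical models of eqs. (1) and (2).
import numpy as np

def ec(alpha, wt):
    """Specific cutting energy, J/mm^2 (eq. 1)."""
    return (0.242135 + 0.00287758*wt - 0.00901554*alpha
            + 0.000119362*alpha**2 + 0.000150395*alpha*wt)

def fp(alpha, wt, vc):
    """Peak cutting force, N (eq. 2)."""
    return (0.0474551 + 0.0011553*alpha + 0.00116605*wt + 0.000191839*vc
            + 0.0000226658*alpha*wt - 0.0000126339*alpha**2)

# search the experimental ranges for the lowest specific cutting energy
A, W = np.meshgrid(np.linspace(20, 60, 401), np.linspace(10, 30, 201))
E = ec(A, W)
i = np.unravel_index(E.argmin(), E.shape)
print(f"lowest ec on grid: {E[i]:.5f} J/mm^2 "
      f"at alpha = {A[i]:.1f} deg, W = {W[i]:.1f}% (Vc not significant)")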



Although no other works on the cutting of sugarcane bagasse were found, this research shows several similarities with other studies. For example, according to [9], the cutting speed has little or no effect on the specific cutting energy required for cutting sugarcane trash by shearing; the behavior in the current study is similar. In another work [16], the researchers explain that wheat straw is brittle and less viscoelastic at low moisture content and thus easier to cut; they also reported that the ideal moisture content for cutting wheat straw is in the range of 8 to 10%. Similarly, in the current study the specific cutting energy for sugarcane bagasse approximately doubles when the moisture content increases from 10% to 30%.

4. Conclusions


In this study, the results show a good statistical correlation between the specific cutting energy and the angle of the blade edge and moisture content (R² = 68.6%). The results also show a good statistical correlation between the peak cutting force and the three experimental factors studied (R² = 63.9%). The analytical models of the sugarcane bagasse behavior and of the energy and force involved during cutting were studied in order to choose the best cutting conditions. A low angle of the blade edge and a low moisture content are, in this order, the most important experimental factors in determining a low specific cutting energy and a low peak cutting force. According to the analysis of the response surface for the specific cutting energy, the best cutting conditions are achieved for an angle of the blade edge of 20.8°, a cutting speed of 2.89 m/s and a moisture content of 10% (w.b.); the minimum specific cutting energy obtained for this set of reference values was 0.00596 J mm-2. The results of this work could contribute to the optimal design of bagasse pre-treatment systems.

References

[1] Larsson, S.H., Thyrel, M., Geladi, P. and Lestander, T.A., High quality biofuel pellet production from pre-compacted low density raw materials. Bioresource Technology, 99, pp. 7176-7182, 2008. DOI: 10.1016/j.biortech.2007.12.065
[2] Bissen, D., Biomass densification, document of evaluation. Agricultural Utilization Research Institute, Minneapolis: Zachry Energy Corporation, 2009.
[3] Bhattacharya, S.C., Biomass energy use and densification in developing countries. Paper presented at Pellets 2002: The First World Conference on Pellets, Stockholm, Sweden, 2002.
[4] Mani, S., Sokhansanj, S., Bi, X. and Turhollow, A., Economics of producing fuel pellets from biomass. Applied Engineering in Agriculture, 22 (3), pp. 421-426, 2006. DOI: 10.13031/2013.20447
[5] Méausoone, P.J., Choice of optimal cutting conditions in wood machining using the coupled tool-material method. In: Proceedings of the 15th International Wood Machining Seminar, pp. 37-47, 2001.
[6] Davim, J.P., A note on the determination of optimal cutting conditions for surface finish obtained in turning using design of experiments. Journal of Materials Processing Technology, 116 (2), pp. 305-308, 2001. DOI: 10.1016/S0924-0136(01)01063-9
[7] Eyma, F., Méausoone, P. and Martin, P., Strains and cutting forces involved in the solid wood rotating cutting process. Journal of Materials Processing Technology, 148, pp. 220-225, 2004. DOI: 10.1016/S0924-0136(03)00880-X
[8] Osorio, J.A., Varón, F. and Herrera, J.A., Mechanical behavior of the concrete reinforced with sugar cane bagasse fibers. DYNA, 74 (153), pp. 69-79, 2007.
[9] Bianchini, A. and Magalhães, P., Behavior of sugarcane trash submitted to shearing in laboratory conditions. Revista Brasileira de Engenharia Agrícola e Ambiental, 8 (2/3), pp. 304-310, 2004.
[10] Bianchini, A. and Magalhães, P.S.G., Evaluation of coulters for cutting sugar cane residue in a soil bin. Biosystems Engineering, 100, pp. 370-375, 2008. DOI: 10.1016/j.biosystemseng.2008.04.012
[11] Bianchini, A., Desenvolvimento teórico experimental de disco de corte dentado passivo para corte de palhiço em cana-de-açúcar. Dr. Thesis, Faculdade de Engenharia Agrícola, Universidade Estadual de Campinas, Brasil, 2002.
[12] Marey, S.A., Drees, A.M., Sayed-Ahmed, I.F. and El-Keway, A.A., Development of a feeding mechanism of a chopper for chopping sugarcane bagasse. Misr Journal of Agricultural Engineering, 24 (2), pp. 299-317, 2007.
[13] Habib, R.A., Azzam, B.S., Nasr, G.M. and Khattab, A.A., The parameters affecting the cutting process performance of agricultural plants. Misr Journal of Agricultural Engineering, 19 (2), pp. 361-372, 2002.
[14] European Standard EN 14774-3:2009, Solid biofuels. Determination of moisture content, oven dry method. Part 3: Moisture in general analysis sample, 2009.
[15] Kuehl, R.O., Design of experiments: Statistical principles of research design and analysis, 2nd ed. Duxbury/Thomson Learning, 2000.
[16] Kushwaha, R.L., Vaishnav, A.S. and Zoerb, G.C., Shear strength of wheat straw. Canadian Agricultural Engineering, 25 (2), pp. 163-166, 1983.

N. Arzola-de la Peña, received his BSc. Eng. in Mechanical Engineering in 1997 and his MSc. in Applied Mathematics in 1999, both from the Universidad de Cienfuegos, Cuba, and his PhD. in Technical Sciences in 2003 from the Universidad Central de las Villas, Cuba. From 1997 to 2004 he was an assistant professor in the Mechanical Engineering Department, Universidad de Cienfuegos, Cuba, and since 2005 he has been an associate professor in the Department of Mechanical and Mechatronics Engineering, Universidad Nacional de Colombia, Bogotá, Colombia. His research interests include: modeling, design and optimization of machines; robust design; and failure analysis and structural integrity of machines.
J.D. García-Vásquez, received his BSc. Eng. in Mechanical Engineering in 2008 and his MSc. in Mechanical Engineering in 2015, both from the Universidad Nacional de Colombia. From 2008 to 2012 he worked in the maintenance department of the dairy industry. In 2012 and 2013 he was a full professor at the Metalworking Center, Capital District, SENA. He is currently a Project Engineer at Engineering and Integral Services of Colombia.



Contribution of the analysis of technical efficiency to the improvement of the management of services

David López-Berzosa a, Carmen de Pablos-Heredero b & Carlos Fernández-Renedo c

a School of Engineering, University of Exeter, Exeter, United Kingdom. davidlopezberzosa@gmail.com
b Facultad de Ciencias Jurídicas y Sociales, Universidad Rey Juan Carlos, Madrid, Spain. carmen.depablos@urjc.es
c Regional coordination of donation and transplantation, Health Services of Castilla y León, Madrid, Spain. tranplantes@grs.sacyl.es

Received: July 16th, 2014. Received in revised form: November 13th, 2014. Accepted: January 30th, 2015.

Abstract
Technical efficiency measures the ability of a system to maximize its output subject to resource constraints. This article offers formal methods to quantify technical efficiency in health systems and analyzes the influence of organizational structures and internal processes on the observed technical efficiency. The empirical analysis focuses on the quality of donation and transplant services. The results show a positive relationship between the levels of the quality indicators and the technical efficiency observed in the donation and transplant units of the 11 hospitals analyzed. It is therefore possible to conclude that high levels in the quality indexes are a necessary condition for reaching an increased level of service. Keywords: health services; technical efficiency; Baldrige indicators; donation; transplant; quality; service; processes.

Contribución del análisis de la eficiencia técnica a la mejora en la gestión de servicios Resumen La eficiencia técnica mide la habilidad que tiene un sistema a la hora de maximizar su resultado sometido a restricciones en recursos. En este trabajo se proporcionan métodos formales para cuantificar la eficiencia técnica y se analiza la influencia de las estructuras organizativas y procesos internos en la eficiencia técnica observada. El análisis empírico se aplica a la calidad en la prestación de servicios de donación y trasplante. Se muestra una relación positiva entre los niveles relativos a los indicadores de calidad y la eficiencia técnica observada en las unidades de donación y trasplante de los 11 hospitales analizados. Por tanto, altos niveles en los indicadores de calidad son condición necesaria para obtener un elevado nivel de prestación de servicio. Palabras clave: servicios sanitarios; eficiencia técnica; indicadores Baldrige; donación; trasplante; calidad; servicio; procesos.

1. Introduction

Health care delivery systems exhibit high levels of interdependence among the agents involved, patients included, which makes them extraordinarily complex compared with other types of services. At the same time, health service delivery systems (HSDS) are adaptive: unlike mechanical systems, they are able to learn and modify their behavior on the basis of experience and context. The combination of complexity and adaptive capacity implies that the outcome of a health intervention is not completely predictable ex ante, unlike that of a conventional manufacturing process, and is therefore subject to high levels of variability. HSDS need to modify their structures in order to keep improving their performance at a lower associated cost [1,2]. Regarding the optimization of resources in HSDS, there is growing interest in the scientific community in the analysis of the productivity and efficiency of such systems, as highlighted in [3,4].

Technical efficiency is the ability of a given system to maximize the result of the provision of a service subject to a given level of resources and constraints [1].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 176-182. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.44455



Technical efficiency as an operative concept is useful not only because it allows the use of resources to be measured, but also because it quantifies the level of service offered by a system. This article has two main objectives: (1) to apply formal methods to quantify technical efficiency in organizations, and (2) to analyze the influence of organizational structures and internal processes on the observed technical efficiency.

Regarding the first objective, numerous authors have highlighted the importance of having analytical procedures for the objective analysis of productive efficiency in service management [5,6]. Non-parametric productivity analysis techniques known as DEA (data envelopment analysis) have occupied a prominent place in the analysis of productive efficiency [4,7]. In its most basic form, DEA computes the technical efficiency achieved by a set of systems by contrasting the service levels attained with what is considered the maximum theoretically possible level (the so-called production frontier). The non-parametric nature of DEA techniques avoids the need to know the production function (that is, the analytical relation between resources and the expected result), which has undoubtedly facilitated their use in numerous sectors [8-11].
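For reference, the basic DEA computation described above amounts to solving one linear program per decision-making unit (DMU). The sketch below implements the classical input-oriented CCR envelopment form with purely illustrative input and output data; it is not the estimation method used in this study.

# Sketch: input-oriented CCR DEA, one linear program per DMU.
import numpy as np
from scipy.optimize import linprog

# X: inputs (n_dmus x n_inputs), Y: outputs (n_dmus x n_outputs); made up.
X = np.array([[30., 5.], [25., 8.], [40., 6.], [28., 7.]])
Y = np.array([[12., 6.], [20., 10.], [15., 9.], [18., 8.]])

def ccr_efficiency(k, X, Y):
    """min theta s.t. X'lam <= theta*x_k, Y'lam >= y_k, lam >= 0.
    Decision variables: [theta, lam_1..lam_n]."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_ub = np.block([
        [-X[k:k+1].T, X.T],                     # X'lam - theta*x_k <= 0
        [np.zeros((Y.shape[1], 1)), -Y.T],      # -Y'lam <= -y_k
    ])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(len(X)):
    print(f"DMU {k}: theta = {ccr_efficiency(k, X, Y):.3f}")

A score of 1 marks a unit on the estimated frontier; note that the frontier here is built from the observed data alone, which is precisely the context-free behavior the article goes on to criticize.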

Nevertheless, DEA techniques do not make it possible to investigate in detail which factors influence the observed technical efficiency, nor how to improve it. Such questions can only be resolved through inference procedures based on statistical techniques [12]. To overcome these limitations, some authors propose combining DEA with conventional linear regression techniques, which may pose certain risks for the outcome of the analysis [5,6,12]. In the health care context, however, possibly the most important limitation of DEA techniques is the assumption known as "separability", i.e., that performance levels are independent of the context in which the service takes place [12]. As an example, in the donation and transplant context, assuming separability implies assuming that the expected number of liver transplants is independent of the mean age of the donors. Parametric approaches, by contrast, avoid the problems derived from the separability condition through (1) the incorporation of the context into the analysis model and (2) the imposition of certain restrictions on the statistical moments of the factors and the response. Most of the parametric analyses reported in the literature use models that are too simple to characterize complex health services [12-16]. In view of these limitations, this article develops an approach for computing technical efficiency in multilevel and multiresponse contexts. By way of illustration, the methodology developed is able to incorporate the organizational structure of the Spanish donation and transplant system into the analysis of the combined kidney and liver transplant response.

Regarding the second objective of this work, the difficulty of designing processes and organizational structures in which different specialists intervene is closely related to the way knowledge is transferred among agents [17]. Contexts subject to dynamic conditions, even systems with clearly defined structures and processes, need to implement coordination mechanisms that facilitate the management of unforeseen situations by integrating human resources and specific knowledge. In order to analyze this relation, this work adopts the conceptual framework known as the "Baldrige model", which has frequently been used in industrial sectors and, more recently, in the health sector [18]. This model is organized around seven interrelated components: (1) leadership, (2) focus on customers and other stakeholders, (3) strategic planning, (4) human resources management, (5) information management and data analysis, (6) process management, and (7) management of performance results.

The rest of the article is organized as follows: Section 2 provides a formal quantitative method for analyzing the technical efficiency observed in donation and transplant systems and characterizes the components associated with those systems by means of the Baldrige model, in order to identify causal relations between that model and the observed technical efficiency. Section 3 presents the results, and Section 4 offers a brief discussion and derived future lines of work.

2. Methodology

Throughout this work, a multilevel, or hierarchical, structure refers to one made up of different units aggregated by levels; this is the case of donation units integrated in the same hospital, which is in turn integrated in a regional system and this in a national system. The existence of this type of structure is not accidental, nor should it be ignored, at the risk of overlooking important effects and interactions in the analysis. Accordingly, this work adopts generalized multilevel parametric models, since they make it possible to define technological functions capable of (1) modeling the multilevel structures existing in the national donation and transplant system, (2) modeling non-linear effects in the factors, and (3) computing the technical efficiency observed in the transplantation of several organs jointly.

Annex A presents a data panel corresponding to 11 hospitals involved in the Spanish donation and transplant system during the period 2008-2010. Each hospital comprises different service units in charge of (1) identifying potential donors, (2) performing the corresponding medical processes, and (3) performing organ transplants. The data panel comprises two types of response variables: kidney and liver transplants, respectively.


These transplants are carried out by hospital "id" during period "year". Three input variables are considered: the total number of donors, "donors"; the type of hospital, "unittype"; and the number of donors over 70 years of age, "donors70_100". Table 1 presents the mean and the standard deviation for the two response variables, kidney and liver transplants respectively. According to the results, there is a clear dependence between the type of hospital and the moments of the response variables, owing to the greater endowment of advanced technologies for the detection and maintenance of donors and to the larger population assigned to type-2 units. The results of Table 1 suggest Poisson-type distributions with high levels of dispersion. The disparity observed among hospitals, as well as the levels of dispersion (standard deviations above the means), precludes the application of conventional linear regression models.

Table 1. Statistical moments of the transplant variables.
Hospital type | Kidney transplants: Mean / Std. dev. / N | Liver transplants: Mean / Std. dev. / N
0 Basic transplant services | 5.62 / 4.1 / 24 | 3.1 / 2.5 / 24
1 Neurology service | 29 / 7.2 / 8 | 16.4 / 4.6 / 8
2 Neurology and advanced services | 23.4 / 11.1 / 12 | 13.3 / 6.8 / 12
Source: own elaboration

Figure 1. Production functions. Source: own elaboration.

3. Results

3.1. Production function and level of technical efficiency

Fig. 1 represents two alternatives for building the production function. On the left, a two-level model in which observations corresponding to transplants performed over time (first level) are aggregated into service units (second level). Here ζ(2) represents the ability, that is, the technical efficiency, of a given service unit to perform transplants (for example, kidney transplants). The model represented on the right of Fig. 1 defines an additional level that aggregates several service units into the same hospital (third level). In this case ζ(3) represents the technical efficiency achieved by a hospital in performing kidney and liver transplants. Production functions of the following type are thus tried:

μ_ijk = f(x'_ijk β)   (1)

Imposing a log-link relation, the production function becomes:

ln(μ_ijk) = β1 + β2 unittype + β3 donors + β4 donors70_100 + ζ_jk(2) + ζ_k(3)   (2)

Table 2. Production function. Combined response RC-Poisson: multi-output.
Fixed part: rate ratios | Estimate | (95% CI)
exp(β2) [unittype] | 1.21*** | (1.08, 1.36)
exp(β3) [donors] | 1.10*** | (1.08, 1.12)
exp(β4) [donors70-100] | 0.96** | (0.94, 0.99)
Random part (Est.): ψ11 = 0.17; ψ22 = 0.026; ψ21 = -0.066
Log likelihood: -250.4
Source: own elaboration


Expression (2) corresponds to a multilevel, multiresponse production function of the Poisson type that (1) encompasses the joint response of kidney and liver transplants respectively, (2) reflects the dependence existing among observations belonging to the same time series, and (3) reflects the dependence among observations belonging to the same hospital. The random effects are of the type ζ(2) ~ N(0, ψ22) and ζ(3) ~ N(0, ψ11), with covariance ψ21. For the data presented, Table 2 shows the coefficients associated with the production function for kidney and liver transplants.

From the above results we can deduce that a unit increase in the factor "unittype" represents a 21% increase in kidney and liver transplant capacity. Analogously, a unit increase in the factor "donors" represents a 10% increase. On the other hand, a unit increase in the number of donors over 70 years of age represents an estimated 4% decrease. The level-3 effect, which represents variations at the hospital level, follows a distribution of the type ζ(3) ~ N(0, 0.17); that is, variations in technical efficiency among hospitals can be expected according to a normal distribution with standard deviation 0.17.

Table 3. Technical efficiency corresponding to kidney and liver transplants.
Hospital | Level-3 effect (technical efficiency): Mean | Variance
1 | -0.54019696 | 0.25949499
2 | -0.06833701 | 0.33490417
3 | 0.2812933 | 0.34807859
4 | -0.31050514 | 0.25101169
5 | 0.2531918 | 0.21514489
6 | 0.56289722 | 0.16580095
7 | 0.45606749 | 0.18930796
8 | -0.2067746 | 0.24271143
9 | -0.11436051 | 0.23453519
10 | -0.13384933 | 0.21404345
11 | -0.17941197 | 0.24079713
Source: own elaboration

3.2. Analysis of technical efficiency

For the production function represented by expression (2), with the coefficients given in Table 2, the technical efficiency is obtained by exponentiating the effect ζ(3), as in the following expression:

TE_k = exp(ζ_k(3))   (3)

That is, this effect represents the contribution to the observed response that is not explained by the input levels or by the external factors considered. It therefore represents the internal "efforts" made by the hospitals to maximize their transplant capacity subject to external factors and available resources. Formally, the effect can be considered a latent variable representing the intrinsic characteristics of the hospital, a variable observed in practice through the output variables, the kidney and liver transplants performed. Table 3 presents the mean and the variance of the technical efficiency.

Figure 2. Levels of technical efficiency. Source: own elaboration.

Fig. 2 graphically represents the levels of technical efficiency achieved in the hospitals considered. Each point represents the mean for the period 2008-2010 with its associated confidence interval. According to Fig. 2, three different groups of hospitals can be identified on the basis of the levels of technical efficiency achieved: (1) excellence group: hospitals {#6, #7, #3, #5}; (2) base group: hospitals {#2, #9, #10, #11, #8}; and (3) group with improvement potential: hospitals {#1, #4}.

3.3. Qualitative analysis of the donation and transplant services

Once the hospitals have been classified according to the observed technical efficiency, one may ask which internal characteristics of each hospital, for example operating processes or organizational structure, can influence the efficiency levels. The Baldrige methodology is used to analyze the internal structure of each hospital. Following this methodology, the indicators were computed through a detailed analysis of the processes implemented in each donation and transplant unit considered, together with the existing documentation on quality audits and the donation and transplant data recorded during the period of time considered. Finally, the results were triangulated through personal interviews. Based on the analysis of the above information sources, Fig. 3 presents the levels associated with the indicators (leadership, focus on customers and other stakeholders, strategic planning, human resources management, information management and data analysis, process management) together with the total Baldrige index (referred to a maximum level of 550 points).
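A minimal computational sketch of this estimation follows: it fits a single-response Poisson regression with the same fixed effects and exponentiates the coefficients into rate ratios as in Table 2. The panel file and column names are assumptions based on the variables described in the text, and the paper's full multilevel, multiresponse model would require a mixed-effects estimator.

# Sketch: single-response approximation of the Poisson production
# function (eq. 2). 'transplants_panel.csv' and its columns are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

panel = pd.read_csv("transplants_panel.csv")

# log-link Poisson regression:
# ln E[kidney] = b1 + b2*unittype + b3*donors + b4*donors70_100
fit = smf.glm("kidney ~ unittype + donors + donors70_100",
              data=panel, family=sm.families.Poisson()).fit()

# Exponentiated coefficients are rate ratios: e.g. a value of 1.21 for
# unittype means +21% expected transplants per additional unit.
print(np.exp(fit.params))

# A crude stand-in for the hospital effect of eq. (3), TE_k = exp(zeta_k),
# uses hospital dummies; exp(coef) of each C(id) term plays that role:
fe = smf.glm("kidney ~ unittype + donors + donors70_100 + C(id)",
             data=panel, family=sm.families.Poisson()).fit()
print(np.exp(fe.params.filter(like="C(id)")))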




Figure 3. Baldrige indicators for the units considered in the study. Source: own elaboration.

It is noteworthy that the 11 hospital units achieve excellent quality levels, as reflected by the Baldrige index. Hospitals {#2, #5, #6, #7, #9} stand out thanks to their high levels of leadership and strategic planning. Table 4 shows a preliminary analysis establishing the relation between the observed level of technical efficiency and the Baldrige index. The Baldrige level accounts for 78% of the variability observed in the technical efficiency (R-squared: 0.784). We can also see that the Baldrige index positively moderates the technical efficiency and that this effect is statistically significant (Coef.: 0.006, p-value: 0.000).

Table 4. Relation between the technical efficiency and the total Baldrige index.
Dependent variable: Technical efficiency. R-squared: 0.784. Prob > F: 0.0003.
Independent variable | Coef. | Std. Err. | t | P>|t| | 95% Conf. Int.
Baldrige index | 0.006 | 0.001 | 5.72 | 0.000 | [0.0036, 0.0084]
Constant term | -2.733 | 0.4809 | -5.68 | 0.000 | [-3.821, -1.645]
Source: own elaboration

3.4. Impact of the internal structure on the observed technical efficiency

In order to go deeper into the analysis of the relevance of the organizational structure to technical efficiency, Table 5 presents the result of a multivariate analysis based on the data of Table 4.

Table 5. Multivariate factor analysis.
Factor | Eigenvalue | Difference | Proportion | Cumulative
Factor 1 | 5.356 | 5.116 | 0.957 | 0.957
Factor 2 | 0.239 | 0.193 | 0.042 | 1.000
Factor 3 | 0.046 | 0.038 | 0.008 | 1.008
Factor 4 | 0.007 | 0.020 | 0.001 | 1.009
Factor 5 | -0.012 | 0.029 | -0.002 | 1.007
LR test (independent vs. saturated): chi2(15) = 111.5, Prob > chi2 = 0.0000
Source: own elaboration

From the results of the table above it can be concluded that the dimension of the problem can be reduced to 2 factors, on the basis of the proportion of variability they explain (i.e., 0.957 and 0.042 for factors 1 and 2 respectively). In other words, the levels of organizational excellence can be measured by means of two factors instead of the full set of indicators defined in the Baldrige methodology. Thus, retaining only 2 factors and performing a varimax rotation yields the results shown in Table 6 and Fig. 4, which relate the contribution of each Baldrige indicator to each of the principal factors. According to Fig. 4, factor 1 mainly measures the intensity of the indicators leadership, strategy and operations focus. Analogously, factor 2 measures the intensity of customer focus and staff focus together with information management.

Table 6. Relation between the Baldrige elements and the new factors (2 principal factors).
Variable | Factor 1 | Factor 2 | Uniqueness
Leadership (l) | 0.764 | 0.632 | 0.015
Strategic planning (s) | 0.837 | 0.544 | 0.0028
Customer focus (c) | 0.629 | 0.748 | 0.0436
Knowledge management (k) | 0.599 | 0.748 | 0.081
Staff focus (w) | 0.445 | 0.896 | -0.0014
Operations focus (o) | 0.763 | 0.393 | 0.2621
Source: own elaboration
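The factor reduction and the regression of the following tables can be reproduced in outline as follows; the data file, column names and the choice of the factor_analyzer package are assumptions, not the tooling used by the authors.

# Sketch: reduce the Baldrige indicators to two varimax-rotated factors
# and regress the technical efficiency on them (Tables 5-8).
import pandas as pd
import statsmodels.formula.api as smf
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("baldrige.csv")                 # one row per hospital
indicators = df[["l", "s", "c", "k", "w", "o"]]  # Baldrige elements (Table 6)

# two principal factors with orthogonal varimax rotation, as in Fig. 4
fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
fa.fit(indicators)
df[["factor1", "factor2"]] = fa.transform(indicators)

# OLS of the technical efficiency on the two factors (Table 8)
print(smf.ols("efficiency ~ factor1 + factor2", data=df).fit().summary())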




Figure 4. Relation between the Baldrige indicators and the new factors (rotation: orthogonal varimax; method: iterated principal factors). Source: own elaboration.

Table 7. Relation between the technical efficiency and the new factors (2 principal factors).
Hospital | Technical efficiency | Factor 1 | Factor 2
1 | -0.54 | -1.365 | -1.133
4 | -0.31 | -2.428 | 0.368
8 | -0.20 | 0.528 | -0.508
11 | -0.17 | 0.236 | -2.163
10 | -0.13 | 0.553 | -0.884
9 | -0.11 | 0.079 | 0.806
2 | -0.068 | 0.079 | 0.806
5 | 0.253 | 0.398 | 0.583
3 | 0.281 | 0.239 | 0.712
7 | 0.456 | 0.551 | 0.743
6 | 0.562 | 1.126 | 0.301
Source: own elaboration

Table 7 presents the levels of technical efficiency observed in each hospital unit together with the principal factors defined in this section. A hospital interested in increasing its service levels, measured in terms of technical efficiency, therefore needs to excel in both principal factors: the first related to leadership, strategy development and operations; the second related to the staff and information management. This causal relation between the principal factors and the technical efficiency is confirmed by the results presented in Table 8. According to that table, both factors positively moderate the technical efficiency (Coef.: 0.21 and 0.17) and are statistically significant (p-values: 0.012 and 0.038).

Table 8. Relation between the technical efficiency and the new factors.
Dependent variable: Technical efficiency. Prob > F: 0.0095.
Independent variable | Coef. | Std. Err. | t | P>|t| | 95% Conf. Int.
Factor 1 | 0.21 | 0.066 | 3.21 | 0.012 | [0.060; 0.369]
Factor 2 | 0.17 | 0.067 | 2.49 | 0.038 | [0.124; 0.325]
Constant term | 0.005 | 0.064 | 0.09 | 0.932 | [-0.142; 0.154]
Source: own elaboration

Los sistemas de prestación de servicios de salud (SPSS) presentan dos importantes características: interdependencia y capacidad de adaptación. Dichas características hacen que la gestión y evaluación de dichos sistemas sea compleja. El presente trabajo ha tomado como ejemplo paradigmático de un SPSS complejo el servicio de donación y trasplantes de una región en España. De la literatura analizada y de los resultados del presente trabajo es posible argumentar que la excelencia en la prestación de servicios de donación y trasplante requiere de la optimización en la ejecución de los procesos operativos. Los resultados del presente trabajo muestran que es posible implementar en la práctica mecanismos para la medición de dicha optimización mediante instrumentos cuantitativos y cualitativos. De este modo se estima la optimización a través de la eficiencia técnica y los indicadores de Baldrige. Un resultado interesante del presente trabajo es la relación existente entre los niveles relativos a los indicadores de Baldrige y la eficiencia técnica observada en las unidades de donación y trasplante de los 11 hospitales analizados en el estudio. De este modo es posible concluir que altos niveles en los indicadores de Baldrige son condición necesaria para obtener un elevado nivel de prestación de servicio. En este sentido, una mayor concienciación por parte de los decisores políticos en el ámbito sanitario del papel de Los profesionales de cara a los enfermeros ayudaría a aumentar la eficiencia del sistema de donación de órganos, dados los ingentes gastos que se soportan desde las instituciones sanitarias. Es precisamente éste, uno de los puntos que actualmente centran el debate sobre la sostenibilidad del mecanismo de financiación y eficiencia del sistema de donación y trasplantes, por lo cual es necesaria la producción y difusión de estudios comparativos que ayuden a que las organizaciones elijan las mejores alternativas desde el punto de vista coste-eficiente. Como líneas futuras de investigación podría considerarse ampliar la muestra de estudio a otras regiones sanitarias con el fin de analizar dependencias a nivel regional así como identificar relaciones cuantitativas entre los indicadores de Baldrige y la eficiencia técnica observada en otros tipos de trasplantes. Referencias [1] [2]


D. López-Berzosa, holds a PhD in Mechanical Engineering from the Universidad Politécnica de Madrid, Spain. He is an adjunct professor at the Instituto de Empresa in Madrid and a researcher with the innovation research group at the University of Exeter, U.K. He has co-edited books in the field of innovation and published articles in impact-indexed journals such as Universia, Interciencia, TIBE, Pensée, Intangible Capital, Service Science and Technovation.

C. de Pablos-Heredero, holds a PhD in Economics and Business Administration from the Universidad Complutense de Madrid, Spain, and is a tenured university professor. She is the director of the Master's in Business Organization and co-director of the Master's in Entrepreneurship and of the Master's in SAP ERP Logistics Project Management. She has published books and articles in impact-indexed journals such as CEDE, Universia, Interciencia, TIBE, Intangible Capital, Service Science, Journal of Entrepreneurship Management, International Journal of Marketing Research, Revista de Ciencias Sociales, Revista de Economía Mundial, Dyna, Pensée, Medical Economics, REIS, etc. ORCID: 0000-0003-0457-3730

C. Fernandez-Renedo, is a Doctor of Medicine, Regional Transplant Coordinator of Castilla y León, Spain, and a member of the Organización Nacional de Trasplantes of Spain.

Annex A.
 Hospital id   Unit type   Year   Donors   Donors over 70   Kidney transplants   Liver transplants
 1    0   2008    6    2   10    1
 1    0   2009    0    0    0    0
 1    0   2010    3    1    6    2
 1    0   2011    0    0    0    0
 2    1   2008   17    3   32   16
 2    1   2009   21    8   32   20
 2    1   2010   22    7   30   19
 2    1   2011   21   11   26   11
 3    1   2008   20    1   20    9
 3    1   2009   13    3   20   12
 3    1   2010   18    7   30   13
 3    1   2011   24   14   42   22
 4    0   2008    3    2    6    3
 4    0   2009    3    1    6    2
 4    0   2010    2    0    4    0
 4    0   2011    1    1    2    1
 5    0   2008    2    2    2    4
 5    0   2009    3    2    4    2
 5    0   2010    4    3    8    4
 5    0   2011    7    6   14    7
 6    2   2008   16    5   26   15
 6    2   2009   19    9   26   17
 6    2   2010   26   11   46   26
 6    2   2011   26   11   41   25
 7    0   2008   10    5   16   10
 7    0   2009    5    2    8    5
 7    0   2010    3    0    6    3
 7    0   2011    6    6   10    6
 8    0   2008    3    1    4    3
 8    0   2009    3    2    4    3
 8    0   2010    2    1    4    4
 8    0   2011    4    2    4    4
 9    2   2008   13    4   24   13
 9    2   2009    9    4   14    9
 9    2   2010    5    2   10    8
 9    2   2011    4    2   14    7
 10   2   2008   13    3   24   13
 10   2   2009   11    4   18   11
 10   2   2010   14    3   26   12
 10   2   2011    7    1   12    7
 11   0   2008    6    4    7    6
 11   0   2009    0    0    0    0
 11   0   2010    4    3    8    4
 11   0   2011    2    2    2    2


Evaluation of the kinetics of oxidation and removal of organic matter in the self-purification of a mountain river

Jorge Virgilio Rivera-Gutiérrez a

a Facultad de Ciencias Naturales e Ingeniería, Unidades Tecnológicas de Santander, Bucaramanga, Colombia. jorgevirgilior@gmail.com

Received: July 23rd, 2014. Received in revised form: January 28th, 2015. Accepted: February 23rd, 2015.

Abstract
The study determines the kinetic rates and assesses the self-purification of the Frío River under the organic load it receives. The kinetic rates were calculated by applying differential and logarithmic methods to the concentrations of the water-quality determinants present in each of the seven (7) reaches of the river. The water system easily recovers its oxygen (k_d ≈ 0.4 d-1, k_a ≈ 3.2 d-1), but it receives 27.7 ton d-1 of organic load, so high concentrations of carbon, ammonium and sediment remain. The length of influence of discharges for BOD (LIV-DBO) yielded a mean of 10 km per reach, compared with an actual reach length of about 3 km; this means the river cannot self-purify, because it needs a longer travel distance. The study also illustrates the modeling of the quality determinants with QUAL2K, using the calculated rates.
Keywords: self-purification; mountain streams; modeling; reaeration; nitrification; sedimentation; deoxygenation.

Evaluación de la cinética de oxidación y remoción de materia orgánica en la autopurificación de un río de montaña

Resumen
El estudio se basa en la determinación de las tasas cinéticas y la evaluación de la autopurificación del río Frío, debido a la captación de la carga orgánica. Las tasas cinéticas se calcularon aplicando métodos diferenciales y logarítmicos sobre las concentraciones de las determinantes de calidad del agua presentes en cada uno de los siete (7) tramos del río. El sistema hídrico recupera fácilmente la cantidad de oxígeno (k_d ≈ 0.4 d-1, k_a ≈ 3.2 d-1), solo que recibe 27.7 ton d-1 de carga orgánica, haciendo que se mantengan altas concentraciones de carbono, amonio y sedimentos. La longitud de influencia de vertidos (LIV-DBO) arrojó una media por tramo de 10 km que, comparada con los 3 km de longitud de cada tramo, significa que el río no puede autopurificarse porque necesita más longitud de recorrido. El estudio ilustra la modelación de las determinantes de calidad, desarrollada con el QUAL2K, usando las tasas calculadas.
Palabras clave: autopurificación; ríos de montaña; modelado; reaireación; nitrificación; sedimentación; desoxigenación.

1. Introduction

The water bodies of the Bucaramanga Metropolitan Area — the Suratá, Tona, Oro, Frío and Lato rivers, located at the headwaters of the municipalities of Lebrija, Girón, Bucaramanga, Floridablanca and Piedecuesta, respectively — are polluted with organic matter and heavy metals, which has become an environmental emergency. The lack of environmental awareness, the industrial development and the population growth of the metropolitan area (1,089,269 inhabitants projected for 2015) [1] drive a progressive increase of pollutants in these rivers, which serve both as water-supply sources and as receiving bodies. The Frío River micro-basin (the study area), which involves the municipalities of Bucaramanga, Floridablanca and Girón, is shown in Fig. 1. The Frío River flows into the Oro River sub-basin, which receives all the treated and raw discharges of the Bucaramanga Metropolitan Area, whose jurisdiction comprises the municipalities of Bucaramanga, Floridablanca, Girón and Piedecuesta. The wastewater discharges include the treated effluent of the area's only wastewater treatment plant, the "PTAR Río Frío", located in Floridablanca in the lower zone of the Frío River, upstream of the Pórtico station (Fig. 1). The remaining municipalities of the Metropolitan Area discharge their wastewater into the Oro River.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 183-193. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.44557



Mitigating this organic pollution is urgent, and it must be established how these mountain rivers achieve self-purification while carrying high levels of organic matter. Evaluating the oxidation kinetics of that organic load will help improve discharge conditions and generate strategies to mitigate this pollution in the Bucaramanga Metropolitan Area. According to the Corporación Autónoma Regional para la Defensa de la Meseta de Bucaramanga (CDMB) [2], the Frío River micro-basin is divided into three zones, in which in situ and ex situ measurements of water-quality behavior were carried out (Figs. 4-8). The upper zone shows optimum water quality, fit for human consumption, with DO = 8 mg L-1 and BOD = 2 mg L-1; an abstraction of 0.5 m3 s-1 supplies the Floridablanca water system, leaving 1.4 m3 s-1 downstream. The middle zone shows an increase in flow, gauged at 1.8 m3 s-1, due to the confluence of the Mensuly creek, located in the urban settlement of Floridablanca (Fig. 1). The lower zone carries the river's largest organic load, generated by the treated effluent of the "PTAR Río Frío" and the raw discharges of the Floridablanca and Angelina sewer by-passes; it also receives a dilution source, the Aranzoque creek, reaching a flow of 2.5 m3 s-1 with which it joins the Oro River in the Caneyes zone of the municipality of Girón. This zone receives continuous domestic discharges with BOD concentrations of 80 mg L-1 (treated) and 400 mg L-1 (raw). The PTAR Río Frío treats only 5% of the wastewater of Bucaramanga and 80% of that of Floridablanca. A water-quality modeling study of the lower Frío River [3] shows an organic load of 61.9 ton d-1, of which only 16.85 ton d-1 are removed; organic matter therefore accumulates, turning the river into an open sewer that constantly affects the Oro River, itself already polluted by the municipality of Piedecuesta. Controlling the self-purification capacity is one of the most viable alternatives to achieve the sustainability of this water source. This research focuses on evaluating the oxidation kinetic rates of the organic matter carried by a mountain river such as the Frío, similar to many headwater rivers in Colombia. In addition, it seeks to determine the assimilation capacity of the Frío River based on the morphometric, hydrological, hydraulic and quality evaluation of the river, and on the quality, quantity, and spatial and temporal location of the domestic discharges in its lower zone. The organic matter of the river consists of a dissolved and a particulate fraction, with constituents such as carbon, nitrogen, phosphorus, pathogens and suspended solids. The rates established for the study are deoxygenation, reaeration, nitrification, sedimentation, pathogen decay and removal of particulate matter.

Figure 1. Location of the Frío River, Santander, Colombia. Source: The author

Figure 2. Longitudinal elevation profile of the Frío River. Source: The author

The study area lies between the municipalities of Floridablanca and Girón, between meridians 73° 2' and 73° 9' of longitude and parallels 7° 7' and 7° 3' of north latitude, in the Department of Santander, Colombia (Fig. 1). The morphometry of the zone shows a slope of 7% in the upper zone and 1% in the lower zone, with an elevation drop from 2091 to 750 masl, which reduces velocity and raises the temperature (from 16 to 31 °C), Fig. 2. The hydraulic segment is discretized into seven (7) reaches, named San Ignacio, Judía, Esperanza, Jardín Botánico, Pórtico, Callejuelas and Caneyes, over a 26.6-km course with a travel time of 11.7 hours, Fig. 3.

2. Methodology

The research focuses on evaluating the deoxygenation, reaeration, sedimentation and pathogen-decay rates [4] needed to determine the self-purification capacity of the Frío River. Two methods are applied to determine that capacity: the LIV method [5] (length of influence of discharges) and modeling with QUAL2K [6]. The Frío River was chosen as the study area because it exhibits all the features of a mountain river




affected by domestic wastewater, as are most headwater rivers in Colombia. The research was developed on the basis of preliminary hydrological, hydraulic and hydrographic studies and the estimation of the kinetic rates of oxidation, sedimentation and pathogen decay; the concentrations of the parameters that evidence the amount of dissolved and particulate organic matter were evaluated in each of the seven (7) selected reaches. To evaluate water quality, a series of inspections and samplings was carried out from 2008 to 2012, in dry and wet seasons, complying with the standards of the Ministerio del Ambiente, Vivienda y Desarrollo Territorial [7], ICONTEC and the World Meteorological Organization (WMO) [8,9]. Monitoring takes the travel time of the river into account, in order to preserve the hydraulic balance and the representativeness of the mixing of the organic load. Samples were filtered at 0.45 µm to separate the dissolved and particulate organic matter of the river.

2.1. Hydraulic characterization

A morphometric study of the river was carried out, considering each reach of the main channel, the dilution sources, abstractions and discharges, the dead zones, the channel type, the tracer-based longitudinal dispersion coefficient [10] and the flows. Meteorological and hydrological information was obtained from the CDMB stations La Esperanza (RF-03), El Pórtico (RF-P) and Caneyes (RF-1A), and from the Club Campestre climatological station. Four (4) additional monitoring stations, named San Ignacio, Judía, Botánico and Callejuelas, were established, where velocity, width, depth and the longitudinal dispersion coefficient were evaluated (Table 1).

Figure 3. Conceptual hydraulic model of the river. Source: The author

Table 1. Morphometry of the Frío River hydraulic segment.
 id   Lat (N)    Lon (E)    X (km)   U (m/s)   H (m)   A (m2)   Q (m3/s)
 1    1280580    1114780     3.00     0.55      0.24     6.4      0.86
 2    1276021    1112782     6.62     0.54      0.48     6.6      0.90
 3    1274252    1111654     3.37     0.60      0.32     6.6      1.92
 4    1273476    1109245     2.77     0.63      0.20     6.9      1.44
 5    1273120    1105215     4.96     0.64      0.25     8.0      1.78
 6    1272617    1102788     2.90     0.91      0.30    16.5      2.42
 7    1273097    1100822     3.00     0.83      0.30    13.5      2.48
Source: The author

2.2. Estimation of the deoxygenation rate

The oxygen dynamics of lotic water bodies is evidenced by the gain or loss of the concentration of the gas dissolved in the water column. The river has three phases (air/water/bottom), with oxygen fluxes driven by consumption in the oxidation of organic matter both in the column and at the bottom (benthos); oxygen is also required by non-biodegradable oxidizable pollutants [11] such as heavy metals, toxics or microorganisms. The rate of oxygen loss due to the organic load is calculated by four (4) methods. The first is the Winkler-bottle method, in which oxygen consumption is evaluated by determining the change in the BOD concentration (y) of a river-water sample incubated at 20 °C for 21 days. The behavior of the BOD remaining in the water, L, at a time t is represented by

$\dfrac{dL}{dt} = -K_d\,L$    (1)

where L = BOD remaining in the water at time t (mg O2 L-1), K_d = deoxygenation rate, i.e., BOD oxidation rate (d-1, base e), and t = oxidation time (days). Solving Eq. (1) gives the BOD exerted, $y = L_0\,(1 - e^{-K_d t})$, or in differential form $dy/dt = K_d(L_0 - y)$, where L_0 is the BOD remaining at t = 0, also called the ultimate carbonaceous biochemical oxygen demand (DBOCu), and y is the BOD exerted. Applying least squares to this linear relation between dy/dt and y yields the rate at which oxygen is lost in the bottle during incubation. The normal equations are

$N\,a + b\sum y_i - \sum \dot{y}_i = 0$    (2)

$a\sum y_i + b\sum y_i^2 - \sum y_i\,\dot{y}_i = 0$    (3)

where N is the number of observed pairs of values minus 1. Since the time step between measurements was not constant, two expressions are used to calculate the derivative ẏ. For constant Δt,

$\dot{y}_i = \dfrac{y_{i+1} - y_{i-1}}{t_{i+1} - t_{i-1}}$    (4)

For variable Δt,

$\dot{y}_i = \dfrac{\dfrac{y_{i+1}-y_i}{t_{i+1}-t_i}\,(t_i - t_{i-1}) + \dfrac{y_i - y_{i-1}}{t_i - t_{i-1}}\,(t_{i+1}-t_i)}{t_{i+1}-t_{i-1}}$    (5)

where y_i and t_i are the magnitude and time of the point for which ẏ is calculated, y_{i-1} and t_{i-1} are the values of the previous point, and y_{i+1} and t_{i+1} those of the following point. Finally the rate is obtained in natural-base form, with units of d-1, as the negative of the least-squares slope:

$K_d = -b = -\dfrac{N\sum y_i\dot{y}_i - \sum y_i \sum \dot{y}_i}{N\sum y_i^2 - \left(\sum y_i\right)^2}$    (6)
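As a minimal sketch of the bottle procedure of Eqs. (2)-(6) — the article publishes no code, and the incubation series below is hypothetical — the deoxygenation rate can be estimated as follows:

```python
import numpy as np

# Hypothetical BOD-bottle series: time (days) and BOD exerted y (mg O2/L).
t = np.array([0.0, 1.0, 3.0, 5.0, 8.0, 12.0, 16.0, 21.0])
y = np.array([0.0, 2.1, 5.4, 7.8, 10.4, 12.3, 13.2, 13.8])

# Derivative at the interior points; differencing across the neighbouring
# points accommodates the variable time step, in the spirit of Eq. (5).
ydot = (y[2:] - y[:-2]) / (t[2:] - t[:-2])
ymid = y[1:-1]

# Least-squares slope of dy/dt versus y, Eq. (6); Kd = -b.
n = ymid.size
b = (n * np.sum(ymid * ydot) - ymid.sum() * ydot.sum()) / \
    (n * np.sum(ymid**2) - ymid.sum()**2)
kd = -b                                # deoxygenation rate (1/d, base e)
L0 = np.mean(ydot + kd * ymid) / kd    # ultimate CBOD, from dy/dt = Kd(L0 - y)
print(f"Kd = {kd:.3f} 1/d, L0 = {L0:.1f} mg O2/L")
```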



The second form applies the Hydroscience equation [12], which establishes that for depths 0 < H < 8 ft the rate is

$k_d = 0.3\left(\dfrac{H}{8}\right)^{-0.434}$    (7)

while for H > 8 ft, k_d = 0.3 d-1. The Frío River has a maximum depth of 0.5 m, i.e., 1.64 ft. The rate can be extrapolated to another temperature as $k_{d,T} = k_{d,20}\,\theta^{T-20}$, where k_{d,T} is the rate at temperature T, k_{d,20} is the decomposition rate at 20 °C and θ = 1.047. The BOD removed tends to increase with temperature and downstream of point sources.

The third form uses the Bosko equation [13,14]:

$k_d = k_{bot} + n\,\dfrac{U}{H}$    (8)

where k_{bot} is the deoxygenation rate obtained in the bottle, base e (d-1), and n is the activity coefficient of the river bed; the term n(U/H) reflects the importance of the organisms in the river bed that use the BOD. The coefficient n is a function of the slope; some typical values are given in Table 2.

Table 2. Values of the activity coefficient of the receiving stream.
 Slope (m/100 m)    n
 0.05              0.10
 0.10              0.15
 0.20              0.25
 0.50              0.40
 1.00              0.60
Source: adapted from [13]

The fourth technique converts total organic carbon (COT) into ultimate carbonaceous BOD, starting from the stoichiometric factor of 2.67 g O2 per 1 g C established for an ammonium substrate, so that DBOCu = 2.67 · COT. With DBOC_t the carbonaceous BOD at the travel time t̅ in the river, the rate follows as

$k_d = \dfrac{1}{\bar{t}}\,\ln\dfrac{DBOCu}{DBOC_t}$    (9)

2.3. Estimation of the reaeration rate

Reaeration is the process by which oxygen and the other gaseous components of the air are renewed in the water column owing to the movement of the river. The patterns that control it differ from those affecting deoxygenation, so it is important to study them, as well as the procedures for predicting actual reaeration rates. If for any reason the DO level in the water is below the saturation value, the water dissolves more oxygen from the atmosphere and approaches saturation again. According to Henry's law, at constant temperature the solubility of a gas in a liquid is proportional to the partial pressure of the gas, and the rate of oxygen solubilization is proportional to the saturation deficit, $dD/dt = -K_a D$. Integrating, $D = D_0\,e^{-K_a t}$, where D is the DO deficit at time t (mg/L), D_0 is the initial DO deficit (mg/L) and K_a is the natural-base reaeration constant (d-1). Expressed in terms of DO concentrations,

$\ln\dfrac{C_s - C_0}{C_s - C_t} = K_a\,t$    (10)

where C_t = DO concentration at time t (mg/L), C_0 = initial DO concentration (mg/L) and C_s = DO saturation concentration (mg/L). Oxygen can dissolve only at the air-water interface, where a thin film of water is rapidly saturated; the reaeration rate depends on the diffusion of oxygen through the body of water, which is very slow. In turbulent rivers the saturated surface layer is continuously broken and reaeration proceeds faster [14]. Several equations exist for calculating the reaeration rate, validated in rivers with very different hydraulic characteristics. Among the most common are:

O'Connor-Dobbins [15]:  $K_a = 3.93\,U^{0.5}/H^{1.5}$    (11)

Churchill [16]:  $K_a = 5.03\,U^{0.969}/H^{1.673}$    (12)

Owens [17]:  $K_a = 5.32\,U^{0.67}/H^{1.85}$    (13)

Tsivoglou-Wallace [18]:  $K_a = C\,S\,U$    (14)

Texas [19]:  $K_a = 1.923\,U^{0.273}/H^{0.894}$    (15)

where U = mean velocity (m s-1); H = mean depth (m); S = slope of the energy line (m m-1); C = transfer coefficient (m-1), with C = 0.177 m-1 at 20 °C for 0.708 < Q < 85 m3 s-1 or C = 0.361 m-1 for 0.028 < Q < 0.28 m3 s-1 [18]; and n = Manning coefficient (m-1/3 s) [20].
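A small sketch helps compare the empirical formulas against the logarithmic estimate of Eq. (10). The hydraulic values below come from Table 1 (reach Judía); the DO recovery series and travel time are hypothetical, chosen only to keep the logarithm positive:

```python
import numpy as np

def ka_oconnor(U, H):     # Eq. (11), O'Connor-Dobbins
    return 3.93 * U**0.5 / H**1.5

def ka_churchill(U, H):   # Eq. (12), Churchill
    return 5.03 * U**0.969 / H**1.673

def ka_owens(U, H):       # Eq. (13), Owens
    return 5.32 * U**0.67 / H**1.85

def ka_log(C0, Ct, Cs, t):
    # Eq. (10): ln[(Cs - C0)/(Cs - Ct)] = Ka * t
    return np.log((Cs - C0) / (Cs - Ct)) / t

# Reach "Judía" (Table 1): U = 0.54 m/s, H = 0.48 m.
U, H = 0.54, 0.48
print("O'Connor :", round(ka_oconnor(U, H), 2), "1/d")
print("Churchill:", round(ka_churchill(U, H), 2), "1/d")
print("Owens    :", round(ka_owens(U, H), 2), "1/d")

# Hypothetical recovering DO profile over t = 0.14 d of travel time.
print("Log method:", round(ka_log(4.0, 6.0, 7.3, 0.14), 2), "1/d")
```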




2.4. Estimation of the nitrification rate

The nitrogenous biochemical oxygen demand (DBON) is generated by the oxidation of the nitrogen present in the water as ammonium, in the process known as "nitrification", which begins when Nitrosomonas bacteria convert the ammonium ion to nitrite [21], consuming 1.5 moles of oxygen:

$NH_4^+ + 1.5\,O_2 \rightarrow 2H^+ + H_2O + NO_2^-$    (16)

In a second phase, Nitrobacter bacteria convert the nitrite to nitrate, consuming 0.5 moles of oxygen:

$NO_2^- + 0.5\,O_2 \rightarrow NO_3^-$    (17)

The oxygen required by each form of nitrogen is therefore

$r_{o1} = \dfrac{1.5 \times 32}{14} = 3.43\ \mathrm{gO_2/gN}, \qquad r_{o2} = \dfrac{0.5 \times 32}{14} = 1.14\ \mathrm{gO_2/gN}$

where r_{o1} is the oxygen consumed in converting ammonium-nitrogen to nitrite-nitrogen and r_{o2} the oxygen consumed in converting nitrite-nitrogen to nitrate-nitrogen; the oxygen required to oxidize all the nitrogen in the ammonium is r_o = r_{o1} + r_{o2} = 4.57 g O2/g N. The nitrogenous demand is validated by monitoring total Kjeldahl nitrogen (NTK) in each reach according to the travel time of the Frío River; NTK includes the organic nitrogen and the ammonium present in the water. For nitrification to occur, the following factors must be present: 1) a sufficient population of nitrifying bacteria; 2) an alkaline pH, around 8; and 3) sufficient oxygen (1 to 2 mg L-1). The DBON, L_n, is calculated by applying the stoichiometric factor: L_n = 4.57 · NTK. For a plug-flow system, the mass balance for the DBON and the oxygen deficit D can be solved assuming L_n = L_{n0} and D = D_0 at time t = 0:

$L_n = L_{n0}\,e^{-K_n t}$    (18)

L_{n0} is the initial DBON, at the head of the nitrification model; for this study the "Truchera" station, upstream of the start of the first reach, San Ignacio, is taken as the head, and a series of DBON concentrations is obtained for each reach, monitored at the mid-point of the reach length. Solving Eq. (18) for K_n yields the nitrification rate, using the DBON concentrations in each reach and taking into account the residence and travel time of the Frío River:

$K_n = \dfrac{1}{t}\,\ln\dfrac{L_{n0}}{L_n}$

Eq. (18) is tested with 6 degrees of freedom at a 95% confidence level, giving a critical t of 1.943; the equation shows no significant difference when the critical t value is greater than the calculated t value, according to the results obtained when applying it. The reference value adopted is 1.2 d-1, from a validation study of several models for Colombia [4]. Other authors [20] propose K_n values between 0.1 and 0.5 d-1 for shallow rivers and above 1 d-1 for deep rivers [14].
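As an illustration of this calculation (the downstream concentration and travel time below are hypothetical; only the 4.57 g O2/g N factor and the head NTK value come from the article):

```python
import numpy as np

# NBOD from NTK via the 4.57 gO2/gN stoichiometric factor, then Kn from
# Eq. (18), Ln = Ln0 * exp(-Kn t).
ntk_head, ntk_reach = 0.74, 0.60     # mg N/L at the head and downstream
t_travel = 0.12                      # travel time (d), assumed
Ln0, Lnt = 4.57 * ntk_head, 4.57 * ntk_reach
kn = np.log(Ln0 / Lnt) / t_travel
print(f"Kn = {kn:.2f} 1/d")
```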

2.5. Estimation of the pathogen decay rate

Many microorganisms live inside the body; others are acquired through the skin or by ingesting untreated water. The types of bacteria that can represent most of these microorganisms are: 1) total coliforms (CT), including Escherichia coli and Aerobacter aerogenes, among others; 2) fecal coliforms (CF), a subset of the CT found in the intestines of warm-blooded animals, which may reach 20% of the CT; and 3) fecal streptococci, included among the streptococci present in humans and domestic animals. Determining the amount of CF is very important to distinguish fecal from non-fecal contamination within the total. Natural waters may contain between 100,000 and 400,000 fecal organisms per 100 mL. The pathogen decay rate is calculated by adding the base mortality rate, the loss rate due to solar radiation and the settling rate of the fraction of pathogens attached to particles [20]. Salinity (dissolved solids and salts) and temperature are other factors that affect pathogen mortality [14]. The pathogen decay rate can be represented as

$K_p = K_b + K_i + K_s$    (19)

where K_p is the total decay rate (d-1), K_b the base mortality rate (d-1), K_i the loss rate due to solar radiation (d-1) and K_s the loss rate due to settling (d-1). The pathogen mortality rate is calculated as a function of salinity and temperature [22]:

$K_b = K_{b,s}\,1.07^{T-20}$    (20)

$K_{b,s} = 0.8 + 0.02\,s$    (21)

where s = salinity (ppt or g L-1); for fresh water s = 0, and for seawater 30 < s < 35 ppt. The mortality rate normally lies around 0.8 d-1 for fresh water and 1.4 d-1 for seawater. The pathogen loss rate due to radiation is represented by

$K_i = \alpha\,\bar{I}$    (22)

where α = proportionality constant and Ī = light energy (ly h-1); according to Thomann, α ≈ 1. The extinction of light can be modeled with the Beer-Lambert law:

$I_z = I_0\,e^{-K_e z}$    (23)

where I_z is the light energy (ly h-1) at depth z (m), I_0 the surface light energy (ly h-1) and K_e the extinction coefficient (m-1). The extinction coefficient is a function of the particulate matter and the color of the water; using the Secchi disk depth (DS), K_e = 1.8/DS, and in terms of suspended solids (mg L-1), K_e = 0.55 SST. The depth-averaged light Ī can be calculated by integrating over the river depth [23]:

$\bar{I} = \dfrac{I_0}{K_e H}\left(1 - e^{-K_e H}\right)$    (24)

If V_s is the settling velocity of the particles (m d-1), it can be substituted by a settling loss rate when related to the depth:

$K_s = \dfrac{V_s}{H}$    (25)

The settling loss of pathogens, however, depends on how many attach to particles and how many remain free-floating, so the total bacteria are subdivided into floating and attached: N = N_f + N_p, where N_f are the free-floating bacteria (number/100 mL) and N_p the attached bacteria (number/100 mL). The attached bacteria can be expressed through a specific cell-to-solids ratio r (number per mg), so that N_p = 10^{-1} r m, where m = suspended solids (mg L-1) and the factor 10^{-1} converts litres to 100 mL. The tendency of bacteria to attach to particles can be represented by a linear relation with a partition ("tendency") coefficient K_d (m3 g-1, not to be confused with the deoxygenation rate), such that the adsorption-desorption balance gives N_p = K_d m N_f at equilibrium. Hence N_f = F_f N and N_p = F_p N, where F_f = 1/(1 + K_d m) is the fraction of free-floating bacteria and F_p = K_d m/(1 + K_d m) the attached fraction. Only the attached fraction settles, so the settling loss rate becomes

$K_s = F_p\,\dfrac{V_s}{H}$    (26)

yielding the equation for the total decay rate:

$K_p = (0.8 + 0.02\,s)\,1.07^{T-20} + \dfrac{\alpha I_0}{K_e H}\left(1-e^{-K_e H}\right) + F_p\,\dfrac{V_s}{H}$    (27)
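The decay budget of Eq. (27) can be assembled term by term. The following sketch uses illustrative inputs (the salinity, temperature, light, solids and settling values are assumptions, not the article's data):

```python
import numpy as np

def pathogen_decay(s, T, I0, ke, H, Fp, vs, alpha=1.0):
    """Total decay rate of Eq. (27) = base mortality + radiation + settling."""
    kb = (0.8 + 0.02 * s) * 1.07**(T - 20.0)          # Eqs. (20)-(21)
    Ibar = I0 / (ke * H) * (1.0 - np.exp(-ke * H))    # Eq. (24)
    ki = alpha * Ibar                                 # Eq. (22)
    ks = Fp * vs / H                                  # Eq. (26)
    return kb, ki, ks, kb + ki + ks

# Illustrative freshwater reach: s = 0 ppt, 24 C, surface light 40 ly/h,
# extinction from SST = 20 mg/L (ke = 0.55*SST), depth 0.3 m, 30% of the
# bacteria attached to particles settling at 0.5 m/d.
kb, ki, ks, kp = pathogen_decay(s=0.0, T=24.0, I0=40.0, ke=0.55 * 20.0,
                                H=0.3, Fp=0.3, vs=0.5)
print(f"Kb={kb:.2f}  Ki={ki:.2f}  Ks={ks:.2f}  Kp={kp:.2f}  (1/d)")
```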

2.6. Estimation of the sedimentation rate

The settling velocity was calculated from Stokes' law:

$V_s = \alpha\,\dfrac{g\,(\rho_s - \rho_w)}{18\,\mu}\,d^2$    (28)

where V_s = settling velocity (cm s-1); α = reflectivity (shape) factor, approximately 1.0; ρ_s = particle density (g cm-3); ρ_w = water density (g cm-3); μ = dynamic viscosity of the water (0.014 g cm-1 s-1); and d = particle diameter (cm) [22]. In practical units this becomes

$V_s = 0.033634\,\alpha\,(\rho_s - \rho_w)\,d^2$    (29)

with V_s in m d-1, d in μm and the densities in g cm-3. The density of the organic matter in the river is close to 1.027 g cm-3 and the particle diameters range between 40 and 80 μm, which allows the settling velocity, and the sedimentation rate when related to the depth of each reach, to be calculated [14].
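Eq. (29) is straightforward to evaluate. The sketch below uses the density contrast and the 40-80 μm diameter range quoted in the text; the reach depth used for the resulting settling rate is an assumption:

```python
def settling_velocity(d_um, rho_s=1.027, rho_w=1.000, alpha=1.0):
    """Eq. (29): settling velocity in m/d, with d in micrometres and the
    densities in g/cm3 (the practical form of Stokes' law given above)."""
    return 0.033634 * alpha * (rho_s - rho_w) * d_um**2

H = 0.3  # assumed reach depth (m)
for d in (40, 62, 80):  # particle-diameter range quoted in the text (um)
    vs = settling_velocity(d)
    print(f"d = {d} um -> Vs = {vs:.2f} m/d, Ks = Vs/H = {vs/H:.1f} 1/d")
```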

2.7. Removal of particulate organic matter (MOP)

In the river, the removal of organic matter occurs through two simultaneous processes: oxidation and sedimentation. The removal of the BOD, L, along the river is governed by $U\,dL/dx = -k_r L$, which in steady state reads $0 = -U\,dL/dx - k_r L$. Complete mixing is assumed at the location of the discharge:

$L_0 = \dfrac{Q_r L_r + Q_w L_w}{Q_r + Q_w}$    (30)

where Q_r L_r and Q_w L_w are the loads defined by the flow and concentration of the river and of the discharge, respectively. The BOD downstream in any reach, at travel time t (distance x = U t), is then

$L = L_0\,e^{-k_r\,x/U}$    (31)

with K_r = K_d + K_s, where K_d is the deoxygenation rate and K_s the BOD sedimentation rate, calculated from the BOD settling velocity V_s — which ranges between 0.1 and 1 m d-1, with an average of 0.5 m d-1 — and the depth H in m [20,14]:

$k_s = \dfrac{V_s}{H}$    (32)
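Eqs. (30)-(32) amount to a mixing calculation followed by first-order decay. In the sketch below, the 400 mg/L raw-sewage BOD comes from the introduction, while the flows and river BOD are assumptions chosen for illustration:

```python
import numpy as np

def mix(Qr, Lr, Qw, Lw):
    """Eq. (30): complete mixing of river and discharge at the outfall."""
    return (Qr * Lr + Qw * Lw) / (Qr + Qw)

def bod_downstream(L0, kr, x_km, U_ms):
    """Eq. (31): L = L0 * exp(-kr * x/U), with x/U converted to days."""
    t_days = x_km * 1000.0 / U_ms / 86400.0
    return L0 * np.exp(-kr * t_days)

# River at 1.8 m3/s and 5 mg/L BOD receiving 0.1 m3/s of raw sewage at
# 400 mg/L (the raw-discharge concentration quoted in the introduction).
L0 = mix(1.8, 5.0, 0.1, 400.0)
L_end = bod_downstream(L0, kr=1.06, x_km=3.0, U_ms=0.6)
print(f"Mixed BOD = {L0:.1f} mg/L, after 3 km = {L_end:.1f} mg/L")
```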

2.8. Self-purification capacity of the Frío River

Self-purification relates to the capacity of lotic water sources such as the Frío River to assimilate pollutant loads. The dissolved-oxygen deficit generated in the water column is modeled according to a kinetics defined for deoxygenation, nitrification, BOD removal by sedimentation, and pathogen (fecal-coliform) decay. The assimilation capacity of a river can be determined following the "length of influence" methodology, through the assimilation factor proposed by Rojas and Camacho [5]; calibrated and validated predictive water-quality models also exist [6,24] with which the organic-load assimilation capacity of rivers can be established. In this research the assimilation capacity of the Frío River is predicted by estimating the assimilation factor for each determinant (OD, DBOCU, NTK, CF, SST). The different assimilation factors are determined in order to establish the travel time of the pollutant in each reach and thus calculate the LIV (length of influence of discharges) in the Frío River. In addition, modeling is carried out with QUAL2K, applying an average of the rates evaluated in the study, in order to verify the traceability of the concentrations of the different determinants. The parameterization of the hydraulic model was developed with ADZ-QUASAR [25]. The model considers advection, longitudinal dispersion and first-order reaction. The mass balance for a water-quality determinant C can be defined as

$\dfrac{dC(t)}{dt} = \dfrac{1}{T_r}\left[C_{in}(t-\tau)\,e^{-k\tau} - C(t)\right] - k\,C(t)$    (33)

where C(t) is the concentration of the determinant at the outlet of the reach, C_in the concentration at the inlet, and k the reaction rate. T_r is the residence time, calculated as T_r = t̅ - τ, where τ is the arrival (advective delay) time and t̅ the mean travel time, both related to dispersion in the river. The assimilation factor a relates the pollutant load entering the reach, W, to the resulting downstream concentration C:

$C = \dfrac{W}{a}$    (34)

$a = Q\left[1 + k\,(\bar{t} - \tau)\right]e^{k\tau}$    (35)
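Eqs. (34)-(35) can be applied reach by reach. The sketch below uses hypothetical reach values; only the functional form follows the reconstruction above:

```python
import numpy as np

def assimilation_factor(Q, k, t_mean, tau):
    """Eq. (35): a = Q * (1 + k*(t_mean - tau)) * exp(k*tau)."""
    return Q * (1.0 + k * (t_mean - tau)) * np.exp(k * tau)

# Hypothetical reach: Q = 1.8 m3/s, BOD removal rate 1.06 1/d, mean travel
# time 0.10 d, advective arrival time 0.06 d, incoming load W = 26 g/s.
a = assimilation_factor(Q=1.8, k=1.06, t_mean=0.10, tau=0.06)
C = 26.0 / a   # Eq. (34); g/s over m3/s gives g/m3 = mg/L
print(f"a = {a:.2f} m3/s, downstream concentration = {C:.1f} mg/L")
```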

For steady-state conditions, the downstream concentration and the assimilation factor in each reach are given by Eqs. (34)-(35) [20]. The assimilation factors for the different determinants are:

Conservative solutes:  $a = Q$    (36)

DBO:  $a = Q\left[1 + k_r(\bar{t}-\tau)\right]e^{k_r \tau}$    (37)

NTK:  $a = Q\left[1 + k_n(\bar{t}-\tau)\right]e^{k_n \tau}$    (38)

Fecal coliforms:  $a = Q\left[1 + k_p(\bar{t}-\tau)\right]e^{k_p \tau}$    (39)

SST:  $a = Q\left[1 + \varsigma(\bar{t}-\tau)\right]e^{\varsigma \tau}$, with ς the effective settling rate of the suspended solids    (40)

OD: the dissolved-oxygen factor is built analogously from the saturation concentration o_s and the deficits exerted by the carbonaceous demand (DBO, rate k_r) and the nitrogenous demand (4.57·NTK, rate k_n), balanced against the recovery through reaeration (k_a)    (41)

where k_r, k_n, k_s and k_a are the removal, nitrification, sedimentation and reaeration rates, respectively, and o_s is the oxygen saturation concentration, calculated with the APHA equation [26].

A water-quality simulation is performed with QUAL2K version 2.11 [6] in order to observe the trend of the concentrations of each determinant and to establish the impact of the organic load on the Frío River, taken as an example of a mountain river whose quality is heavily affected over a short course, as happens near most urban settlements in Colombia. QUAL2K simulates the behavior of a quality determinant of concentration c, at time t, for each reach of the river. The general equation defining the model is

$\dfrac{\partial c}{\partial t} = \dfrac{1}{A_x}\dfrac{\partial}{\partial x}\left(A_x E \dfrac{\partial c}{\partial x}\right) - \dfrac{1}{A_x}\dfrac{\partial (A_x U c)}{\partial x} + \dfrac{dc}{dt} + \dfrac{s}{V}$    (42)

where A_x is the cross-sectional area (L2), E the longitudinal dispersion coefficient (L2 T-1), dc/dt the growth and consumption (reaction) components, and s the external sources or discharges (M T-1), in which M is mass (M), L distance (L), T time and c concentration (M L-3). Since M = V·c, the incremental volume can be written V_x = A_x·dx (L3). If ∂A_x/∂t = 0, the flow in the stream is steady, ∂Q/∂x = 0. The term dc/dt is the local concentration gradient, which includes the effect of changes in the components as well as dispersion, advection, sources/discharges and dilution. Under steady-state conditions the local derivative is zero, i.e., ∂c/∂t = 0, and the changes include the chemical and biological reactions and interactions occurring in the stream, for example reaeration, algal respiration and photosynthesis, and coliform decay. The model considers the hydraulic regime of the stream to be steady (∂Q/∂t = 0), so the hydraulic balance for an element can be written as Q_x = Q_{x-1} + ΣQ_in - ΣQ_out, where the sums are the external inflows to, and withdrawals from, that element. This means the abstraction flow in the channel is
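The balance can be checked against the zone description in the introduction. In this sketch, the 0.5 m3/s abstraction and the 1.4 and 1.8 m3/s flows come from the text; the 1.9 m3/s headwater flow and the 0.4 m3/s Mensuly inflow are back-calculated assumptions:

```python
def flow_balance(q_upstream, inflows=(), abstractions=()):
    """Steady-state element balance: Q = Q_upstream + sum(in) - sum(out)."""
    return q_upstream + sum(inflows) - sum(abstractions)

# Upper/middle zones of the Frio River as described in the introduction:
# 0.5 m3/s is abstracted for the Floridablanca supply, leaving 1.4 m3/s,
# and the Mensuly creek brings the flow to the 1.8 m3/s gauged downstream.
q_upper = flow_balance(1.9, abstractions=[0.5])    # -> 1.4 m3/s
q_middle = flow_balance(q_upper, inflows=[0.4])    # -> 1.8 m3/s
print(q_upper, q_middle)
```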




fundamental for the mass balance of the river's pollutant load. The calibration of the QUAL2K model, version 2.11, was carried out in a preliminary investigation [3], applying the GLUE method [24], based on Monte Carlo simulations [27], using the coefficient of determination R2 for ten (10) parameters (rates), with 1000 iterations, adjusting the model to the morphometry and hydraulics of the Frío River.
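A compressed sketch of this GLUE procedure, using a Streeter-Phelps oxygen-sag deficit as a stand-in for the full QUAL2K run (an assumption; the actual calibration used ten rate parameters and the river's real geometry and data):

```python
import numpy as np

rng = np.random.default_rng(7)

def do_deficit(t, kd, ka, L0=12.0):
    # Streeter-Phelps deficit: a toy surrogate for a QUAL2K simulation.
    return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t))

# Synthetic "observations" generated with kd = 0.4 1/d and ka = 3.2 1/d.
t_obs = np.array([0.05, 0.10, 0.20, 0.30, 0.49])
d_obs = do_deficit(t_obs, 0.4, 3.2) + rng.normal(0.0, 0.05, t_obs.size)

# GLUE: uniform Monte Carlo sampling, with R2 as the likelihood measure.
n = 1000
kd_s = rng.uniform(0.1, 1.0, n)
ka_s = rng.uniform(0.5, 6.0, n)
ss_tot = np.sum((d_obs - d_obs.mean())**2)
r2 = np.array([1.0 - np.sum((d_obs - do_deficit(t_obs, kd, ka))**2) / ss_tot
               for kd, ka in zip(kd_s, ka_s)])

best = np.argsort(r2)[-50:]   # retain the best 5% as "behavioural" sets
print(f"kd = {kd_s[best].mean():.2f} +/- {kd_s[best].std():.2f} 1/d")
print(f"ka = {ka_s[best].mean():.2f} +/- {ka_s[best].std():.2f} 1/d")
```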

3. Results and discussion

The deoxygenation rate obtained in the bottle, with the Winkler incubation method, shows an average R2 = 0.5; the results are listed in Table 3. The deoxygenation rates calculated with the Bosko, experimental-by-COT-conversion (Ex) and Hydroscience equations show differences in their trend correlation; they are compared with the experimental bottle rate in order to evaluate the difference from the rate obtained with field data, as shown in Tables 4 and 5.

Table 3. Correlation of K_d (bottle).
 REACH         K_d (d-1)   R2
 Ignacio        0.03       0.60
 Judía          0.01       0.42
 Esperanza      0.04       0.53
 Botánico       0.04       0.73
 Pórtico        0.01       0.60
 Callejuelas    0.03       0.37
 Caneyes        0.02       0.43
Source: The author

Table 4. Comparison of the deoxygenation rate.
 ID   DBO5 (mg/L)   COT (mg/L)   DBOCU* (mg/L)   K_d bottle (d-1)   K_d Ex (d-1)
 1       1.3           5.9            15              0.03              0.50
 2       1.4           9.9            26              0.01              0.59
 3       1.5           7.1            19              0.04              0.51
 4       1.8           5.9            16              0.04              0.43
 5       8.4           6.6            18              0.01              0.15
 6      58.0          54.5           146              0.03              0.18
 7      49.0          69.9           186              0.02              0.27
* DBOCU = COT × 2.67 (gO2/gC)
Source: The author

Table 5. Comparison of the deoxygenation rate by method (all in d-1).
 REACH         Bottle K_d   Bosko K_d   Ex K_d   Hydroscience K_d
 Ignacio         0.03         0.36       0.50        2.32
 Judía           0.01         0.17       0.59        1.70
 Esperanza       0.04         0.32       0.51        2.03
 Botánico        0.04         0.36       0.43        2.49
 Pórtico         0.01         0.32       0.15        2.26
 Callejuelas     0.03         0.45       0.18        2.09
 Caneyes         0.02         0.41       0.27        2.09
Source: The author

The removal of the organic matter is determined from sedimentation and decomposition in the river, according to the river's characteristics (Table 6). The reaeration rate is calculated by the logarithmic method from the correlation of the dissolved and saturated oxygen concentrations in each reach, under the prevailing barometric-pressure and temperature conditions (Table 7).

Table 6. BOD removal rate.
 REACH         K_d (d-1)   K_s (d-1)   K_r (d-1)
 Ignacio        0.36        0.85        1.20
 Judía          0.17        0.42        0.58
 Esperanza      0.32        0.63        0.95
 Botánico       0.36        1.00        1.36
 Pórtico        0.32        0.80        1.12
 Callejuelas    0.45        0.67        1.12
 Caneyes        0.41        0.67        1.07
Source: The author

Table 7. Experimental reaeration rate.
 REACH         C_o (mg/L)   C_t (mg/L)   C_s (mg/L)   K_a (d-1)
 Ignacio         8.10         7.80         8.16         6.4
 Judía           7.20         7.10         7.31         1.6
 Esperanza       7.05         7.00         7.34         1.0
 Botánico        6.95         6.90         7.30         1.1
 Pórtico         6.85         6.80         7.02         0.8
 Callejuelas     4.98         3.16         6.50         1.7
 Caneyes         2.48         1.80         6.55         3.3
Source: The author

The reaeration rate is also compared with four stochastic equations, according to the morphometric and hydraulic conditions of the river (Table 8). The nitrification rate is calculated with the logarithmic method from the DBON, converted stoichiometrically from the total Kjeldahl nitrogen (Table 9).

Table 8. Comparison of reaeration rates (all in d-1).
 REACH         Exp   (1)   (2)   (3)    (4)    (5)
 Ignacio       6.4   5.9   0.1   31.6   51.7   2.3
 Judía         1.6   3.1   0.0    9.5   13.7   0.2
 Esperanza     1.0   4.6   0.1   20.6   31.2   2.0
 Botánico      1.1   7.1   0.1   47.5   76.9   4.2
 Pórtico       0.8   5.9   0.1   33.2   51.5   0.5
 Callejuelas   1.7   5.5   0.1   34.4   46.5   0.8
 Caneyes       3.3   5.4   0.1   31.5   43.7   0.3
(1) Texas [19]; (2) O'Connor [15]; (3) Churchill [16]; (4) Owens [17]; (5) Tsivoglou [18].
Source: The author

Table 9. Nitrification rate.
 REACH         NTK (mgN/L)   L_n (mgO/L)   K_n (d-1)
 Ignacio         0.74            3.4         0.22
 Judía           0.75            3.4         0.09
 Esperanza       0.78            3.6         0.60
 Botánico        0.81            3.7         0.74
 Pórtico         2.72           12.4         13.5
 Callejuelas    28.40          129.8         63.6
 Caneyes        30.50          139.4         1.71
Source: The author


The pathogen decay rate is calculated from the losses due to salinity, solar radiation and sedimentation (Table 10).

Table 10. Pathogen decay rate.
 REACH         K_b (d-1)   K_i (d-1)   K_s (d-1)    K_p (d-1)
 Ignacio         0.6         29.9       5.3E-05       30.5
 Judía           0.8         24.6       8.4E-05       25.4
 Esperanza       0.9         27.9       3.4E-05       28.7
 Botánico        0.9         30.8       1.4E-05       31.7
 Pórtico         1.0         28.0       3.7E-05       29.1
 Callejuelas     1.4         22.4       3.1E-04       23.8
 Caneyes         1.5         18.2       3.4E-05       19.6
Source: The author

Table 11. Length of influence of discharges (LIV) per determinant.
 REACH         L (km)   DBO (km)   NTK (km)   CF (km)   SST (km)
 Ignacio        3.0       2.4        3.2        3.0        2.7
 Judía          6.6      13.7       18.8       33.6       25.0
 Esperanza      3.4      13.2       17.7      204.5       38.8
 Botánico       2.8       3.9        5.7        6.2       41.1
 Pórtico        5.0      12.2        2.9      493.2        5.0
 Callejuelas    2.9      13.1      384.6       73.2        3.0
 Caneyes        3.0      14.8      634.2       29.7       94.8
Source: The author

The behavior of the parameters that define the temporal accumulation of the organic load is illustrated below. The parameters evaluated are dissolved oxygen, Fig. 4; the fast carbonaceous biochemical oxygen demand (CBODf), Fig. 5; ammonium (NH4+), Fig. 6; the pathogens, taken as fecal coliforms, Fig. 7; and total suspended solids (TSS), Fig. 8.

Figure 4. Behavior of dissolved oxygen. Source: The author

Figure 5. Modeling of the fast CBOD in the Frío River. Source: The author

Figure 6. Modeling of ammonium in the Frío River. Source: The author

Figure 7. Modeling of pathogens in the Frío River. Source: The author

Figure 8. Modeling of total suspended solids in the Frío River. Source: The author

4. Conclusions

The deoxygenation rate obtained in the Winkler bottle, 0.03 d-1, is very low compared with the field rate of 0.4 d-1 obtained with the Bosko method; this shows the importance of evaluating the rate from field measurements, since the bottle does not represent the hydro-climatic conditions of a river. The mean deoxygenation rates obtained by the experimental method (k_d = 0.38 d-1) and by the Bosko method (k_d = 0.34 d-1) showed no significant difference (critical T = 1.83, calculated T = 0.32; one-tailed, 95% confidence, 9 degrees of freedom), so either equation can be used for this measurement. The Hydroscience equation yields a mean rate of k_d = 2.14 d-1, a very high value with respect to the established range of 0.3-0.6 d-1.

The reaeration rate evaluated experimentally from the field-validated dissolved- and saturated-oxygen concentrations gave a mean of k_a = 3.24 d-1. Compared with the rates calculated with the stochastic equations of Texas (5.37 d-1), O'Connor (0.08 d-1), Churchill (29.74 d-1) and Owens (45.04 d-1), the one that best fits the experimental value is the Texas equation (critical T = 1.80, calculated T = 0.002, 95% confidence, 11 degrees of freedom).

The nitrification rate calculated by converting NTK to DBON has a mean of k_n = 11.52 d-1 (critical T = 1.08, calculated T = 1.94), a significant difference that exceeds the reference of 1.2 d-1 proposed for Colombian rivers. Comparing the deoxygenation and reaeration rates, reaeration is roughly 10 times greater, which explains a certain capacity of the river to absorb oxygen easily, although the load concentration does not allow its recovery.

Regarding the removal rate of the carbonaceous BOD, a mean of k_r = 1.06 d-1 is observed, with a BOD sedimentation rate of k_s = 0.72 d-1, indicating that approximately 70% of the CBOD settles. The mean diameter of the organic matter is 62 μm, the mean settling velocity is 4.9 m d-1 and the mean sedimentation rate is k_s = 17.9 d-1, which means the river carries enough sediment in its bed to perpetuate pollution and loss of depth.

Pathogens die from salinity at a mean rate of k_b = 1 d-1 (4 pathogens per hour), from solar radiation at k_i = 26 d-1 (108 pathogens per hour) and from sedimentation at 0.0004 d-1 (negligible); their mean decay rate is K_p = 27 d-1, i.e., 112 pathogens per hour. The presence of pathogens is very high in natural sources close to urban settlements, owing to the high concentration of fecal matter from domestic discharges.

If there were no discharges into the river, the dissolved oxygen lost would be only 2 mg L-1, an effect of the vertical drop (1376 m). A mountain river must maintain at least 6.5 mg L-1 of dissolved oxygen to sustain good biological activity. In shallow rivers (< 1 m), reaeration is limited because there is not enough air-water interface for a high absorption of atmospheric oxygen; moreover, sand particles accelerate deoxygenation because they carry pathogens, while the high roughness increases reaeration by generating turbulence.

The Frío River, with a 26.6-km course and a travel time of 11.7 hours, shows temperatures between 18 and 28 °C, caused by the drop in altitude from 2091 to 715 masl. In the lower zone it transports a high organic load, which raises the conductivity to 650 µS cm-1 and the alkalinity to 170 mg CaCO3 L-1, owing to the ammonium from domestic discharges. The discharged organic load is 26.2 ton d-1 and the assimilation is only 10% in terms of BOD, corresponding to 12.2 mg L-1. This indicates that the river cannot self-purify: according to the LIV analysis, the river offers only 26 km of course and would need more than 42 km to self-purify the organic load it carries.

The river has some characteristics that can improve its self-purification, such as the high Manning roughness (0.15) and a mean velocity of 0.6 m s-1, which allow a reaeration rate of 3.26 d-1, meaning it can absorb 17 mg O2 L-1 per hour even though the average oxygen saturation is 7 mg O2 L-1. On the other hand, it has negative factors that prevent rapid self-purification: the high organic load, the high temperature, the scarce riparian zone and the erosive processes, above all in the lower zone.

Acknowledgements

The author is grateful for the support provided by the inter-institutional agreement between the UTS and the CDMB, and to Dr. Luis Alejandro Camacho (researcher at the Universidad de Los Andes - Uniandes).

References

[1] Roldán, J.J. and Sánchez, C., Estudios Postcensales 7, Proyecciones nacionales y departamentales de población 2005-2020, DANE, Colombia, 2010, 7 P.
[2] CDMB, Plan de Manejo y Ordenamiento de la Subcuenca río de Oro, Subdirección de Ordenamiento y Planificación Integral del Territorio, CDMB, 2008.
[3] Rivera, J., Evaluación de la materia orgánica en el río Frío soportada en el QUAL2K versión 2.07, DYNA, 78 (169), pp. 131-139, 2011.
[4] Camacho, L., Rodríguez, E.A., Gélvez, R., González, R., Medina, M. and Torres, J., Metodología para la caracterización de la capacidad de autopurificación de ríos de montaña, DYNA [online], 2007. [date of reference: August 17th of 2007]. Available at: http://www.docentes.unal.edu.co/ricagonzalezp/docs/VF-ArticuloCongAgAmb_AutopurificacionRM.pdf
[5] Rojas, A., Aplicación de factores de asimilación para la priorización de la inversión en sistemas de saneamiento hídrico en Colombia, Bdigital [online], 2011. Available at: http://www.bdigital.unal.edu.co/4093/
[6] Chapra, S., et al., QUAL2K: A modeling framework for simulating river and stream water quality, version 2.11 — Documentation and user's manual, Civil and Environmental Engineering Dept., Tufts University, Medford, MA, 2007. [consulted August 18th of 2012]. Available at: http://www.epa.gov/ATHENS/wwqtsc/html/qual2k.html
[7] ANLA, Metodología para la definición de la longitud de influencia de vertimientos sobre corrientes de agua superficial [online], Ministerio de Ambiente y Desarrollo Sostenible, Bogotá, 2013. [consulted June 12th of 2013]. Available at: http://www.anla.gov.co/documentos/Consultas_publicas/Metodologia_-_Longitud_de_Influencia_de_Vertimientos.pdf
[8] ICONTEC, Guía para el muestreo de aguas de ríos y corrientes NTC-ISO 5667-6 [online], ICONTEC, Bogotá, 2006. [consulted June 12th of 2013]. Available at: http://tienda.icontec.org/brief/NTC-ISO56676.pdf
[9] OMM, Guía de Prácticas Hidrológicas No. 168 [online], Organización Meteorológica Mundial, USA, 1994. Available at: http://www.inamhi.gov.ec/educativa/WMOSPA.pdf
[10] Constain, A., Revalidation of Elder's equation for accurate measurements of dispersion coefficients in natural flows [online], DYNA, 81 (186), 2014. [date of reference: June 2014]. Available at: http://dyna.medellin.unal.edu.co/es/ediciones/186/articulos/v81n186a02/v
[11] Posada, E., Mojica, D., Pino, C., Bustamante and Monzón, A., Establecimiento de índices de calidad ambiental de ríos con bases en el comportamiento del oxígeno disuelto y de la temperatura. Aplicación al caso del río Medellín, en el Valle de Aburrá en Colombia [online]. [date of reference: August 20th of 2013]. Available at: http://dyna.medellin.unal.edu.co/es/ediciones/181/articulos/v80n181a21/v80n181a21.pdf
[12] Hydroscience, Inc., Simplified mathematical modeling of water quality, Mitre Corporation and USEPA, Water Programs, Washington D.C., 1971.
[13] Eckenfelder, W., Industrial water pollution control, McGraw-Hill, 1966, 1989, 2000.
[14] Bowie, G.L., Mills, W., Porcella, D. and Campell, C., Rates, constants, and kinetics formulations in surface water quality modeling, EPA 600/3/85/40, Georgia, 1985.
[15] O'Connor, D.J. and Dobbins, W.E., Mechanism of reaeration in natural streams, Trans. Am. Soc. Civil Engin., 123, pp. 641-666, 1958.
[16] Churchill, M.A., Elmore, H.L. and Buckingham, R.A., Prediction of stream reaeration rates, J. San. Engr. Div. ASCE, SA4:1, pp. 3199, 1962.
[17] Owens, M., Edwards, R. and Gibbs, J., Some reaeration studies in streams, Int. J. Air Water Poll., 8, pp. 469-486, 1964.
[18] Tsivoglou, E.C. and Wallace, S.R., Characterization of stream reaeration capacity, USEPA, Report No. EPA-R3-72-012, 1972.
[19] Texas Water Development Board, Simulation of water quality in streams and canals, Program documentation and user's manual, Austin, TX, 1970.
[20] Chapra, S., Surface water quality modeling, McGraw-Hill, New York, 1997, 997 P.
[21] Gaudy, A.F. and Gaudy, E.T., Microbiology for environmental scientists and engineers, McGraw-Hill, New York, 1980.
[22] Thomann, R. and Mueller, J., Principles of surface water quality modeling and control, Harper & Row, Chicago, 1987.
[23] Di Toro, D., Fitzpatrick, J. and Thomann, R., Water quality analysis simulation program (WASP) and model verification program (MVP) - Documentation, Hydroscience, Inc., Westwood, NJ, for USEPA, Duluth, MN, Contract No. 68-01-3872, 1981.
[24] Beven, K., Generalized likelihood uncertainty estimation (GLUE) - User manual, 1998.
[25] Lees, M.J., Camacho, L. and Whitehead, P., Extension of the QUASAR river water quality model to incorporate dead-zone mixing, Hydrology and Earth System Sciences Discussions, Copernicus Publications, 2 (2/3), pp. 353-365, 1998.
[26] APHA (American Public Health Association), Standard methods for the examination of water and wastewater, American Public Health Association, American Water Works Association and Water Environment Federation, Washington, D.C., 1995.
[27] Lees and Wagener, Monte-Carlo Analysis Toolbox (MCAT) [online], Civil and Environmental Engineering Department, Imperial College of Science, Technology and Medicine, London, SW7 2BU, UK, 2000. Available at: http://www3.imperial.ac.uk/ewre/research/software/toolkit
[28] Camacho, L.A., Calibración y análisis de la capacidad predictiva de modelos de transporte de solutos en un río de montaña colombiano, Avances en Recursos Hidráulicos, 14, pp. 39-51, 2006.
[29] USEPA, The enhanced stream water quality models QUAL2E and QUAL2E-UNCAS: Documentation and user manual, EPA, 600 P., 1987.

J.V. Rivera-Gutiérrez, received a degree in Chemical Technology in 1992 from the Universidad Francisco de Paula Santander, Cúcuta, Colombia, and a degree in Business Administration in 2002 from the Universidad Cooperativa de Colombia, Bucaramanga, Colombia. He holds an MSc in Environmental Management and Auditing (2011) from the Universidad Politécnica de Cataluña, Barcelona, Spain, and an MSc in Environmental Engineering and Technology (2012) from the Centro Panamericano de Estudios Superiores, Mexico. He worked for the Cúcuta water utility as a laboratory analyst and, at the Instituto Colombiano del Petróleo of Ecopetrol, as a laboratory analyst for water and soils, crude-oil characterization and standard tests on petroleum derivatives. He is currently an environmental research lecturer in the water-resources area at the Unidades Tecnológicas de Santander and an environmental consultant. His research interests in water resources include simulation and modeling of open water systems, drinking-water and sanitation treatment systems, and hydrological studies, using computational techniques for real-time measurement, early-warning systems, deoxygenation in wetlands and sediment transport in rivers.




Comfort perception assessment in persons with transfemoral amputation

Juan Fernando Ramírez-Patiño a, Derly Faviana Gutiérrez-Rôa b & Alexander Alberto Correa-Espinal c

a Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. jframirp@unal.edu.co
b Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. dfgutierrezr@unal.edu.co
c Facultad de Minas, Universidad Nacional de Colombia, Medellín, Colombia. alcorrea@unal.edu.co

Received: July 30th, 2014. Received in revised form: November 14th, 2014. Accepted: April 30th, 2015.

Abstract
Historically, the design and fitting of prostheses has relied on a slow process of trial and error, depending on the expertise of the prosthetist. Therefore, a clear definition of the concept of comfort, and clear knowledge of its contributing factors, are important when designing comfortable prostheses. However, there are currently no standardized methods to adequately measure prosthesis-related comfort. The aim of this study is to identify the factors that underlie the concept of comfort with prosthesis use in transfemoral amputees. Forty-one transfemoral amputees completed a questionnaire to evaluate the perception of comfort and to analyze the influence of six factors. A significant model was found that correctly classifies 84.9% of the cases and can predict whether the patient feels comfort while using the prosthesis. Although all of the factors were significant, those with the greatest influence on the perception of comfort were functionality and pain.
Keywords: rehabilitation; trans-femoral amputees; prostheses; comfort.

Valoración de la percepción de confort en personas con amputación transfemoral

Resumen
Históricamente, el diseño y ajuste de prótesis es un proceso lento de ensayo y error que depende de la experiencia del protesista. Por lo tanto, una definición clara del concepto de confort, y el conocimiento de los factores que le contribuyen, son importantes para diseñar prótesis cómodas. Sin embargo, actualmente no existen métodos estandarizados para medir adecuadamente el confort cuando se usan prótesis. Este estudio identifica los factores que subyacen en el concepto de confort al usar prótesis. 41 personas con amputación transfemoral completaron un cuestionario para evaluar la percepción de confort y analizar la influencia de seis factores. Se encontró un modelo significativo que clasifica correctamente el 84,5 % de los casos... El modelo clasifica correctamente el 84,9 % de los casos, lo que permite predecir si el paciente siente confort durante el uso de la prótesis. Aunque todos los factores fueron significativos, los factores que más influyen en la percepción de confort fueron funcionalidad y dolor.
Palabras clave: rehabilitación; amputados transfemorales; prótesis; confort.

1. Background Despite the frequent use of the term, there is no widely accepted definition of comfort. In transfemoral amputees, the term “comfort” is used in reference to two groups: the rehabilitation of the patient and the interactions with the residual limb and prosthetic socket. For the first group, comfort is a subscale of the physical and social well-being of the individual’s quality of life, which

is represented by the degree of rehabilitation and is measured as the independence of the individual to perform daily activities [1]. Because no universal consensus exists regarding the optimal instrument, several questionnaires have been developed to assess the rehabilitation treatment [2] or prosthesis-related quality of life of transfemoral amputees [3-10]. In these instruments, mobility is commonly regarded as an important factor [11-13] together with pain. When evaluating comfort, such as the interaction between

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 194-202. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.44700



When evaluating comfort as the interaction between the socket and the residual limb, the ability to perform common daily activities is directly related to both fit and discomfort [6,14-16]. Comfort may also be defined as a constitutive aspect of satisfaction with the use of the prosthesis [8]. Pain sensations may be associated with discomfort, although pain and discomfort are not necessarily correlated because other factors may also lead to discomfort [15,17]. The static and dynamic alignment of the prosthetic system during adaptation, and the subsequent transmission and distribution of pressure, determine the comfort of the amputee's gait [18].

Regarding the measurement of comfort within the socket, several authors agree on the absence of a measurable scale of comfort [6,15,17,19,20]. In the literature, only one study has attempted to directly measure the level of comfort felt by the amputee in the socket. That study offered the hypothesis that pain and comfort are subjective perceptions and adapted a numerical scale initially used to measure pain [6]. The vast majority of studies that investigate the effects of prosthesis design on the amputee's performance have compared the biomechanical and physiological effects of different prostheses. Other studies have referenced the mechanical properties of the prosthesis that directly influence the comfort and performance of the amputee [21].

Unfortunately, there are currently no standardized, well-accepted methods to adequately measure prosthesis-related comfort, either for research or for clinical use. Incorrect measures of comfort cause delays in the adjustment of the socket by the prosthetic technicians, who must rely on a slow process of trial and error [16,18]. Historically, the design, construction, and fitting of prostheses have been an art that depends on the accumulated expertise of the practitioner [22]. This situation requires the use of non-standardized descriptive terms to express comfort, which must be explained to the patient, the practitioner and the prosthetic technician. Efforts should be directed at improving communication between patients and practitioners in order to improve the quality of care provided to the growing number of persons with limb loss [23]. The non-standardized terms prevent the implementation of accurate clinical measurements of the patient's perception of comfort. Furthermore, each patient has a different way of assessing his/her perception of comfort [17], which may vary over time [24,25] and increases the difficulty of developing a standardized scale.

In the comfort theories based on studies of comfort in sitting, some issues are generally accepted [26]: (1) comfort is a construct of a subjectively defined personal nature; (2) comfort is affected by factors of various natures (physical, physiological, psychological); and (3) comfort is a reaction to the environment. Therefore, a clear definition of the concept of comfort is important to rehabilitation practitioners and researchers, as is knowledge about which factors contribute to comfort with prosthesis use and their relative importance. However, there is a lack of knowledge about comfort in persons with transfemoral amputation. Until now, relatively few studies have analyzed the factors that influence quality of life [27] in transfemoral amputees, and their relationships with the patient's comfort experience are generally unknown.

The aim of this research was to identify the factors that underlie the concept of comfort with prosthesis use in transfemoral amputees. To achieve this goal, a modified scale was used to determine whether there are significant differences between persons with transfemoral amputation who feel comfort and those who feel discomfort when using a prosthesis.

2. Methods

2.1. Participants

The participants were invited by telephone using databases from organizations involved with disabled individuals. The study region was limited to Antioquia, Colombia, and the study included persons with unilateral transfemoral amputations who used a prosthesis for ambulation, with or without an additional mobility aid. The Universidad Nacional de Colombia ethics committee granted ethical approval for the study. After giving written consent, the subjects were asked to complete a questionnaire.

2.2. Questionnaire

To determine the factors that underlie the concept of comfort in transfemoral amputees, we prepared a questionnaire with 30 questions organized into the following six scales or factors: appearance, well-being, pain, functionality, psychological health and social health. These scales were based on the Prosthesis Evaluation Questionnaire (PEQ), chosen for its psychometric properties. Because the factors are not dependent on one another, only the factors relevant to the research question were used. The following questions were added to the PEQ: "Do you feel comfort when using your prosthesis?", "Which of these factors improves your comfort?" and "Which of these factors reduces your comfort?" The questionnaire was available in Spanish.

The original scale uses a 100 mm Visual Analog Scale (VAS), but for the purposes of the present study, we decided to change it because it has been reported in the literature that older and less educated patients experience difficulty understanding the VAS response format, and the instructions of the questionnaire need to be explained carefully [25,28,29]. Similar observations were reported by Guyatt et al. [30], who found that subjects had fewer problems and required less training when using a numerically based Likert scale as opposed to a VAS. The Italian validation of the PEQ suggests the need to simplify the questionnaire format to make it feasible for widespread clinical use [29,31]. The PEQ uses a VAS because the questions cover a wide variety of response types; however, the VAS is not necessarily more accurate than multiple-choice responses [32].

Responding to a 30-question survey can be tedious, especially when it is related to a sensitive topic. The PEQ design requires an instrument that captures the participants' attention, encourages reading, and is easy to answer. Therefore, we designed six boards measuring 43 x 28 centimeters, with one board for each factor (Fig. 1). The verbal scales of the six categories were adapted to the content of each question to allow a minimum and maximum response to each question.




Figure 1. Boards for the comfort perception assessment. Source: Authors

The final survey included 24 direct questions and 6 indirect questions (Appendix 1). Several distinguishing characteristics were used to define each factor to convey the concept it represented. Each board was named and was given a color, an image and a different pictogram to reinforce the concept of each factor on each board. A moderator was trained in the use of the instrument.

The original questionnaire has an inclusion criterion that patients must be able to read because it is a self-administered questionnaire. The literacy requirement could be a limitation to the widespread use of the instrument in clinical practice because a percentage of patients, especially older and less educated patients, have difficulty understanding written language or are visually impaired.

2.3. Conducting the analytical survey

Without defining "comfort," the moderator asked whether the participant felt comfort while using the prosthesis. Then, the participant answered the questions using the six boards. When necessary, the moderator read the questions and multiple-choice responses without further clarification. Finally, the moderator introduced all of the boards to the participant and asked the following questions twice, independently: "Which of these factors improves your comfort?" and "Which of these factors reduces your comfort?" There was no compensation for participation.

2.4. Statistical analyses

The statistical analysis used in the PEQ was factor analysis, a descriptive method that provides a direct view of the interrelationships between the variables and allows information reduction. After finding the underlying structure among a number of variables, it is important to determine the relationships between the independent and dependent variables. The most appropriate statistical technique is logistic regression because it only requires data concerning whether an event occurred (e.g., whether the patient feels comfort or not) as a dependent variable for predicting the probability that the event may or may not take place. Logistic regression does not require any assumptions of normality, linearity or homogeneity of variance for the independent variables.


3. Results

3.1. Participants' profile

Although 85 individuals were invited, surveys were conducted with 41 participants. Table 1 provides detailed data regarding the study sample. The participants were 75.61% male and mainly belonged to socioeconomic strata 2 and 3, which correspond to areas with many economic constraints; in Colombia, households are classified into six strata, with stratum 1 being the poorest. The participants exhibited a low level of education: 46.34% of the participants did not complete high school, and only 24.39% obtained technical degrees or higher. Of the 41 surveys conducted, three were eliminated because of contradictions in the participants' answers.

3.2. Constructing dummy variables

The participants' answers were coded for the statistical analysis using dummy variables based on a scale of zero to five, depending on the direction of the question (direct or indirect). The highest score corresponded to the most positive response, and the lowest score corresponded to the most negative response.

Table 1. Background and amputation characteristics of the study population.
                                          Frequency   Percentage
Sex
  Men                                         31         75.61
  Women                                       10         24.39
Socioeconomic strata
  Stratum 1                                    3          7.32
  Stratum 2                                   23         56.10
  Stratum 3                                   12         29.27
  Other                                        3          7.32
Education level
  Secondary education                         19         46.34
  High school graduate                        12         29.27
  Higher education                            10         24.39
Etiology
  Traffic accident                            18         43.90
  Illness                                     13         31.71
  Work accident                                3          7.32
  Anti-personnel mine                          3          7.32
  Other                                        4          9.76
Prosthetic use
  Full time                                   32         78.05
  Half day                                     2          4.88
  A few hours a day                            7         17.07
                                           Average        SD
Age                                           50         16.98
Years since amputation                        11         11.79
Months elapsed between amputation
  and prosthesis use                          17         28.54
Source: Authors
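To make the coding rule concrete, the following is a minimal sketch, assuming hypothetical option indices and our own illustrative helper name `code_answer` (the actual instrument is the Spanish questionnaire in Appendix 1):

```python
# Minimal sketch of the 0-5 coding of Section 3.2: direct questions keep the
# ordinal position of the chosen option, indirect questions are reversed so
# that 5 is always the most positive answer. Data and names are hypothetical.
DIRECT, INDIRECT = "direct", "indirect"

def code_answer(option_index: int, direction: str) -> int:
    """Map a chosen option (0..5) to the 0-5 dummy scale."""
    return option_index if direction == DIRECT else 5 - option_index

# e.g., a pain-frequency question is indirect: answering "Nunca" (option 0)
# should score 5, the most positive value on the scale.
assert code_answer(0, INDIRECT) == 5
assert code_answer(5, DIRECT) == 5
```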

3.3. Sample size and division of the sample

The logistic regression technique is sensitive to the relationship between the sample size and the number of predictor variables. At least five observations per predictor variable should be used, and the sample size of each group should be considered: the smallest group must be larger than the number of independent variables. The technique requires that the sample be split into two sub-samples: one sample is used to estimate the model, and the other is used for validation. However, owing to the sample size, the study did not split the sample in the usual way; instead, the estimation sample was constructed from the responses obtained from the participants.

3.3.1. Sample for validation

The coded responses for the same factor were averaged to obtain the values of the independent variables. The dependent variable was obtained from the answer to the question, "Do you feel comfort when using your prosthesis?" The validation sample has a size of 38 data points.

3.3.2. Sample for estimating the model

As in the validation sample, the coded responses for the same factor were averaged to obtain the values of each factor. The sample was then extended until it was four times the initial size (n = 152). For the first 38 data points, the value of the first factor chosen by the participant as enhancing comfort was replaced by 5, and the other values remained constant. For the next 38 data points, the second chosen factor was treated in the same way. For this first group (n = 76), the dependent variable was set to 1, which corresponds to comfort. The following 76 data points were treated in the same manner, except that the factors chosen as reducing comfort were set to 0 and the dependent variable, corresponding to discomfort, was set to 0; these formed the second group. The whole process for the first participant is shown in Table 2.

3.4. Regression model

The model was estimated using SPSS Version 15 and the backward stepwise method with the Wald statistic as the contrast. The model chi-square is significant at 0.001, and the independent variables describe the dependent variable significantly, with a Nagelkerke R-square of 0.654; the model correctly classifies 84.9% of the cases. Table 3 presents the variables in the equation. The Wald statistic of the coefficients is significant for all of the factors (p < 0.05). The Exp(β) value represents the extent to which raising the corresponding measure by one unit influences the odds; the largest values belong to the functionality and pain factors. Eq. (1) shows the probability equation that was used to evaluate whether a transfemoral amputee feels comfort with the use of the prosthesis. The validation process using the corresponding sample showed that the estimated model correctly classified 73.17% of the cases.
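The sample expansion of Section 3.3.2 and a logistic fit comparable to Section 3.4 can be sketched as follows. This is a minimal illustration, not the authors' SPSS workflow; the input table `responses` (38 rows with the six factor averages plus the factors each participant named as improving/reducing comfort) and its column names are hypothetical placeholders:

```python
# Minimal sketch: build the 4x estimation sample (n = 152) of Section 3.3.2
# and fit a logistic model with statsmodels instead of SPSS.
import pandas as pd
import statsmodels.api as sm

FACTORS = ["appearance", "well_being", "pain", "functionality",
           "psych_health", "social_health"]

def expand_sample(responses: pd.DataFrame) -> pd.DataFrame:
    blocks = []
    # Two "comfort" blocks (named improving factor set to 5, label 1) and two
    # "discomfort" blocks (named reducing factor set to 0, label 0).
    for which, value, label in [("improves_1", 5, 1), ("improves_2", 5, 1),
                                ("reduces_1", 0, 0), ("reduces_2", 0, 0)]:
        block = responses[FACTORS].copy()
        for i, row in responses.iterrows():
            block.loc[i, row[which]] = value  # override the named factor
        block["comfort"] = label
        blocks.append(block)
    return pd.concat(blocks, ignore_index=True)

# estimation = expand_sample(responses)
# model = sm.Logit(estimation["comfort"],
#                  sm.add_constant(estimation[FACTORS])).fit()
# print(model.summary())  # coefficients comparable to Table 3
```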




Table 2. Sample for model estimation and validation for the first participant.
Sample for validation. The first participant feels comfort with his/her prosthesis use. The averages of his/her coded answers for each factor are:
Appearance: 4.20; Well-being: 4.00; Pain: 3.20; Functionality: 3.56; Psychological health: 3.60; Social health: 5.00; Comfort: 1
Factor that improves comfort, first time: Well-being; second time: Psychological health
Factor that reduces comfort, first time: Functionality; second time: Pain
Sample for estimating the model:
  N   Appearance  Well-being  Pain  Functionality  Psych. health  Social health  Comfort
  1      4.20        5.00     3.20      3.56           3.60           5.00         1.00
 39      4.20        4.00     3.20      3.56           5.00           5.00         1.00
 81      4.20        4.00     3.20      0.00           3.60           5.00         0.00
122      4.20        4.00     0.00      3.56           3.60           5.00         0.00
Source: Authors

Table 3. Variables in the equation.
Step 1*                    β      S.E.     Wald    df   Sig.    Exp(β)
Appearance               0.683   0.198   11.884     1   0.001    1.981
Well-being               0.526   0.201    6.858     1   0.009    1.693
Pain                     0.769   0.183   17.553     1   0.000    2.157
Functionality            1.107   0.201   30.356     1   0.000    3.025
Psychological health     0.445   0.221    4.079     1   0.043    1.561
Social health            0.528   0.211    6.280     1   0.012    1.695
Constant               -12.486   2.121   34.658     1   0.000    0.000
Source: Authors

P = \frac{1}{1 + e^{-z}}, \qquad z = -12.486 + 0.683\,x_{app} + 0.526\,x_{wb} + 0.769\,x_{pain} + 1.107\,x_{func} + 0.445\,x_{psy} + 0.528\,x_{soc} \qquad (1)

where each x is the 0-5 average score of the corresponding factor and the coefficients are those of Table 3.
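As a worked illustration, the following minimal sketch evaluates the eq. (1) comfort probability with the Table 3 coefficients, using the first participant's factor averages from Table 2 (variable names are our own):

```python
# Minimal sketch: evaluate the eq. (1) probability with Table 3 coefficients.
import math

BETA = {"appearance": 0.683, "well_being": 0.526, "pain": 0.769,
        "functionality": 1.107, "psych_health": 0.445, "social_health": 0.528}
CONSTANT = -12.486

def comfort_probability(scores: dict) -> float:
    z = CONSTANT + sum(BETA[f] * scores[f] for f in BETA)
    return 1.0 / (1.0 + math.exp(-z))

# First participant's factor averages (Table 2)
first = {"appearance": 4.20, "well_being": 4.00, "pain": 3.20,
         "functionality": 3.56, "psych_health": 3.60, "social_health": 5.00}
print(f"P(comfort) = {comfort_probability(first):.2f}")
# ≈ 0.96, consistent with this participant reporting comfort
```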

4. Discussion

The aim of this study was to find the underlying factors of comfort with prosthesis use in persons with transfemoral amputation, through the modification of a previously validated questionnaire, which allows practitioners to overcome the implementation difficulties reported in the literature. Forty-one transfemoral amputees who used a prosthesis and varied by age, sex, reason for amputation and years since amputation completed a questionnaire using audio plugs to select their answers. Only persons with bilateral lower limb amputations were excluded from this study.

The empirical application detected statistically significant differences between patients who feel comfort and those who feel discomfort with the prosthesis. Therefore, it was possible to identify the factors that underlie the concept of comfort with prosthesis use in transfemoral amputees. The factors analyzed in this study were appearance, well-being, pain, functionality, psychological health and social health. Although all of the factors analyzed were significant, the factors with the greatest influence on the perception of comfort were functionality and pain, as in previous studies of factors related to quality of life [3].

In the literature, two main approaches to quality of life measurement can be found: functionalist and needs-based. It has been argued that human needs are the foundations of quality of life and that quality of life is the degree of satisfaction of those needs [33].


This is reminiscent of Maslow's hierarchy of needs, in which five levels of needs are identified and postulated to be satisfied in strict sequence. Physiological needs are the physical requirements for human survival; if these requirements are not met, the human body cannot function properly and will ultimately fail. Physiological needs are thought to be the most important and should be met first. With their physical needs relatively satisfied, an individual's safety needs take precedence and dominate behavior. Safety needs include health and well-being, and a safety net against accidents or illness and their adverse impacts.

This study concurs with these findings, because the functionality factor focuses on the ease, usefulness, convenience and mobility of prosthesis use. The influence of this factor may be due to the economic situation of the sample studied. Low-income patients may have difficulty acquiring more technologically advanced prosthetic systems, which would allow them to move more easily. A patient's desire to acquire a prosthetic system that meets all of his/her functional needs may be a basic requirement for perceiving comfort. After the basic necessities of functionality are covered, patients do not want the prosthesis to cause pain, which is an adverse impact of an illness.

It is, however, noteworthy that appearance, the third most important factor, differs from the third need: love and belonging. This can be explained by the high degree of vanity that culturally exists among Colombians. It also evidences potential differences between the patients' and clinicians' goals, such as restoring the walking function versus appearance.




The practitioner is primarily interested in subjective and objective measures relating to function and sometimes forgets the significant differences between the rehabilitation expectations of the patients, whereas the patient is often more interested in communicating his/her subjective personal impressions [19].

The organization of the factors into a hierarchy will help researchers and designers to focus on specific therapeutic solutions for these patients. Additionally, this finding provides tools to determine whether the patient experiences comfort with the use of the prosthesis that are quick and easy to set up, administer, and analyze. This prediction will improve communication between the patient, the doctor and the prosthetic technician. Practitioners will know when a person has actually become accommodated to a new prosthesis or a new prosthetic component. This will in turn avoid delays that could cause deterioration of the soft tissues of the residual limb, abnormal gait patterns, decreased productive capacity, hindered social reintegration of disabled patients, and conditions such as depression after the traumatic loss of a limb [8], which result in personal and social losses, given that most amputees in developing countries are young working people [34]. Most of the innovations made throughout the years have been aimed at helping amputees to return to their working lives [35].

For future applications of the instrument, the authors believe that the patient can complete the questionnaire without the presence of the moderator. In clinical practice, the patient could answer the questions using the boards in the waiting room without making a doctor's appointment. In future studies, the boards could be answered by the patient at home, if the patient exhibits a sufficient education level, and then mailed. Because the PEQ was modified for the purpose of the present study, further testing is required to compare the PEQ's use, reliability, and responsiveness when using different response options. In this study, we calculated summary scores for each scale. This approach assumes equality between the different levels of the response categories. Additional research, using techniques such as Rasch analysis, should be conducted to test this assumption.

5. Conclusions

The empirical application detected statistically significant differences between patients who feel comfort and those who feel discomfort with the prosthesis. Comfort for transfemoral amputees is determined by interactions among the following factors: functionality, pain, appearance, well-being, psychological health and social health. The degree of influence of each factor on the construct is similar to Maslow's hierarchy of needs.

Acknowledgments

We are grateful to all of the study participants. We express our deepest gratitude to Orthopraxis, the moderators, Robinson Chica for designing the images for the boards, Francisco Restrepo for the layout of the boards, and all of the people who contributed directly and indirectly to this research.

References

[1] Bosmans, J., Suurmeijer, T., Hulsink, M., van der Schans, C., Geertzen, J. and Dijkstra, P., Amputation, phantom pain and subjective well-being: A qualitative study. International Journal of Rehabilitation Research, 30 (1), pp. 1-8, 2007. DOI: 10.1097/MRR.0b013e328012c953
[2] Kent, R. and Fyfe, N., Effectiveness of rehabilitation following amputation. Clinical Rehabilitation, 13 (1), pp. 43-50, 1999. DOI: 10.1191/026921599676538002
[3] Demet, K., Martinet, N., Guillemin, F., Paysant, J. and André, J-M., Health related quality of life and related factors in 539 persons with amputation of upper and lower limb. Disability and Rehabilitation, 25 (9), pp. 480-486, 2003. DOI: 10.1080/0963828031000090434
[4] Gallagher, P., Franchignoni, F., Giordano, A. and MacLachlan, M., Trinity amputation and prosthesis experience scales: A psychometric assessment using classical test theory and Rasch analysis. American Journal of Physical Medicine Rehabilitation, 89 (6), pp. 487-496, 2010. DOI: 10.1097/PHM.0b013e3181dd8cf1
[5] Hagberg, K., Brånemark, R. and Hägg, O., Questionnaire for Persons with a Transfemoral Amputation (Q-TFA): Initial validity and reliability of a new outcome measure. Journal of Rehabilitation Research and Development, 41 (5), pp. 695-706, 2004. DOI: 10.1682/JRRD.2003.11.0167
[6] Hanspal, R., Fisher, K. and Nieveen, R., Prosthetic socket fit comfort score. Disability and Rehabilitation, 25 (22), pp. 1278-1280, 2003. DOI: 10.1080/09638280310001603983
[7] Legro, M., Reiber, G., Smith, D., del Aguila, M., Larsen, J. and Boone, D., Prosthesis evaluation questionnaire for persons with lower limb amputations: Assessing prosthesis-related quality of life. Archives of Physical Medicine and Rehabilitation, 79 (8), pp. 931-938, 1998. DOI: 10.1016/S0003-9993(98)90090-9
[8] Rybarczyk, B., Nyenhuis, D., Nicholas, J., Schulz, R., Alioto, R. and Blair, C., Social discomfort and depression in a sample of adults with leg amputations. Archives of Physical Medicine and Rehabilitation [Online]. 73 (12), pp. 1169-1173, 1992. [date of reference July 27th of 2010]. Available at: http://www.archives-pmr.org/article/0003-9993(92)90116-E/pdf
[9] Ware, J., Kosinski, M. and Keller, S., A 12-item short-form health survey: Construction of scales and preliminary tests of reliability and validity. Medical Care [Online]. 3 (34), pp. 220-233, 1996. [date of reference July 26th of 2010]. Available at: http://www.jstor.org/stable/3766749
[10] Ware, J. and Sherbourne, C., The MOS 36-item short-form health survey (SF-36): I. Conceptual framework and item selection. Medical Care, 6 (30), pp. 473-483, 1992. DOI: 10.1097/00005650-199206000-00002
[11] Franchignoni, F., Orlandini, D., Ferriero, G. and Moscato, T., Reliability, validity, and responsiveness of the locomotor capabilities index in adults with lower-limb amputation undergoing prosthetic training. Archives of Physical Medicine and Rehabilitation, 85 (5), pp. 743-748, 2004. DOI: 10.1016/j.apmr.2003.06.010
[12] Rommers, G., Vos, L., Groothoff, J. and Eisma, W., Mobility of people with lower limb amputations, scales and questionnaires: A review. Clinical Rehabilitation, 15 (1), pp. 92-102, 2001. DOI: 10.1191/026921501677990187
[13] Treweek, S. and Condie, M., Three measures of functional outcome for lower limb amputees: A retrospective review. Prosthetics and Orthotics International, 22 (3), pp. 178-185, 1998. DOI: 10.3109/03093649809164482
[14] Van de Meent, H., Hopman, M. and Frölke, J., Walking ability and quality of life in subjects with transfemoral amputation: A comparison of osseointegration with socket prostheses. Archives of Physical Medicine and Rehabilitation, 94 (11), pp. 2174-2178, 2013. DOI: 10.1016/j.apmr.2013.05.020


[15] Neumann, E., Measurement of socket discomfort - Part I: Pressure sensation. Journal of Prosthetics and Orthotics, 13 (4), pp. 99-110, 2001. DOI: 10.1097/00008526-200112000-00010
[16] Zheng, Y., Mak, A. and Leung, K., State-of-the-art methods for geometric and biomechanical assessments of residual limbs: A review. Journal of Rehabilitation Research and Development [Online]. 38 (5), pp. 487-504, 2001. [date of reference June 28th of 2010]. Available at: http://www.rehab.research.va.gov/jour/01/38/5/pdf/zheng.pdf
[17] Harms-Ringdahl, K., Brodin, H., Eklund, L. and Borg, G., Discomfort and pain from loaded passive joint structures. Scandinavian Journal of Rehabilitation Medicine [Online]. 15 (4), pp. 205-211, 1983. [date of reference June 26th of 2010]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/6648392
[18] Meier, R. 3rd, Meeks, E. and Herman, R., Stump-socket fit of below-knee prostheses: Comparison of three methods of measurement. Archives of Physical Medicine and Rehabilitation [Online]. 54 (12), pp. 553-558, 1973. [date of reference July 10th of 2010]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/4759446
[19] Miller, L. and McCay, J., Summary and conclusions from the academy's sixth state-of-the-science conference on lower limb prosthetic outcome measures. Journal of Prosthetics and Orthotics [Online]. 18 (1S), pp. 2-7, 2006. [date of reference July 12th of 2010]. Available at: http://www.oandp.org/jpo/library/2006_01S_002.asp
[20] Smith, D., Special challenges in outcome studies for amputation surgery and prosthetic rehabilitation. Journal of Prosthetics and Orthotics [Online]. 18 (1S), pp. 116-118, 2006. [date of reference July 12th of 2010]. Available at: http://www.oandp.org/jpo/library/2006_01S_116.asp
[21] Major, M., Twiste, M., Kenney, L. and Howard, D., Amputee independent prosthesis properties - A new model for description and measurement. Journal of Biomechanics, 44 (14), pp. 2572-2575, 2011. DOI: 10.1016/j.jbiomech.2011.07.016
[22] Papaioannou, G., Mitrogiannis, C., Nianios, G. and Fiedler, G., Assessment of amputee socket-stump-residual bone kinematics during strenuous activities using Dynamic Roentgen Stereogrammetric Analysis. Journal of Biomechanics, 43 (5), pp. 871-878, 2010. DOI: 10.1016/j.jbiomech.2009.11.013
[23] Pezzin, L., Dillingham, T., MacKenzie, E., Ephraim, P. and Rossbach, P., Use and satisfaction with prosthetic limb devices and related services. Archives of Physical Medicine and Rehabilitation, 85 (5), pp. 723-729, 2004. DOI: 10.1016/j.apmr.2003.06.002
[24] Neumann, E., Measurement of socket discomfort - Part II: Signal detection. Journal of Prosthetics and Orthotics, 13 (4), pp. 111-122, 2001. DOI: 10.1097/00008526-200112000-00011
[25] Zidarov, D., Swaine, B. and Gauthier-Gagnon, C., Quality of life of persons with lower-limb amputation during rehabilitation and at 3-month follow-up. Archives of Physical Medicine and Rehabilitation, 90 (4), pp. 634-645, 2009. DOI: 10.1016/j.apmr.2008.11.003
[26] De Looze, M., Kuijt-Evers, L. and van Dieën, J., Sitting comfort and discomfort and the relationships with objective measures. Ergonomics, 46 (10), pp. 985-997, 2003. DOI: 10.1080/0014013031000121977
[27] Sinha, R., van den Heuvel, W. and Arokiasamy, P., Factors affecting quality of life in lower limb amputees. Prosthetics and Orthotics International, 35 (1), pp. 90-96, 2011. DOI: 10.1177/0309364610397087

[28] Ferrand-Ferri, P., Rodríguez-Piñero Durán, M., Echevarría-Ruiz de Vargas, C. and Zarco-Periñán, M.J., Versión española del Prosthesis Evaluation Questionnaire (PEQ): Parte inicial de su adaptación transcultural. Rehabilitación, 41 (3), pp. 101-107, 2007. DOI: 10.1016/S0048-7120(07)75496-8
[29] Miller, W., Deathe, A. and Speechley, M., Lower extremity prosthetic mobility: A comparison of 3 self-report scales. Archives of Physical Medicine and Rehabilitation, 82 (10), pp. 1432-1440, 2001. DOI: 10.1053/apmr.2001.25987
[30] Guyatt, G., Townsend, M., Berman, L. and Keller, J., A comparison of Likert and visual analogue scales for measuring change in function. Journal of Chronic Diseases, 40 (12), pp. 1129-1133, 1987. DOI: 10.1016/0021-9681(87)90080-4
[31] Ferriero, G., Dughi, D., Orlandini, D., Moscato, T., Nicita, D. and Franchignoni, F., Measuring long-term outcome in people with lower limb amputation: Cross-validation of the Italian versions of the prosthetic profile of the amputee and prosthesis evaluation questionnaire. Europa Medicophysica [Online]. 1 (41), pp. 1-6, 2005. [date of reference September 24th of 2010]. Available at: http://www.researchgate.net/profile/Franco_Franchignoni
[32] Fitzpatrick, R., Davey, C., Buxton, M. and Jones, D., Evaluating patient-based outcome measures for use in clinical trials. Health Technology Assessment [Online]. 2 (14), pp. 1-74, 1998. [date of reference June 25th of 2010]. Available at: http://www.soton.ac.uk/~hta
[33] Hörnquist, J., The concept of quality of life. Scandinavian Journal of Social Medicine, 10 (2), pp. 57-61, 1982. DOI: 10.1177/140349489001800111
[34] Esquenazi, A., Amputation rehabilitation and prosthetic restoration. From surgery to community reintegration. Disability and Rehabilitation, 26 (14-15), pp. 831-836, 2004. DOI: 10.1080/09638280410001708850
[35] Loaiza, J. and Arzola, N., Evolución y tendencias en el desarrollo de prótesis de mano. DYNA [Online]. 78 (169), pp. 191-200, 2011. [date of reference July 25th of 2013]. Available at: http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0012-73532011000500022&lng=en&nrm=iso

J.F. Ramírez-Patiño, graduated as a Mechanical Engineer from the Universidad Nacional de Colombia in 1999, obtained an MSc. in Mechanical Engineering from the Universidad Simón Bolívar, Venezuela, in 2002, and a Dr. in Engineering from the Universidad Nacional de Colombia in 2011. He is a full-time professor in the Departamento de Ingeniería Mecánica, Facultad de Minas, Universidad Nacional de Colombia.

D.F. Gutiérrez-Rôa, graduated as an Industrial Engineer from the Universidad Mayor de San Simón, Bolivia, in 2004, obtained a specialization in Quality Management Systems in 2006, and an MSc. in Administrative Engineering from the Universidad Nacional de Colombia in 2010.

A.A. Correa-Espinal, graduated as an Industrial Engineer from the Universidad Nacional de Colombia in 1995, obtained an MSc. in Industrial Engineering from the Universidad de los Andes, Colombia, in 1999, and a Dr. in Statistics and Operations Research from the Universidad Politécnica de Cataluña, Spain, in 2007. He is a full-time professor in the Departamento de Ingeniería de la Organización, Facultad de Minas, Universidad Nacional de Colombia.

Appendix 1. Factors, questions and multiple-choice answers for assessing the perception of comfort in persons with transfemoral amputation (the instrument was administered in Spanish):
Apariencia
1. ¿Qué tan agradable es el contacto de su muñón con su socket? o Nada agradable o Muy poco agradable o Poco agradable o Medianamente agradable o Muy agradable o Exageradamente agradable
2. ¿Qué tan bien luce Usted con su prótesis? o No luzco bien

o Luzco muy poco bien o Luzco poco bien o Luzco medianamente bien o Luzco muy bien o Luzco exageradamente bien
3. ¿Durante el uso de su prótesis, cuánto le molesta el sonido que produce? o No me molesta o Me molesta muy poco o Me molesta poco o Medio me molesta


o Me molesta mucho o Su sonido es insoportable
4. ¿Cuántas veces ha dañado su prótesis a su ropa? o Nunca la ha dañado o La ha dañado muy pocas veces o La ha dañado pocas veces o La ha dañado algunas veces o La ha dañado muchas veces o Siempre la daña
5. ¿Qué tanto le limita su prótesis la elección de ropa y zapatos? o No me limita o Me limita muy poco o Me limita poco o Me limita medianamente o Me limita mucho o Siempre me limita
Bienestar
6. Usando su prótesis, ¿qué tanto aumenta la sudoración de su muñón? o No aumenta o Aumenta muy poco o Aumenta poco o Aumenta medianamente o Aumenta mucho o Aumenta exageradamente
7. En el último año, ¿cuántas veces ha tenido que cambiar de socket porque su muñón ha cambiado de tamaño? o Cero veces o Una vez o Dos veces o Tres veces o Cuatro veces o Cinco veces o más
8. ¿Ha sentido o detectado algún brote o salpullido en su muñón por el uso de su prótesis? o Nunca lo he sentido o Muy pocas veces lo he sentido o Pocas veces lo he sentido o Algunas veces lo he sentido o Muchas veces lo he sentido o Siempre lo he sentido
9. ¿Qué tan a menudo se le generan ampollas, raspones o moretones en su muñón por el socket? o Nunca los he tenido o Muy pocas veces los he tenido o Pocas veces los he tenido o Algunas veces los he tenido o Muchas veces los he tenido o Siempre los he tenido
10. ¿Con qué frecuencia las molestias ocasionadas por el uso de su prótesis le han incapacitado para realizar sus actividades cotidianas? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
Dolor
11. Usando la prótesis, ¿con qué frecuencia ha sentido dolor en su muñón? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
12. Usando su prótesis, ¿con qué frecuencia ha sentido dolor en su otra pierna? o Nunca o Muy pocas veces o Pocas veces

o Algunas veces o Muchas veces o Siempre
13. Usando su prótesis, ¿con qué frecuencia ha sentido dolor en su espalda? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
14. ¿Con qué frecuencia ha dejado de usar su prótesis debido al dolor que ésta le causa? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
15. ¿Con qué frecuencia ha sentido dolor mientras se coloca su prótesis? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
Funcionalidad
16. ¿Qué tan bien se ajusta su socket a su muñón? o No se ajusta bien o Se ajusta muy poco bien o Se ajusta poco bien o Se ajusta medianamente bien o Se ajusta muy bien o Se ajusta exageradamente bien
17. ¿Qué tan pesada siente su prótesis? o Nada pesada o Muy poco pesada o Poco pesada o Medianamente pesada o Muy pesada o Exageradamente pesada
18. Cuando usa su prótesis, ¿su socket le dificulta estar sentado? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
19. ¿Cuánto esfuerzo requiere para realizar sus actividades cotidianas utilizando su prótesis? o Ningún esfuerzo o Muy poco esfuerzo o Poco esfuerzo o Algún esfuerzo o Mucho esfuerzo o Esfuerzo exagerado
20. ¿Qué tan fácil es ponerse su prótesis? o Nada fácil o Muy poco fácil o Poco fácil o Medianamente fácil o Muy fácil o Exageradamente fácil
21. ¿En cuál de los siguientes niveles de movilidad se clasificaría usted? o Puede caminar a un ritmo fijo o Puede caminar sobre obstáculos o Puede caminar a un ritmo variable o Puede caminar libremente
Psicológico
22. ¿Qué tan feliz se siente con su prótesis?



o Nada feliz o Muy poco feliz o Poco feliz o Medianamente feliz o Muy feliz o Exageradamente feliz
23. ¿Qué tan frecuentemente se siente frustrado con su prótesis? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
24. Usando su prótesis, ¿siente usted que la gente la observa? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
25. Usando su prótesis, ¿con qué frecuencia hace uso de piscinas, gimnasios o centros recreativos? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
26. Usando su prótesis, ¿con qué frecuencia ha evitado relacionarse con personas desconocidas? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre

Social
27. Usando su prótesis, ¿con qué frecuencia evita hacer actividades que puedan generar reacciones en los demás? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
28. Usando su prótesis, ¿con qué frecuencia se ha sentido rechazado por su familia? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre

29. Usando su prótesis, ¿con qué frecuencia se ha sentido rechazado por su entorno social? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre
30. Usando su prótesis, ¿con qué frecuencia se ha sentido incapaz de cuidar de alguien más? o Nunca o Muy pocas veces o Pocas veces o Algunas veces o Muchas veces o Siempre




The water budget and modeling of the Montes Torozos' karst aquifer (Valladolid, Spain)
Germán Sanz-Lobón a, Roberto Martínez-Alegría b, Javier Taboada c, Teresa Albuquerque d, Margarida Antunes e & Isabel Montequi f

a Escola de Engenharia Civil, Universidade Federal de Goiás, Goiânia, GO, Brasil. gsl9384@yahoo.com
b Escuela Politécnica, Universidad Europea Miguel de Cervantes, Valladolid, España. rmartinez@uemc.es
c Escuela Técnica Superior de Ingeniería de Minas, Universidad de Vigo, Vigo, España. jtaboada@uvigo.es
d Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal. teresal@ipcb.pt
e Escola Superior Agrária, Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal. imantunes@ipcb.pt
f Escuela Politécnica, Universidad Europea Miguel de Cervantes, Valladolid, España. imontequi@uemc.es

Received: August 02nd, 2014. Received in revised form: December 10th, 2014. Accepted: May 6th, 2015.

Abstract
The core of this paper is the computation of the hydrological balance of a karst aquifer in the sedimentary Duero watershed, in the middle west of Spain. The results obtained for the hydrological balance were used to construct simulated scenarios. In order to characterize the spatial and temporal dynamics of the water resources throughout the hydrological year, geostatistical (Space-Stat software) and numerical (Visual Modflow software) methodologies were applied. The Montes Torozos aquifer is an unconfined sedimentary karst aquifer, and its hydrological balance is governed by rainfall and drainage.
Keywords: karst aquifer; hydrologic model; groundwater flow; kriging; sustainability.

Balance y modelización del acuífero kárstico de los Montes Torozos (Valladolid, España)
Resumen
En este trabajo se presenta el balance hídrico de una masa de agua subterránea de naturaleza kárstica de la cuenca sedimentaria del Duero, centro oeste de España. Los datos obtenidos para la realización del balance han sido introducidos en una herramienta de simulación con el fin de conocer la evolución temporal de los recursos hídricos a lo largo del año hidrológico. Para ello ha sido necesario recurrir a la utilización de técnicas geoestadísticas y el programa Visual Modflow. La masa de agua de los Montes Torozos es un acuífero kárstico libre y colgado de origen sedimentario, y su balance está determinado por las precipitaciones y por el drenaje.
Palabras clave: acuífero kárstico; modelo hidrogeológico; flujo subterráneo; kriging; sostenibilidad.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 203-208. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online
DOI: http://dx.doi.org/10.15446/dyna.v82n191.44732

1. Introduction

This work is founded on the need to understand and quantify the water resource. Its main objective is to determine the volume of the resources of the Páramo de los Montes Torozos groundwater body, delimited in the 2009 Hydrological Plan of the Duero Basin (code 400032) [1]. It is a numerical approximation to the hydrometeorological model of a karst aquifer.

Starting from the geometric description of the aquifer, its qualitative and quantitative characteristics were described and analyzed, and its dynamics and evolution throughout the hydrological year were established. The basis of this characterization is the water balance, from which the inputs (contributions) and outputs (extractions) that condition the aquifer regime, the volumes of resources and the available reserves were determined. The theoretical principles used as the knowledge base are the elementary theory of water flow in porous media (Darcy's law) and well hydraulics [2].

The Páramo de los Montes Torozos groundwater body lies within the Duero basin and extends across the provinces of Valladolid and Palencia, covering an area of about 1000 km² (Fig. 1). The study area has a Mediterranean climate with a mean temperature of 11.14 °C, a precipitation of 456 mm/year and an evapotranspiration of 312 mm/year in a dry year [3]. This yields a water balance of close to 144 mm/year of effective rainfall capable of reaching the aquifer.

Figure 1. Location of the Montes Torozos plateau. Source: the author.

2. Material and methods

The characterization of all of the environmental variables provided a holistic view from which to estimate the water balance and the usage regime of the Montes Torozos aquifer, as the essential support of any anthropic activity. To this end, starting from a Geographic Information System, a multidisciplinary working method was devised, based on a continuous process of data acquisition and analysis that allows the creation of a valid hydrological model, according to the following methodology (Fig. 2).

The methodology was supported by software tools that facilitate the visualization, and therefore the interpretation, of the results obtained. The Space-Stat program was used for the geostatistical analysis. Its results were then exported to a Geographic Information System (GIS) implemented on the ArcGIS platform, from which all the cartographic information needed to build the flow model in Visual Modflow was created, edited and processed.

Figure 2. Diagram of the proposed methodology: cartography production and analytical results feed the construction of geostatistical and flow models, which support resource management. Source: the author.

3. Background

The computation of the water balance of an aquifer (or groundwater body), as well as the construction of behavioral and predictive models, are standard tools in water resource management. Their construction requires a series of preliminary steps: the spatial delimitation of the aquifer, the establishment of its boundaries, and the processing of the piezometric and climatic data. The geographic and hydrogeological delimitation was carried out in accordance with the description of the Confederación Hidrográfica del Duero (CHD) [1]. The piezometry was calculated using kriging-type geostatistical techniques [4]. For the synthesis of the climatic data, the necessary basic descriptive statistics (mean, median, standard deviation, etc.) were used, dismissing as unnecessary the use of advanced storage, multidimensional analysis or data mining techniques [5,6]. Finally, the recharge of the groundwater body was calculated based on the method proposed by Mendes [7]. Despite being a fundamental input to the model, the use of comparative methodologies [8] to bound the uncertainty is difficult here, since it would require empirical formulas with more precise and longer climatic series in order to obtain a better fit to bibliographic data calculated for nearby specific areas [1]. The numerical tools for hydrogeological modeling and simulation, in turn, are described in detail by numerous authors such as Custodio and Llamas [2], and have been applied in different geographic regions [9], responding in each case to the available information and the perception of the modeler.

4. Geological and structural setting

From the geological point of view, the aquifer under study is an elevated calcareous plateau (páramo), bounded at its base by a practically impermeable clay package. Materials belonging to the Neogene (Miocene) and the Quaternary outcrop in the study area. The Miocene occupies the entire area and is partially covered by Pliocene and Quaternary alluvial and colluvial materials of varied nature (Table 1).


Stratigraphically, the Miocene materials correspond to the infilling terms of the sedimentary series of the Tertiary Duero basin, and consist of sediments generated in a very saline, very low-energy environment, deposited in the Pontian. The three sections of the Castilian Miocene of the central zone of the Duero basin are represented in the Páramo de Torozos: the Tierra de Campos Facies, the Cuestas Facies, and the Páramo Limestones. That is, the plateau is formed by a lower level consisting of a series of clays and gypsiferous and calcareous marls that shape the slopes (cuestas). The contact is gradual toward the top, and intercalated calcareous levels appear until they form continuous limestone series, which constitute the main body of the aquifer [10].

The Tierra de Campos Facies is a series formed by ochre silty-sandy clays, with intercalated sandy and conglomeratic paleochannels. This series presents a certain permeability in the sandy and conglomeratic levels. The Cuestas Facies consists of clays, gypsiferous marls, limestone intercalations and, in some areas, specular gypsum. From a hydrogeological point of view, as a whole, it can be considered an impermeable package. The Páramo Limestones include three sedimentary levels: two limestone levels separated by a level of materials with a larger terrigenous component (calcareous and sandy marls). Hydrogeologically, the porosity of this unit develops through karstification processes that exploit the breccias of tectonic or diagenetic fracture zones, nodules or fossil molds (moldic porosity), and root-insertion openings. The microcaverns are usually filled with residual red clays (terra rossa) from the karst dissolution process.

Finally, neotectonic processes in the last stages of the Alpine paroxysm at the end of the Tertiary and beginning of the Quaternary (Plio-Quaternary), synchronous with the sedimentation of the calcareous plateau formation, affected its geometry, with a generalized westward tilting of the whole Tertiary sedimentary basin and a gentle lateral facies change represented by a wedging out of the calcareous series, also toward the west. These neotectonic processes generated two large fractures that compartmentalize the brittle Hercynian basement of the Iberian Peninsula and are reflected as lineaments in the more ductile Tertiary sedimentary materials: the Ventaniella fault, trending NW-SE, and the Alentejo-Plasencia fault, trending SW-NE [11]. These alignments are reproduced in the plateau materials, the intersection of the two directions coinciding with weak points along which karstification progresses, as reflected in the coincident alignment of the doline network. On the plateau, this tilting and wedging is corroborated by the greater development of the streams along the southwestern edge, which have larger contributions and greater erosive power.

Table 1. Stratigraphic description of the plateau (the main aquifer formation, the Páramo limestones, is marked with an asterisk).
Lithology | Period | Area (km²) | Thickness (m)
Gravels, sands, silts (alluvial deposits, valley bottoms and low terraces of the main rivers) | Quaternary | 54 | 2-5
Gravels, sands, silts, clays (middle and high terrace deposits) | Pleistocene-Holocene | 59 | 5-10
Limestones, marly limestones and calcareous and oncolitic breccias. Páramo limestones 2 | Upper Miocene, Tortonian | 0 | 0-1
Marls, silts, sands and ochre or red clays | Upper Miocene, Tortonian | 1 | 0-15
* Limestones and marls. Páramo limestones 1 or lower | Upper Miocene, Vallesian | 930 | 2-18
Gypsiferous marls (Cuestas Facies) | Middle-Upper Miocene | 113 | 2-80
Marls, marly limestones and clays (Cuestas Facies) | Middle-Upper Miocene | 278 | 2-6
Ochre silts and sands, with conglomeratic levels and crusts (Tierra de Campos Facies) | Middle Miocene | 80 | 30-60
Marls, marly limestones and clays (Dueñas Facies) | Lower-Middle Miocene | 20 | 29-329
Arkoses and clayey silts, white, greenish-gray or ochre, with crusts | Lower-Middle Miocene | 15 | 29-41
Source: Adapted from [10].

5. Discussion and results

The hydrogeological characterization was carried out through the calculation of the piezometry; the porosity, permeability and transmissivity; and the coefficient α and the recession curve. The balance was determined from the estimation of the recharge, the contributions and the extractions.

5.1. Calculation of the piezometry

The piezometry was determined by means of a geostatistical model based on a spherical kriging interpolation. In accordance with the preferential geological orientation, an anisotropic model was built with a preferential flow in the NE-SW direction, at an angle of approximately 45°. From the results obtained, two differentiated structures were identified: one of approximately 18 km, corresponding to the northern part, and another of 43 km that includes the rest of the plateau (Fig. 3).

Figure 3. Map and variograms resulting from the interpolation with Space-Stat. Source: [12].
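As an illustration of this step, the following is a minimal sketch, assuming hypothetical observation coordinates and variogram parameters, of how an anisotropic spherical kriging like that of Section 5.1 could be reproduced with the open-source PyKrige package (the paper used Space-Stat, so this is not the authors' actual workflow):

```python
# Minimal sketch of anisotropic spherical kriging of piezometric heads.
# Coordinates, heads, sill and nugget below are illustrative placeholders.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical piezometric observations: x, y in meters, head in m a.s.l.
x = np.array([310500.0, 318200.0, 325900.0, 333100.0])
y = np.array([4610300.0, 4617800.0, 4604900.0, 4612500.0])
head = np.array([845.2, 838.7, 851.4, 830.9])

ok = OrdinaryKriging(
    x, y, head,
    variogram_model="spherical",
    variogram_parameters={"sill": 60.0, "range": 18000.0, "nugget": 1.0},
    anisotropy_angle=45.0,    # preferential NE-SW direction (Section 5.1)
    anisotropy_scaling=2.4,   # ~43 km / 18 km ratio of the two structures
)

# Interpolate heads onto a regular grid covering the plateau
gridx = np.arange(305000.0, 340000.0, 500.0)
gridy = np.arange(4600000.0, 4625000.0, 500.0)
z, ss = ok.execute("grid", gridx, gridy)  # kriged heads and kriging variance
```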




5.2. Basic hydrogeological parameters

The estimation of the hydrogeological parameters starts from bibliographic data for the following hydrogeological variables [2]:
• Effective porosity of the limestones: 15%
• Total porosity of the limestones: 30%
• Storage coefficient: 0.15

To determine the inertia of the aquifer, the variations in discharge were correlated with the mean precipitation. In this way, a correlation R² of 0.84 was detected between the precipitation that fell between October and March of 2011 and the discharge measured between February and July of 2012, so it can be stated that there is a direct discharge-precipitation relationship with a lag (or inertia) of four months.

From the inertia period and the hydraulic gradient obtained from the piezometry, the permeability [K] can be calculated. Accepting an inertia of four months, and knowing that the spring is located 9,500 m away, a permeability of 9.16×10⁻³ cm/s is obtained. Likewise, assuming a mean aquifer thickness of 12 m, the transmissivity [T] was estimated at 950 m²/d.
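The four-month inertia could be screened, for example, with a simple lagged correlation; the following is a minimal sketch with hypothetical monthly series (the actual station records are not reproduced in the paper):

```python
# Minimal sketch of the Section 5.2 inertia analysis: correlate monthly
# precipitation with discharge measured `lag` months later and inspect R^2.
import pandas as pd

months = pd.period_range("2011-10", periods=12, freq="M")
rain = pd.Series([62, 48, 55, 40, 35, 30, 22, 15, 10, 20, 45, 58], index=months)
flow = pd.Series([150, 155, 170, 190, 210, 230, 245, 240, 220, 200, 180, 165],
                 index=months)

for lag in range(0, 7):
    r = rain.corr(flow.shift(-lag))  # discharge responds `lag` months after rain
    print(f"lag = {lag} months, R^2 = {r**2:.2f}")
```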

5.3. Calculation of the coefficient α

Applying the equation for dome-shaped aquifers with radial drainage, such as this one, the water-release process follows eq. (1) [2], where T is the transmissivity, S the storage coefficient and l the distance to the discharge point:

\alpha = \frac{\pi^2 T}{4 S l^2} \qquad (1)

In this way a value of α equal to 4.32×10⁻³ (dimensionless) is obtained.

5.4. Approximation of the recession (discharge) curve

With α known, the method of the logarithms of the discharges [7] was applied, creating the recession line of eq. (2):

\ln Q = 9.85 - 0.0002\,t \qquad (2)

Undoing the natural logarithm of the discharges yields:

Q_0 = 18958.35\ \mathrm{m^3/d}

5.5. Recharge

Once the discharge curve is approximated by a straight line, the variation of the stored water volume (V₀) can be obtained through eq. (3), taken from the work of Custodio and Llamas [2]:

V_0 = \int_0^{\infty} Q\,dt = \int_0^{\infty} Q_0 e^{-\alpha t}\,dt = \frac{Q_0}{\alpha} \qquad (3)

where Q₀ is the discharge at the start of the recession and α is the recession coefficient, defined by the slope of the line. Substituting:

V_0 = \frac{18958.35}{0.00017} = 111.52\ \mathrm{hm^3}

Considering a triangular area of 1452 ha for the basin, the amount of infiltrated water (I) can be calculated using the relation of eq. (4):

I = \frac{V_0}{A} = \frac{111519705.88}{14520451} = 7.68\ \mathrm{mm} \qquad (4)

5.6. Calculation of contributions

Surface contributions (runoff). These were computed from the mean annual and monthly contribution raster maps of the Confederación Hidrográfica del Duero [1]. The mean result is a contribution of 8.4 mm/year which, multiplied by the surface of the plateau, yields a resource volume of 8.4 hm³.
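The recession arithmetic of Sections 5.3-5.5 can be checked in a few lines; the following is a minimal sketch reproducing the published figures:

```python
# Minimal sketch: recession arithmetic of Sections 5.3-5.5.
import math

intercept = 9.85      # from eq. (2): ln Q = 9.85 - 0.0002 t
slope = 0.00017       # recession coefficient actually used for V0 in Section 5.5

Q0 = math.exp(intercept)        # initial discharge (eq. (2) at t = 0)
V0 = Q0 / slope                 # stored volume, m^3 (eq. (3))
A = 14_520_451.0                # basin area, m^2 (1452 ha)
I = V0 / A                      # infiltrated water (eq. (4))

print(f"Q0 = {Q0:,.2f} m^3/d")  # ≈ 18,958.35
print(f"V0 = {V0/1e6:.2f} hm^3")  # ≈ 111.52
print(f"I  = {I:.2f}")          # ≈ 7.68, the value reported in eq. (4)
```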


Recharge. The aquifer levels depend on the infiltration coefficients of the different materials; this model only considers the useful water resources stored in the limestones. From the infiltration, the effective rainfall and the contributions, an infiltration coefficient of 94.17% is obtained.

5.7. Calculation of extractions

Perimeter drainage. The radial perimeter drainage was identified from the mapping of 78 water points. The result is a drainage estimated at 10.22 hm³/year.

Extractions through wells. The calculation of the extractions was based on data obtained from the Duero spatial data infrastructure (IDE) on groundwater extractions and agricultural demand units [1]. A total of 211 abstractions were identified, with an estimated total annual demand of 9.02 hm³.

5.8. Water balance

The volume variations of the aquifer are calculated by subtracting the outputs from the inputs. This gives an annual resource from precipitation estimated at some 447.95 hm³/year, and losses of 306.85 hm³/year through evapotranspiration, 9.02 hm³/year through pumping extractions, and a perimeter drainage of 10.22 hm³/year. This implies a positive annual water balance of 113.63 hm³/year, part of which is lost through the drip effect from the limestones toward the impermeable package. This figure can be considered the simplest model of the aquifer.

5.9. Flow model

The starting point for the simulation is the conceptual model (Fig. 4), from which the movement of water within the aquifer is defined. The geometric modeling of the aquifer started from the discretization of the area, supported by field reconnaissance of the permeable lithologies and their impermeable contacts. A 50 x 50 m grid was defined, onto which the hydrogeological and exploitation data of the aquifer were implemented.

Figure 4. Conceptual model: precipitation, evapotranspiration and well abstractions acting on the Montes Torozos aquifer, with diffuse and perimeter drainage as outputs. Source: [10].
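As an illustration of Section 5.9, the following is a minimal sketch, with placeholder geometry and boundary data, of how a comparable single-layer model could be assembled with the open-source FloPy interface to MODFLOW-2005 (the paper used Visual Modflow, so this is not the authors' actual model):

```python
# Minimal sketch: single-layer unconfined model on a 50 x 50 m grid with
# recharge and perimeter drains. Extent, elevations and drain placement are
# illustrative placeholders; K, thickness and recharge follow Sections 2 and 5.2.
import flopy

m = flopy.modflow.Modflow("torozos", exe_name="mf2005")

nrow, ncol = 100, 100  # placeholder extent
dis = flopy.modflow.ModflowDis(
    m, nlay=1, nrow=nrow, ncol=ncol, delr=50.0, delc=50.0,
    top=850.0, botm=838.0,  # ~12 m mean thickness (Section 5.2)
)
bas = flopy.modflow.ModflowBas(m, ibound=1, strt=848.0)
lpf = flopy.modflow.ModflowLpf(m, hk=7.9, sy=0.15, laytyp=1)  # 9.16e-3 cm/s ≈ 7.9 m/d
rch = flopy.modflow.ModflowRch(m, rech=144.0 / 365.0 / 1000.0)  # 144 mm/yr effective rain

# Perimeter drains along one edge (illustrative only): lay, row, col, elev, cond
drains = [[0, r, 0, 840.0, 50.0] for r in range(nrow)]
drn = flopy.modflow.ModflowDrn(m, stress_period_data={0: drains})

pcg = flopy.modflow.ModflowPcg(m)
oc = flopy.modflow.ModflowOc(m)
m.write_input()
# m.run_model()  # requires an mf2005 executable on the PATH
```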

Acuífero Montes Torozos

Drenaje difuso

Drenaje perimetral

Figura 5. Resultado gráfico del modelo: Piezometría y líneas de flujo. Fuente: [10].

El resultado obtenido, son los valores de la piezometría en todas y cada una de las celdas del modelo (Fig. 5). El modelo también es capaz de determinar el flujo subterráneo, y por lo tanto es capaz de representar la trayectoria que sufriría una partícula en caso de existir un vertido de una sustancia soluble. Estos resultados espaciales se ven corroborados por los resultados numéricos, ya que el resultado del balance del modelo es de 101 hm3, valor muy similar a los 113 hm3 obtenidos a partir del balance hídrico convencional. De esta forma se obtiene que la medida registrada de los tres piezómetros de control, se encuentra dentro del intervalo de confianza del 95% de los calculados por el modelo. 6. Conclusiones Las calizas del páramo de los Montes Torozos configuran un acuífero libre, colgado topográficamente y aislado; que se recarga directamente por lluvia; y se drena por: a) drenaje perimetral a través de manantiales; b) el efecto goteo difuso a lo largo del contacto entre las calizas del páramo superiores y el nivel impermeable de las arcillas de la Facies Cuestas; y c) a través de explotaciones de pozos. El resultado del balance hídrico del acuífero es positivo con un recurso anual de 100 hm3. A partir de la reconstrucción piezométrica del régimen de extracciones y del aforo de manantiales, se ha modelizado y simulado el comportamiento del flujo del acuífero en régimen estacionario y transitorio. Los resultados obtenidos corroboran diversas hipótesis sobre flujo dominante hacia el SO. Se ha observado una alta sensibilidad en la variación estacional de los recursos hídricos, ya que responden de forma directa a los cambios. Esta respuesta permite calificar al acuífero de fácilmente recuperable sin necesidad de intervención.




Agradecimientos

Este trabajo forma parte del proyecto de investigación desarrollado en la Tesis Doctoral "Modelo y simulación hidrogeológica para la sostenibilidad del acuífero libre de los Montes Torozos" de la Universidad de Vigo.

Bibliografía

[1] Confederación Hidrográfica del Duero. Sistema de Información de la Confederación Hidrográfica del Duero. [En línea]. [Fecha de consulta: abril 28 de 2015]. Disponible en: http://www.mirame.chduero.es
[2] Custodio, E. y Llamas, M.R., Hidrología subterránea. 2ª edición corregida. Barcelona: Editorial Omega, 2001.
[3] Martínez-Alegría, R., Taboada, J., Sanz, G. and Giraldez, E., Sustainability in the exploitation of an aquifer for agriculture and urban water supply uses. AgroLife Scientific Journal, [Online]. 3 (2), pp. 9-12, 2014. [Date of reference: April 28th of 2015]. Available at: http://agrolifejournal.usamv.ro/pdf/vol3_2/art1.pdf
[4] Mejía, O., Betancur, T. y Londoño, L., Aplicación de técnicas geoestadísticas en la hidrogeología del bajo Cauca antioqueño. DYNA, 74 (152), pp. 136-149, 2007.
[5] Duque-Méndez, N.D., Orozco-Alzate, M. and Vélez, J.J., Hydrometeorological data analysis using OLAP techniques. DYNA, 81 (185), pp. 160-167, 2014. DOI: 10.15446/dyna.v81n185.37700
[6] Keskin, M.E., Taylan, D. and Kucuksille, E.U., Data mining process for modeling hydrological time series. Hydrology Research, 44 (1), pp. 78-88, 2013. DOI: 10.2166/nh.2012.003
[7] Mendes, E., Perímetros de protecção de captações de água subterrânea para consumo humano em zonas de montanha. Caso de estudo da Cidade da Covilhã. Tesis MSc., Departamento de Engenharia Civil e Arquitectura, Universidade da Beira Interior, Covilhã, Portugal, 2006.
[8] Quiroz, O.M., Martínez, D.E. y Massone, H.E., Evaluación comparativa de métodos de cálculo de recarga en ambientes de llanura. La llanura interserrana bonaerense (Argentina), como caso de estudio. DYNA, 79 (171), pp. 239-247, 2012.
[9] Betancur, T. y Palacio, C., La modelación numérica como herramienta para la exploración hidrogeológica y construcción de modelos conceptuales (Caso de aplicación: Bajo Cauca antioqueño). DYNA, 76 (160), pp. 137-149, 2009.
[10] Sanz, G., Modelo y simulación hidrogeológica para la sostenibilidad del acuífero libre de los Montes Torozos. Tesis Doctoral en Tecnología Medioambiental, Universidad de Vigo, Vigo, España, 2014. [En línea]. 235 P. [Fecha de consulta: abril 28 de 2015]. Disponible en: http://dialnet.unirioja.es/servlet/tesis?codigo=42524
[11] García, F.J., Moreno, F. and Nozal, F., Neotectonics and associated seismicity in Northwestern Duero Basin. Publicación IGN, Serie Monografías, [Online]. 8, pp. 255-267, 1991. [Date of reference: April 28th of 2015]. Available at: http://www.researchgate.net/profile/F_Gracia/publication/259707336_Neotectonics_and_associated_seismicity_in_Northwestern_Duero_Basin/links/0f317530dd253b6b0c000000.pdf
[12] Sanz, G., Albuquerque, T., Martínez-Alegría, R., Antunes, M. and Taboada, J., Hydrogeological vulnerability assessment in urban systems, Spain. Accepted for oral presentation, ICESD 2015, February 2015, Amsterdam, Netherlands. http://www.icesd.org

G. Sanz-Lobón, es PhD en Tecnología Ambiental por la Escuela de Ingeniería de Minas de la Universidad de Vigo, España, en 2014. Es investigador de la Escuela de Ingeniería Civil de la Universidad Federal de Goiás, Brasil, vinculado a la evaluación y gestión de riesgos e impactos ambientales usando herramientas SIG y de simulación numérica.

R. Martínez-Alegría, es PhD en Geología por la Universidad de Vigo, España, en 2005. Es profesor invitado en la Universidad Europea Miguel de Cervantes, Valladolid, España, y Técnico Superior en riesgos naturales y antrópicos de Protección Civil de la Delegación del Gobierno en Castilla y León, España, donde desarrolla trabajos de investigación asociados con la gestión y prevención de riesgos.

J. Taboada, es PhD en Ingeniería de Minas en 1993 por la Universidad de Oviedo, España, y Catedrático de la Universidad de Vigo, España. Su trayectoria profesional se ha centrado en la investigación sobre la explotación minera y las tecnologías respetuosas con el medio ambiente, y se ha especializado en la evaluación y modelación de riesgos e impactos ambientales. El Dr. Taboada es coordinador del Programa de Doctorado de Ingeniería de Minas de la Universidad de Vigo, España. Uno de sus proyectos más recientes es el software SISMIGAL, cuyo objetivo era desarrollar una herramienta para la gestión del riesgo sísmico en Galicia (zona noroeste de España).

T. Albuquerque, es PhD en Ingeniería de Minas en 1999 por la Universidad Técnica de Lisboa, Portugal. Es profesora asociada de dedicación exclusiva en el Departamento de Ingeniería Civil del Instituto Politécnico de Castelo Branco, Portugal. Trabaja en la evaluación de impactos y vulnerabilidad ambiental, así como en el análisis de riesgos usando sistemas de información geográfica y simulaciones numéricas y geoestadísticas. La Dra. Albuquerque es miembro de IMGA (International Medical Geology Association) e IAMG (International Association of Mathematical Geoscience), e investigadora senior de APCBEES (Asia-Pacific Chemical, Biological & Environmental Engineering Society).

M. Antunes, es PhD en Geoquímica por la Universidad de Coimbra, Portugal. Es profesora asociada con dedicación exclusiva en el Departamento de Recursos Naturales del Instituto Politécnico de Castelo Branco, Portugal. Su trayectoria está ligada a la evaluación de la vulnerabilidad, los impactos y los riesgos ambientales. La Dra. Antunes es miembro de IMGA (International Medical Geology Association) e IAMG (International Association of Mathematical Geoscience), e investigadora senior de APCBEES (Asia-Pacific Chemical, Biological & Environmental Engineering Society).

I. Montequi, es Dra. en Ciencias, Sección de Químicas, por la Universidad de Valladolid, España, en 1986. Es profesora en la Universidad Europea Miguel de Cervantes, España, con la categoría de Profesor Director; fue Directora de la Escuela Politécnica Superior de 2002 a 2004 y es Defensora de la Comunidad Universitaria desde mayo de 2007. La Dra. Montequi fue Investigadora Principal del proyecto AGUEDA y cuenta con una amplia experiencia en la gestión de proyectos científicos y de investigación sobre química analítica y cinética de las reacciones.

Área Curricular de Medio Ambiente Oferta de Posgrados

Especialización en Aprovechamiento de Recursos Hidráulicos Especialización en Gestión Ambiental Maestría en Ingeniería Recursos Hidráulicos Maestría en Medio Ambiente y Desarrollo Doctorado en Ingeniería - Recursos Hidráulicos Doctorado Interinstitucional en Ciencias del Mar Mayor información:

E-mail: acia_med@unal.edu.co Teléfono: (57-4) 425 5105


Determination of the tide constituents at Livingston and Deception Islands (South Shetland Islands, Antarctica), using annual time series

Bismarck Jigena-Antelo a,d, Juan Vidal b,d & Manuel Berrocoso c,d

a. Department of Applied Physics, University of Cádiz, Puerto Real, Cádiz, Spain. bismarck.jigena@gm.uca.es
b. Department of Naval Construction, University of Cádiz, Puerto Real, Cádiz, Spain. juan.vidal@uca.es
c. Department of Mathematics, University of Cádiz, Puerto Real, Cádiz, Spain. manuel.berrocoso@uca.es
d. Astronomy, Geodesy and Cartography Laboratory (LAGC-UCA), Faculty of Science, University of Cádiz, Puerto Real, Cádiz, Spain.

Received: August 26th, 2014. Received in revised form: January 20th, 2015. Accepted: January 26th, 2015

Abstract
A detailed study is presented of the tidal constituents for Livingston and Deception Islands (Antarctica), obtained at the LIVMAR and DECMAR tide gauge stations. Data were acquired with tide gauge pressure sensors and calculated from a long time series of 798 days of data-logging, using the least-squares harmonic estimation method. The results show an improvement over previous results in the region. Seventy tidal constituents were obtained, of which 19 were the most representative, with amplitudes greater than 1 cm and a contribution of 93% of the wave energy. At both stations, it was confirmed that the tides are mixed, with a semi-diurnal behavior. The tide gauge benchmarks (TGBMs) were linked to the vertical and horizontal Antarctic geodetic networks, which provides a very important contribution for geodetic, oceanographic and hydrographic studies in the area.

Keywords: Antarctica; tidal constituents; harmonic analysis; tidal gauge station; tidal series; tide; Deception Island; Livingston Island; Spanish Antarctic Base.

Determinación de las constituyentes de marea en las Islas Livingston y Decepción (Islas Shetland del Sur, Antártida), usando series temporales anuales

Resumen
Se presenta un estudio detallado de las constituyentes de marea en las islas Livingston y Decepción (Antártida), obtenidas en las estaciones mareográficas LIVMAR y DECMAR. Los datos fueron tomados con mareógrafos de sensores de presión y los cálculos se realizaron a partir de series de marea de largo período, con 798 días de observación, aplicando el método de análisis armónico por mínimos cuadrados. Los resultados obtenidos mejoran trabajos previos en la región: se han obtenido setenta constituyentes, siendo 19 las más representativas, con amplitudes superiores a 1 cm y un aporte del 93% de la energía de la onda. En ambas estaciones se confirma que la marea tiene un régimen mixto de comportamiento semi-diurno. Los benchmarks de referencia de mareógrafos (TGBM) se encuentran enlazados a las redes geodésicas de control vertical y horizontal de la Antártida, lo que es de gran importancia para trabajos geodésicos, oceanográficos e hidrográficos en la zona.

Palabras clave: Antártida; constituyentes de marea; análisis armónico; estación de mareas; series de marea; marea; Isla Decepción; Isla Livingston; Base Antártica Española.

1. Introducción

Las Islas Shetland del Sur, al ser parte de la Antártida, constituyen una zona geográfica bastante extrema, con un entorno bastante complejo para realizar observaciones oceanográficas y geofísicas en el terreno.

Las observaciones de marea en la Antártida son muy escasas y la mayoría corresponde a registros de la temporada del verano austral, debido a las duras condiciones climáticas de la Antártida durante el resto del año, cuando normalmente las aguas de las zonas costeras permanecen congeladas hasta el inicio del verano [10].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 209-218. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.45207



Se tienen muy pocos registros con un período de tiempo suficiente que permita obtener las principales constituyentes de marea; en la mayoría de los casos los datos han sido tomados en la temporada de verano y no se conocen registros de marea de un año de duración [18, 28]. Hasta la fecha, para muchos estudios oceanográficos y geofísicos se han utilizado datos obtenidos indirectamente, como cartas cotidales, cartas de iso-amplitud y datos de altimetría por satélite. Es muy importante obtener una serie de mareas basada en medidas directas, pues permite estudiar, con datos reales, la propagación y las características de la onda en el entorno de la estación y extrapolarlas a lugares próximos.

Las primeras observaciones de los niveles del mar en la isla Decepción se realizaron en 1970 por personal del Servicio Hidrográfico Argentino (SHA); los datos de marea se obtuvieron durante cuatro días en verano por medio de una regla de marea (mareómetro) instalada cerca de la Base Argentina "Decepción". Puede verse un histórico de los registros de marea en esta área en [4, 6]. Con estos datos de marea se obtuvo el primer datum vertical, referido al vértice geodésico BARG (Base Argentina), perteneciente a la Red Geodésica Isla Decepción (REGID), que fue utilizado como referencia vertical y horizontal durante varios años. Este datum vertical fue trasladado mediante nivelación geométrica de primer orden geodésico hasta el punto LN00, utilizado como referencia vertical de la red RENID (Red de Nivelación de Isla Decepción) a partir del año 2003 [2, 3, 5, 17].

Diversos autores utilizaron estaciones hidrográficas y correntómetros para estimar la circulación y el transporte en la cuenca oriental del Estrecho de Bransfield [20, 21, 22]. Además de los datos hidrográficos y de corrientes, estos autores instalaron una serie de mareógrafos de fondeo (registradores de fondeo para niveles de marea, tipo Aanderaa) en las islas Low, Rey Jorge y Livingston. En su estudio pusieron mucho énfasis en el carácter de las mareas y de las corrientes, y en la importancia relativa del flujo de las mareas en la hidrodinámica general de la circulación en el estrecho. Determinaron que las mareas en el Estrecho de Bransfield tienen una combinación de frecuencias diurnas y semi-diurnas, donde las componentes principales de marea son O1 (0.0387 cph, ciclos por hora), K1 (0.0418 cph), M2 (0.0805 cph) y S2 (0.0833 cph). Estos autores observaron que las mareas varían de diurnas a semi-diurnas en período quincenal, con rangos máximos de marea entre 1.7 m y 2.1 m. De estas tres estaciones, la que corresponde a nuestra área de estudio es la de Isla Livingston (62°30' S, 60°23' W), con un nivel máximo de variación de 1.98 m; la componente M2 es la más importante, con una amplitud de 0.39 m y un retardo de fase de 277°. Las componentes K1, O1 y S2 presentan amplitudes similares, de alrededor de 0.28 m. En base a los estudios de los niveles medios del mar en las islas Low, Rey Jorge y Livingston [14], se demostró que la propagación de la onda M2 es perpendicular a la dirección del Estrecho de Bransfield, mientras que las ondas diurnas viajan longitudinalmente hacia el suroeste a lo largo de la costa, con dirección NE-SW.

Una descripción detallada de la propagación y amplificación de la marea en el lado norte de la Península Antártica (Estrecho de Gerlache, Estrecho de Bransfield y el noroeste del Mar de Weddell) se da en un estudio realizado por [10].

Tabla 1. Constantes armónicas de las principales constituyentes de marea en las estaciones mareográficas HM, PC y WB.

           HM                    PC                    WB
Const.     Amp. (m)  Fase (G)   Amp. (m)  Fase (G)   Amp. (m)  Fase (G)
M2         0.43      281        0.46      280        0.44      281
S2         0.24      335        0.28      -          0.29      -
O1         0.28      49         0.29      55         0.29      48
K1         0.28      66         0.26      73         0.26      66
Source: Adapted from Dragani et al., 2004.

Basados en cartas cotidales y cartas de iso-amplitud (co-range), estos autores reportaron las amplitudes y desfases de las principales componentes de marea (M2, S2, O1 y K1) en el Estrecho de Bransfield. Para su estudio utilizaron los resultados del análisis armónico de series de mediciones directas del nivel del mar tomadas de [9, 31]. En el trabajo citado [10], las estaciones se encuentran situadas en la Isla Livingston (Estación Media Luna, HM) y en la Isla Decepción, en Caleta Péndulo (Estación Péndulo, PC) y Bahía Balleneros (Estación Balleneros, WB). Los niveles de marea se midieron con un mareógrafo de flotador en la Estación Media Luna (HM) y con un mareómetro o regla de marea (visual tide staff) en las estaciones de Caleta Péndulo (PC) y Bahía Balleneros (WB). Los períodos de registro de los datos de marea fueron muy variables: 38 días en HM, 19 días en PC y 12 días en WB. La Tabla 1 presenta las constantes armónicas de las principales mareas en estas tres estaciones, según [10].

Un modelo regional de alta resolución (1/30° x 1/60°) para la región de la Península Antártica y el Mar de Weddell fue desarrollado por [25] y descrito más recientemente por [39]. A partir de este modelo es posible estimar las principales componentes de marea para esta zona de estudio. El modelo calcula las principales constituyentes armónicas de marea en las islas y en los pasos estrechos alrededor de la Península Antártica. Los resultados de este estudio se muestran en [25, 30].

También se estudiaron los registros de marea obtenidos en la Base Argentina Jubany (actual Base Carlini), en la isla Rey Jorge [27]. Estos autores presentan el análisis armónico de dos series temporales de registros de mareas correspondientes a los períodos comprendidos entre febrero y diciembre de 1996 y entre marzo y diciembre de 1997, con un total de 12 componentes de marea, donde la componente M2 presenta amplitudes de 0.47-0.48 m y retardos de fase de 277° a 278°. El factor de forma de la marea (índice de Courtier) es de 0.8, calculado como:

F = (K1 + O1) / (M2 + S2)    (1)

De acuerdo con este índice, las mareas se definen como mareas mixtas predominantemente semi-diurnas, teniendo todos los días dos pleamares y dos bajamares con desigualdades en altura y tiempo, coincidiendo con lo definido por [8, 27]. La amplitud media de la marea viva, calculada como 2(M2 + S2), es de 1.48 m.
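A modo de ejemplo, el índice puede evaluarse con un cálculo mínimo en Python (esquema ilustrativo nuestro, usando las amplitudes de la estación HM de la Tabla 1):

def courtier(k1, o1, m2, s2):
    # Índice de Courtier, ec. (1): F = (K1 + O1) / (M2 + S2)
    return (k1 + o1) / (m2 + s2)

# Amplitudes (m) de la estación HM, Tabla 1
m2, s2, o1, k1 = 0.43, 0.24, 0.28, 0.28
f = courtier(k1, o1, m2, s2)
# 0.25 < F < 1.50 corresponde a marea mixta, predominantemente semi-diurna
print(round(f, 2))              # ~0.84 para estos valores
print(round(2 * (m2 + s2), 2))  # amplitud media de marea viva, 2(M2 + S2)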




Posteriormente, el Laboratorio de Astronomía, Geodesia y Cartografía de la Universidad de Cádiz (LAGC-UCA) realiza observaciones de marea en las islas Decepción y Livingston (Antártida) desde diciembre de 2007. En la Campaña Antártica 2007-08 se fondearon dos mareógrafos con sensores de presión, obteniéndose registros continuos in situ durante 81 días de observación; se obtuvieron así las primeras series de marea para las estaciones LIVMAR y DECMAR, entre diciembre de 2007 y marzo de 2008, y se calcularon las primeras constituyentes de marea para las islas Decepción y Livingston, con valores de amplitud y desfase para nueve componentes (M2, S2, N2, K2, O1, P1, K1, Mm, MSf) [36, 37, 17]. Los resultados son similares en amplitud a los obtenidos por [10], pero respecto al desfase hay diferencias a considerar en la componente K1. El coeficiente de forma obtenido fue de 0.86 para DECMAR y 0.80 para LIVMAR, que determina el mismo régimen definido anteriormente por [8, 10]. Con estos registros de marea se calcularon los niveles medios del mar respecto a vértices de la red REGID (Red Geodésica de Isla Decepción) y de la red RGAE (Red Geodésica Antártica Española) para las estaciones DECMAR (Isla Decepción) y LIVMAR (Isla Livingston) [37, 17]. En sucesivas campañas se continuaron las observaciones durante el período estival, pero a partir de febrero de 2011 se han obtenido series temporales continuas con más de dos años de observación (798 días) en ambas islas [17].

En este trabajo presentamos los resultados de nuevas observaciones de marea, correspondientes a dos años completos en las estaciones DECMAR y LIVMAR, obtenidas en las campañas antárticas españolas 2010-11, 2011-12 y 2012-13, que constituyen las primeras series mareográficas continuas en la Antártida y cuyos resultados pueden ser válidos para las Islas Shetland del Sur y el Estrecho de Bransfield, tomando como referencia las islas Livingston y Decepción.

2. Área de estudio

El área de estudio corresponde a las islas Livingston y Decepción, que forman parte del archipiélago de las islas Shetland del Sur y del Estrecho de Bransfield. El Estrecho de Bransfield es parte del Océano Glacial Antártico y se encuentra delimitado por el archipiélago de las islas Shetland del Sur, al norte, y la Península Antártica, al sur. El estrecho, también denominado por los argentinos Mar de la Flota, tiene unos 500 km de longitud y unos 120 km de anchura media; es una cuenca marginal activa en donde se ha identificado un eje de extensión con dirección NE-SW entre los paralelos 60º y 63º Sur. Fue bautizado con ese nombre por James Weddell alrededor de 1825, en homenaje a Edward Bransfield, que cartografió las islas Shetland del Sur en 1820. El estrecho está protegido del mar abierto por las islas Smith, Snow y Livingston por el norte y el oeste, y por la Isla Rey Jorge por el norte y el este [19]. La cuenca está ocupada por seis edificios volcánicos alineados aproximadamente según la dirección principal de la propia cuenca, y tiene una actividad volcánica reciente manifestada por los volcanes Bridgeman, Pingüino (Penguin) y Decepción, siendo este último (63º00' S, 60º35' W) el que en la actualidad presenta volcanismo activo y uno de los pocos volcanes activos existentes en la Antártida.

Figura 1. A. Ubicación del área de estudio. B. Ubicación de las estaciones mareográficas en las islas Livingston y Decepción. C. Detalle de Isla Livingston: ubicación de la estación LIVMAR y la BAE "Juan Carlos I". D. Detalle de Isla Decepción: ubicación de la estación DECMAR, la BAE "Gabriel de Castilla" y la Base Antártica Argentina "Decepción" (BARG). Source: LAGC-UCA

En las islas Livingston y Decepción se instalaron dos estaciones mareográficas en puntos cercanos a la costa, denominadas LIVMAR (Caleta Johnsons, Isla Livingston) y DECMAR (Punta Colatinas, Isla Decepción); las estaciones se encuentran próximas a las Bases Antárticas Españolas "Juan Carlos I" y "Gabriel de Castilla", respectivamente. La Isla Decepción es un volcán activo, tiene forma de herradura y un diámetro de unos 14 kilómetros. La isla tiene una bahía interior llamada Puerto Foster, que fue la caldera central del volcán y que, colapsada, fue inundada por el agua de mar; se comunica con el Estrecho de Bransfield a través de un paso angosto y poco profundo denominado Fuelles de Neptuno [29]. La estación DECMAR se encuentra ubicada en Punta Colatinas, a unos 3 km al sur de la BAE Gabriel de Castilla y cerca de los Fuelles de Neptuno (unos 3 km). Livingston es la segunda isla en tamaño del archipiélago de las Shetland del Sur, con unos 70 km de largo y un ancho variable entre los 4 y los 32 kilómetros. La estación LIVMAR se encuentra instalada en Caleta Johnsons, una pequeña cala semi-cerrada y poco profunda situada en el sur de la isla Livingston, cerca de la Base Antártica Española "Juan Carlos I". La Fig. 1 muestra la ubicación de las estaciones mareográficas en el área de estudio.

3. Instrumentación, datos y metodología

3.1. Equipos utilizados para la toma de datos de marea

Los datos fueron obtenidos mediante dos sensores de presión. Se utilizaron como referencia sensores de presión SAIV modelo TD304, con sensores adicionales de temperatura y conductividad, que tienen una precisión de ±0,01% del fondo de escala en la medida de la presión por cada 10 m, es decir, una precisión de 1 mm. Los datos de presión, temperatura y conductividad se registraron en las estaciones por medio de un sensor CTD 204 SAIV SD, que tiene una precisión de ±0,02 ppt en salinidad (para conductividad) y de ±0,01 °C para la temperatura.




Además, se utilizaron sensores AQUAlogger PT520 de temperatura y presión, que tienen una precisión de ±0,05 °C en temperatura y del 0,005% de la escala completa por cada 10 m en presión. Con estos datos se obtuvieron mediciones instantáneas del nivel del mar.

3.2. Fondeo de sensores de presión

El principal problema para la instalación de estaciones mareográficas en la Antártida son las extremas condiciones climatológicas, que tienen como consecuencia el congelamiento de la capa superficial del agua de mar durante el invierno, formando banquisa y placas de hielo que, al moverse, pueden desplazar el sensor y romper los cables y cabos que lo aseguran a la costa, con la consiguiente pérdida del equipo y de los datos. Por esta razón, hemos diseñado un sistema bastante seguro, que protege tanto a los sensores de presión, asegurados en los lastres de fondeo y colocados en el interior de un tubo de hierro galvanizado abierto en uno de sus costados, como a todo el sistema de fondeo, pues el lastre de fondeo fue asegurado a las rocas de la orilla mediante anclajes con cables y cabos de gran resistencia a la tracción, para evitar que se rompan y que el sensor pueda perderse en el fondo (Fig. 2, 3). De esta manera se minimizan los riesgos de desplazamiento, se obtienen mediciones directas de los niveles de marea con la mayor seguridad y precisión posibles y se evitan los saltos en la serie temporal provocados por estos desplazamientos. Los sensores de presión fueron fondeados a unas profundidades aproximadas de 7.5 m y 3.5 m en LIVMAR y DECMAR, respectivamente, y en ambos casos cercanos a la costa, a unos 30 metros de la línea de playa, con la finalidad de minimizar los errores en la posterior referencia al respectivo benchmark de referencia de mareógrafo (TGBM, Tide Gauge Benchmark), ubicado en tierra.

Figura 2. Sistema de fondeo y anclaje del mareógrafo a la costa. Source: LAGC-UCA

Figura 3. Fondeo del sensor de presión principal (PTG) en la estación mareográfica DECMAR. Source: LAGC-UCA

Para el fondeo de los mareógrafos se utilizó como plataforma de trabajo una embarcación neumática Zodiac Pro 500, quedando los fondeos ubicados a unos 30 metros de la línea de playa y entre los 3 y los 9 metros de profundidad, dependiendo de la configuración del fondo en los sitios de fondeo. La Fig. 6 muestra la maniobra de fondeo de los mareógrafos.

3.3. Georreferenciación de estaciones

Para el control vertical, los datos de marea fueron referidos altimétricamente a un TGBM, que a su vez está enlazado con otras marcas auxiliares de referencia de nivel (TGAR, Tide Gauge Auxiliary Reference). Los valores locales de profundidad promedio en cada una de las estaciones fueron de 3.147 m en la isla Decepción y 7.475 m en Livingston. Para obtener los valores medios de profundidad de fondeo de los sensores, las medidas del nivel del mar obtenidas con el sensor principal (PTG) se vincularon altimétricamente al TGBM fijo en tierra mediante un método indirecto: se instaló, cerca de la zona donde se encuentra fondeado el PTG, otro sensor de presión colocado en una regla de nivelación (TSG, Tide Staff Gauge), de la cual se conoce su cero o nivel de referencia vertical y cuya parte superior sobresale del agua para poder realizar las lecturas mediante un nivel óptico. Tomando lecturas simultáneas de las dos reglas de nivelación, TSG y TGBM, mediante nivelación geométrica, se obtiene el desnivel del TSG con respecto al TGBM, por lo que las alturas de marea quedan referidas al TGBM; estas se comparan luego con las lecturas de marea obtenidas por el PTG, cuya profundidad hemos obtenido mediante un ajuste lineal, de modo que todas las lecturas de marea quedan referidas al TGBM [33, 34]. Con este método, lo que se hace es comparar las lecturas simultáneas del nivel de marea obtenidas con el sensor principal PTG contra las del sensor de control móvil instalado en la regla de nivelación TSG, que ha sido referenciada mediante nivelación geométrica al TGBM, que se encuentra en tierra. Este procedimiento de referenciación altimétrica fue realizado en ambas estaciones (LIVMAR y DECMAR). Por el método de nivelación geométrica también se realizaron los enlaces entre el TGBM y sus respectivas marcas auxiliares de referencia (TGAR); para más detalles, ver Fig. 4-6. Los TGBM y TGAR fueron monumentados mediante un clavo, situados sobre una roca de fácil visualización que se encuentre por encima del nivel de la pleamar y muy próximos al sitio de fondeo de los mareógrafos [35, 17]. Cada TGBM se encuentra vinculado al menos a dos marcas auxiliares TGAR, que son las marcas verticales auxiliares de seguridad para la recuperación de la referencia principal TGBM en caso de una posible destrucción o pérdida. Toda la vinculación altimétrica se ha realizado mediante nivelación geométrica de primer orden geodésico con la ayuda de un nivel óptico Wild (Leica) modelo NA2, cuya precisión es de 0.7 mm por cada kilómetro de doble nivelación. Los TGBM son posteriormente vinculados a puntos de las redes geodésicas de la Antártida, tanto en vertical como en horizontal [37, 17].




Figura 6. Referenciación altimétrica de las medidas de marea. Source: LAGC-UCA

Figura 4. Diagrama de enlaces de nivelación en las estaciones mareográficas, entre los benchmarks de referencia de mareógrafos (TGBM), las marcas auxiliares de referencia (TGAR) y los sensores utilizados en la toma de datos. Source: Adapted from Vidal et al., 2012.

Para obtener la profundidad de fondeo respecto al TGBM en las estaciones LIVMAR y DECMAR, se realizó un ajuste lineal entre las alturas instantáneas del nivel del mar medidas por el sensor principal PTG, definidas como D, y las alturas medidas con el sensor de control TSG, definidas como H, tomadas simultáneamente y sincronizadas temporalmente. Las mediciones de control entre PTG y TSG se realizaron en días escogidos, sin viento y sin olas (mar en calma), tomando lecturas simultáneas durante 1 hora con un intervalo de muestreo de 10 minutos. Para la estación DECMAR se realizaron medidas durante 7 días, obteniéndose 36 medidas distribuidas a lo largo de todo el período de registro de la serie. El mismo procedimiento se siguió para LIVMAR, con controles durante dos días, al inicio y al final del período de registro de marea, con un total de 13 medidas.
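Como ilustración del ajuste descrito, un esquema mínimo en Python (supuesto nuestro, no el código original del LAGC-UCA) del ajuste lineal por mínimos cuadrados entre lecturas simultáneas:

import numpy as np

# Lecturas simultáneas hipotéticas (m): D = profundidad medida por el PTG,
# H = altura medida en la regla TSG, ya referida al TGBM por nivelación.
D = np.array([7.41, 7.52, 7.63, 7.48, 7.55, 7.60])
H = np.array([0.93, 1.04, 1.15, 1.00, 1.07, 1.12])

# Ajuste lineal H = a*D + b; con pendiente cercana a 1, el término b estima
# el desnivel constante entre el cero del sensor y el TGBM.
a, b = np.polyfit(D, H, 1)
alturas_referidas = a * D + b   # alturas de marea referidas al TGBM
print(a, b)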

Figura 5. Ajuste lineal entre las medidas instantáneas del nivel del mar en LIVMAR y DECMAR. Source: LAGC-UCA

El ajuste lineal entre la presión del sensor principal PTG, transformada en profundidad y corregida por presión atmosférica (D), y los datos de altura (H) tomados por el sensor de control TSG, instalado en la regla de marea y corregidos por la lectura de nivelación tomada al TGBM, nos proporciona un valor medio del nivel de marea para cada instante de lectura, en base al cual se corrigen altimétricamente los datos de la serie. La Fig. 5 muestra los resultados del ajuste lineal realizado por este método en las estaciones DECMAR y LIVMAR. Este método de control altimétrico de los registros de marea es muy difícil de aplicar de forma continua, debido a que solo puede instalarse y utilizarse durante la época estival y a que requiere personal técnico para su ejecución; además, el mareógrafo de control TSG debe retirarse durante el invierno para evitar su daño o destrucción por la banquisa o las placas de hielo marino. La Fig. 6 muestra la instalación y utilización de las reglas de control.

El período de registro de marea, tanto en LIVMAR como en DECMAR, está comprendido entre el 3 de febrero de 2011 y el 11 de abril de 2013. El intervalo de muestreo de los datos fue de 10, 20 y 60 minutos, aunque para el cálculo y el análisis armónico todas las series fueron normalizadas a 60 minutos (1 dato/hora), obteniendo las series temporales respectivas en cada estación, que fueron sometidas a un análisis armónico utilizando una aplicación en Matlab desarrollada de acuerdo a [12]. Hay que recalcar que esta es la primera vez que se tienen series de marea en la Antártida con más de un año de registro; hasta la fecha, los registros de marea se limitaban a la temporada de verano, debido a las extremas condiciones climatológicas durante el resto del año.
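La normalización a 1 dato/hora puede esbozarse, por ejemplo, promediando dentro de cada hora (el criterio exacto del trabajo original no se detalla; este fragmento es solo un supuesto ilustrativo):

import pandas as pd

# Serie hipotética con muestreo de 10 minutos (alturas en m)
idx = pd.date_range("2011-02-03", periods=12, freq="10min")
serie = pd.Series([7.41, 7.43, 7.44, 7.47, 7.50, 7.52,
                   7.55, 7.56, 7.58, 7.59, 7.61, 7.62], index=idx)

# Normalización a 60 minutos (1 dato/hora) por promedio horario
horaria = serie.resample("60min").mean()
print(horaria)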

Figura 7. Series de marea obtenidas en LIVMAR y DECMAR. Source: LAGC-UCA




Figura 8. Serie de presión atmosférica en LIVMAR. Source: LAGC-UCA

Durante todo el período del registro de datos de marea también se obtuvieron datos meteorológicos de presión atmosférica y temperatura del aire, tomados por la Agencia Estatal de Meteorología (AEMET) y registrados en las estaciones de Gabriel de Castilla (Isla Decepción) y Juan Carlos I (Isla Livingston). La presión atmosférica en la estación de Isla Livingston osciló entre un mínimo de 943,35 mb el 22 de mayo de 2012 y un máximo de 1025,59 mb el 9 de abril de 2013, con un promedio de 988,27 mb. Para la presión atmosférica se tomó como serie de referencia la obtenida en Livingston, al ser una serie muy completa y estable, puesto que el registro de la estación Gabriel de Castilla tenía muchos datos faltantes. La Fig. 8 presenta la serie de presión atmosférica obtenida en la estación LIVMAR.

Para convertir la presión hidrostática a una altura equivalente del nivel del mar usamos:

h = (P - Pa) / (ρ g)    (2)

donde P es la presión registrada por el sensor del mareógrafo, Pa es la presión atmosférica de referencia (un valor constante, 990,8 mb), g es la aceleración de la gravedad, cuyo valor medio en Isla Decepción es 9,822956 m/s2, y ρ es la densidad del agua en el área de estudio, cuyo valor medio es de 1025 kg/m3, calculado utilizando los registros de temperaturas y salinidades para un valor medio de la zona [33, 11]. Al valor obtenido se le denomina nivel del mar ajustado (NMA).

El sensor de presión mide la presión total y, por lo tanto, no elimina los efectos estáticos de las variaciones de la presión atmosférica; una variación de la presión atmosférica de referencia es equivalente a mover el nivel de referencia del sensor [24, 1]. Para el presente trabajo se tomaron registros de presión atmosférica simultáneos a las medidas de marea, obteniéndose unos valores medios de 988,39 mb en Decepción y 987,27 mb en Livingston, siendo el valor medio para la zona de 988,27 mb, que no difiere significativamente del valor utilizado en [37, 17], donde se emplea como presión atmosférica de referencia un valor constante de 990,8 mb (hPa), correspondiente a la presión atmosférica media en la región de la Isla Rey Jorge y el entorno cercano de la estación Arctowski durante el período 1978-1989 [26]. De esta manera, el nivel del mar efectivo puede obtenerse para cualquier instante y corregirse mediante la adición o sustracción de los cambios de la presión atmosférica respecto a los valores de referencia; a frecuencias suficientemente bajas, la corrección es de aproximadamente un centímetro de descenso (-1,0 cm) en el nivel de la superficie del mar por cada milibar de aumento (+1 mb) en la presión atmosférica [7, 15, 24, 40].
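Un esquema mínimo en Python de la ec. (2) y de la corrección barométrica descrita (los valores numéricos son los citados en el texto; la función y el ejemplo son supuestos ilustrativos nuestros):

RHO = 1025.0     # densidad media del agua de mar (kg/m3), según el texto
G = 9.822956     # gravedad media en Isla Decepción (m/s2), según el texto

def nivel_mar(p_total_mb, pa_mb=990.8):
    # Ec. (2): h = (P - Pa) / (rho * g); 1 mb = 100 Pa
    return (p_total_mb - pa_mb) * 100.0 / (RHO * G)

def barometro_invertido(h_m, pa_mb, pa_ref_mb=988.27):
    # Efecto estático: ~ -1 cm de nivel por cada +1 mb de presión; para
    # eliminarlo se suma 1 cm por cada mb por encima de la referencia.
    return h_m + 0.01 * (pa_mb - pa_ref_mb)

print(round(nivel_mar(1760.0), 3))  # ejemplo hipotético: ~7.6 m de columna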

Se ha realizado el análisis armónico por mínimos cuadrados de los registros de marea, acorde con [12, 13, 16]. Las series de marea analizadas tienen un total de 19143 datos horarios, lo que supone un registro de más de dos años (798 días) para ambas estaciones. Sin embargo, los registros corresponden a varias series tomadas en la misma estación (seis series para DECMAR y tres para LIVMAR), que fueron unidas en una sola y corregidas por saltos, provocados principalmente por las diferentes profundidades a las que se instalaron los sensores en cada campaña. Para corregir los saltos provocados durante el cambio de equipos en un nuevo fondeo, se ajustan las dos series aplicando el algoritmo de corrección del escalonamiento por elevación de la serie. También se utilizó como referencia un sensor de presión adicional fondeado en las proximidades, cuyos registros se solapan con los de las series anterior y posterior al cambio de equipo. Una vez corregidos los saltos, hemos reconstruido las series completas de marea en ambas estaciones. A las alturas de marea corregidas aplicamos el análisis armónico para evaluar las amplitudes y el desfase de las componentes. Las amplitudes de marea y el desfase han sido calculados para la hora UTC correspondiente al huso horario de la estación, aplicando la corrección nodal, con un intervalo de muestreo de una hora y para un intervalo de confianza del 95%.
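El ajuste armónico por mínimos cuadrados puede esbozarse como sigue (esquema ilustrativo nuestro, no la aplicación Matlab original basada en [12]; se usan solo cuatro constituyentes con las frecuencias en ciclos/hora citadas en el texto):

import numpy as np

FREQ_CPH = {"M2": 0.08051, "S2": 0.08333, "K1": 0.04178, "O1": 0.03873}

def ajuste_armonico(t_horas, h):
    # Ajusta h(t) = media + sum_i [a_i cos(w_i t) + b_i sin(w_i t)]
    # por mínimos cuadrados y devuelve (amplitud, fase) por constituyente.
    cols = [np.ones_like(t_horas)]
    for f in FREQ_CPH.values():
        w = 2 * np.pi * f
        cols += [np.cos(w * t_horas), np.sin(w * t_horas)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, h, rcond=None)
    resultados = {}
    for i, nombre in enumerate(FREQ_CPH):
        a, b = x[1 + 2 * i], x[2 + 2 * i]
        amplitud = np.hypot(a, b)
        fase = np.degrees(np.arctan2(b, a)) % 360.0
        resultados[nombre] = (amplitud, fase)
    return resultados

# Uso (hipotético): t = np.arange(19143.0); ajuste_armonico(t, serie_horaria)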

4. Resultados y discusión

Los resultados obtenidos en amplitud de las principales componentes de marea en ambas estaciones se muestran en la Fig. 9. Se han realizado cuatro análisis para cada estación: tres de ellos para series de un año y el cuarto para la serie completa.

Figura 9. Amplitud de las principales constituyentes de marea en LIVMAR y DECMAR. Source: LAGC-UCA




Figura 10. Desfase de las principales constituyentes de marea en LIVMAR y DECMAR. Source: LAGC-UCA

En todos ellos se observa bastante uniformidad en los resultados obtenidos, lo que da una buena idea de su validez. En cuanto al desfase, los resultados también son similares en las principales constituyentes y se muestran en la Fig. 10. En las Tablas 2A y 2B se muestran los resultados del análisis realizado a la serie completa, habiéndose obtenido para ambas estaciones una condición de matriz de 0.82 y un ajuste de 0.993.

Tabla 2A. Estación LIVMAR. Principales constituyentes de marea.

Constante   Frecuencia (cph)   Amplitud (m)   Desfase (grados)
M2          0.08051            0.39           290.1
K1          0.04178            0.27           73.5
O1          0.03873            0.26           54.9
S2          0.08333            0.20           351.2
P1          0.04155            0.09           73.1
Q1          0.03722            0.06           41.8
K2          0.08356            0.06           354.6
N2          0.07900            0.05           253.8
MF          0.00305            0.02           195.5
NO1         0.04027            0.02           62.3
MM          0.00151            0.02           128.5
L2          0.08202            0.01           308.9
SSA         0.00023            0.01           133.7
J1          0.04329            0.01           71.9
T2          0.08322            0.01           335.4
RHO1        0.03742            0.01           49.4
SIG1        0.03591            0.01           37.9
2Q1         0.03571            0.01           29.2
NU2         0.07920            0.01           250.6

Longitud serie (días): 797.63. Datos observados: 19143. Condición de matriz: 0.82. Ajuste: 0.993. Courtier: 0.91.
Source: LAGC-UCA

Tabla 2B. Estación DECMAR. Principales constituyentes de marea.

Constante   Frecuencia (cph)   Amplitud (m)   Desfase (grados)
M2          0.08051            0.40           280.5
K1          0.04178            0.28           68.9
O1          0.03873            0.27           52.5
S2          0.08333            0.21           341.2
P1          0.04155            0.09           68.5
K2          0.08356            0.06           338.5
Q1          0.03722            0.06           40.3
N2          0.07900            0.05           245.1
SSA         0.00023            0.04           324.9
MF          0.00305            0.02           190.4
NO1         0.04027            0.02           51.7
L2          0.08202            0.01           300.2
MM          0.00151            0.01           110.3
T2          0.08322            0.01           348.6
SIG1        0.03591            0.01           37.0
RHO1        0.03742            0.01           39.6
J1          0.04329            0.01           68.5
NU2         0.07920            0.01           243.6
2Q1         0.03571            0.01           25.0

Longitud serie (días): 797.63. Datos observados: 19143. Condición de matriz: 0.82. Ajuste: 0.993. Courtier: 0.90.
Source: LAGC-UCA

En las Tablas 2A y 2B presentamos las diecinueve constituyentes de marea más importantes, cuya amplitud es superior a un centímetro (M2, K1, O1, S2, P1, Q1, K2, N2, NO1, MF, MM, L2, SSA, J1, T2, RHO1, SIG1, 2Q1, NU2) y cuyo aporte de energía es del 93% del total de la onda para ambas estaciones. En todos los análisis armónicos realizados, la mayor cantidad de energía de la onda de marea (85%) es aportada por ocho componentes: cuatro semi-diurnas (M2, S2, K2, N2) y cuatro diurnas (K1, O1, P1, Q1), con un aporte similar para cada grupo, de más o menos un 42,5%.

En lo referente a las amplitudes de las series, los resultados muestran que los niveles máximo y mínimo registrados son de 4.986 m y 2.224 m, con un rango de 2.762 m, para DECMAR, y de 8.334 m y 6.169 m, con un rango de 2.165 m, para LIVMAR. El factor de forma de marea o índice de Courtier obtenido es de 0.91 y 0.90 para LIVMAR y DECMAR, respectivamente; ha sido calculado de acuerdo a la ec. (1) y determina que en el área de estudio las mareas tienen un régimen mixto, con una componente predominantemente semi-diurna (0.25 < F < 1.50), teniendo todos los días dos pleamares y dos bajamares, como habían definido [8, 10]. En general, los resultados en amplitud, desfase y régimen de marea no difieren significativamente de los obtenidos por [30, 28, 20, 14, 27, 22, 31, 25, 9, 10, 37, 17]; sin embargo, se observan variaciones significativas en la constituyente K1 con respecto a los valores dados por [10, 37]. La Fig. 11 muestra el registro de mareas durante un mes y en ella se aprecia su régimen mixto y semi-diurno.
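Los rangos citados se obtienen directamente como diferencia entre los niveles extremos registrados; un cálculo mínimo de comprobación en Python:

# Niveles extremos registrados sobre el sensor (m), según el texto
livmar_max, livmar_min = 8.334, 6.169
decmar_max, decmar_min = 4.986, 2.224
print(round(livmar_max - livmar_min, 3))   # 2.165 m (LIVMAR)
print(round(decmar_max - decmar_min, 3))   # 2.762 m (DECMAR)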




Figura 11. Serie de marea en LIVMAR (enero/2012) y DECMAR (julio/2011).

Al contar con una serie de más de dos años de observaciones, con el análisis armónico hemos obtenido setenta constituyentes de marea, entre las cuales hay constituyentes de frecuencia quincenal, mensual, semi-anual y anual. El ajuste entre los datos reales y los datos predichos es del 99.3%, un parámetro excelente y muy representativo para realizar predicciones en base a los datos obtenidos y utilizados en el análisis.

5. Conclusiones

La Antártida es una región donde los datos de mareas son escasos; por este motivo, las series temporales de más de un año son datos muy valiosos para estudios de la marea en la zona. En este trabajo presentamos un estudio de los registros de mareas en Decepción y Livingston correspondientes a 798 días de observación continua, lo que ha permitido obtener las constituyentes de marea de corto período (diurno y semi-diurno) y de largo período (quincenal, mensual, semi-anual y anual). Estas series corresponden a los datos obtenidos en las estaciones LIVMAR y DECMAR en las campañas antárticas 2010-11, 2011-12 y 2012-13, en las que además se tomaron datos simultáneos de presión atmosférica para las mismas estaciones.

Para la obtención de las amplitudes y los retardos de fase de las diferentes constituyentes hemos utilizado el análisis armónico por mínimos cuadrados, habiendo obtenido 70 armónicos, de los cuales 19 constituyentes son los más representativos, con amplitud superior a 1 cm y un aporte del 93% de la energía de la onda de marea. Los resultados muestran que la marea en ambas estaciones tiene un régimen mixto con un comportamiento semi-diurno.

Una conclusión importante es la falta de registros de marea referenciados altimétricamente a puntos de referencia vertical permanentes que puedan utilizarse a partir de la fecha como benchmarks de referencia de mareógrafos (TGBM). Uno de los objetivos fue dejar monumentados los TGBM (Tide Gauge Benchmark) y las marcas auxiliares de referencia de nivel (TGAR, Tide Gauge Auxiliary Reference), que quedarán como datums fundamentales para la referencia vertical y horizontal de las estaciones mareográficas LIVMAR y DECMAR, y que utilizaremos posteriormente para la determinación del nivel medio del mar en la zona. El nivel medio del mar (NMM) es el plano de referencia vertical fundamental, de obligada utilización en aplicaciones geodésicas, geofísicas y oceanográficas, tanto técnicas como científicas. Este trabajo es uno de los primeros de una serie de estudios de marea utilizando series de datos de largo período, que en el futuro podrá vincularse a la referenciación altimétrica de la Red Geodésica de Isla Decepción (REGID) y la Red Geodésica de la Antártida Española (RGAE).

Agradecimientos

Este trabajo ha sido posible gracias a la financiación de los siguientes proyectos, dependientes del Ministerio Español de Ciencia e Innovación, a través del programa de Investigación Antártica y Recursos Naturales:

- SEGAVDEC-GEODESIA (CGL2007-28768-E/ANT), “Seguimiento de la actividad volcánica de la Isla Decepción (Antártida)”.
- SERTEMANT-GEODESIA (CTM2008-03113-E/ANT), “Continuidad y análisis de series temporales geodésicas en la Antártida”.
- GEOTINANT (CTM2009-07251/ANT), “Investigaciones geodésicas y geotérmicas, análisis de series temporales e innovación volcánica en la Antártida (Islas Shetland del Sur y Península Antártica)”.

Los autores agradecen a la tripulación del Buque de Investigación Oceanográfica “BIO Las Palmas” y al personal de las Bases Antárticas Españolas “Gabriel de Castilla” y “Juan Carlos I” por su apoyo en los trabajos de campo para la instalación de equipos y la toma de datos en las diferentes campañas. Nuestro agradecimiento a los profesores Oscar Álvarez, de la Universidad de Cádiz, por sus comentarios y por proporcionar el software UCAD2D, y Francisco García, de la Universidad del Magdalena (Colombia), por proporcionar el software T-Tide, que nos han servido para comparar y validar nuestros resultados. Al Dr. Andrés Jiménez, de la Universidad de Cádiz, por su ayuda y sugerencias en el tratamiento de las series temporales.

Referencias

[1] Álvarez, O., Tejedor, B., Tejedor, L. and Kagan, B.A., A note on sea-breeze-induced seasonal variability in the K1 tidal constants in Cádiz Bay, Spain. Estuarine, Coastal and Shelf Science, 58, pp. 805-812, 2003. DOI: 10.1016/S0272-7714(03)00186-0
[2] Berrocoso, M., Jiménez-Teja, Y. and Páez, R., Determination of a physical reference frame for Deception Island. Geophysical Research Abstracts, 2006.
[3] Berrocoso, M., Ramírez, M.E., Fernández-Ros, A., Torrecillas, C., Enríquez-Salamanca, J.M., Pérez-Peña, A., Páez, R., Jiménez-Teja, Y., González-Fuentes, M.J., Sánchez-Alzola, A., García-García, A., Tárraga, M. y García-García, F., Diseño, desarrollo, objetivos y estado actual de las redes geodésicas establecidas en la Antártida durante las campañas antárticas españolas. VII Simposio de Estudios Polares, Granada, Spain, 2006. [En línea]. Disponible en: https://www.uam.es/otros/cn-scar/pdf/simposio_polar.pdf
[4] Berrocoso, M., Fernández-Ros, A., Torrecillas, C., Enríquez-Salamanca, J.M., Ramírez, M.E., Pérez-Peña, A., González-Fuentes, M.J., Páez, R., Jiménez-Teja, Y., García-García, A., Tárraga, M. and García-García, F., Geodetic research on Deception Island, Antarctica. In: Fütterer, D.K., Damaske, D., Kleinschmidt, G., Miller, H. and Tessensohn, F., Eds., Antarctica: Contributions to Global Earth Sciences. Berlin: Springer, [Online]. pp. 391-396, 2006. Available at: http://link.springer.com/content/pdf/10.1007/3-540-32934-X_49.pdf
[5] Berrocoso, M., Salamanca, J.M., Ramírez, M.E., Fernández-Ros, A. and Jigena, B., Determination of a local geoid for Deception Island. In: Antarctica: A Keystone in a Changing World, [Online]. Proceedings of the 10th ISAES X, edited by A.K. Cooper and C.R. Raymond et al., USGS Open-File Report 2007-1047, Extended Abstract 123, 5 P., 2007. Available at: http://pubs.usgs.gov/of/2007/1047/ea/of20071047ea123.pdf
[6] Berrocoso, M., Fernández-Ros, A., Ramírez, M.E., Enríquez-Salamanca, J.M., Torrecillas, C., Pérez-Peña, A., Páez, R., García-García, A., Jiménez-Teja, Y., García-García, F., Soto, R., Gárate, J., Martín-Dávila, J., Sánchez-Alzola, A., de Gil, A., Fernández-Prada, J.A. and Jigena, B., Geodetic research on Deception Island and its environment (South Shetland Islands, Bransfield Sea and Antarctic Peninsula) during Spanish Antarctic campaigns (1987-2007). In: Capra, A. and Dietrich, R., Eds., Geodetic and Geophysical Observations in Antarctica. Berlin: Springer-Verlag, pp. 97-124, 2008.
[7] Chelton, D.B. and Enfield, D.B., Ocean signals in tide gauge records. Journal of Geophysical Research, [Online]. 91, pp. 9081-9098, 1986. Available at: http://ir.library.oregonstate.edu/xmlui/bitstream/handle/1957/16001/Chelton_and_Enfield_JGR_1986.pdf?sequence=1
[8] Defant, A., Physical Oceanography, 2. New York: Pergamon Press, 598 P., 1961.
[9] D’Onofrio, E.E., Dragani, W.C., Speroni, J.O. and Fiore, M.E., Propagation and amplification of tide at the north-eastern coast of the Antarctic Peninsula. An observational study. Polar Geoscience, 16, pp. 53-60, 2003.
[10] Dragani, W.C., Drabble, M.R., D’Onofrio, E.E. and Mazio, C.A., Propagation and amplification of tide at Bransfield and Gerlache straits, northwestern Antarctic Peninsula. An observational study. Polar Geosciences, 17, pp. 156-170, 2004.
[11] Fofonoff, N.P. and Millard, R.C., Algorithms for computation of fundamental properties of seawater. UNESCO Technical Papers in Marine Science, [Online]. 44, 53 P., 1983. Available at: http://unesdoc.unesco.org/images/0005/000598/059832eb.pdf
[12] Foreman, M.G.G., Manual for tidal heights analysis and prediction. Sidney, BC: Institute of Ocean Sciences, Pacific Marine Science Report 77-10, [Online]. 97 P., 1977. Available at: ftp://canuck.seos.uvic.ca/docs/MFTides/heights.pdf
[13] García, F., Palacio, C. and García, U., Tide constituents at Santa Marta Bay (Colombia). DYNA, [Online]. 78 (167), pp. 142-150, 2011. Available at: http://www.revistas.unal.edu.co/index.php/dyna/article/view/25777/26199
[14] García, M.A., Oceanografía dinámica de un mar antártico: El Estrecho de Bransfield. Investigación Española en la Antártida, Seminario de la Universidad Internacional Menéndez Pelayo, Santander, 19-23 julio, 1993. Madrid: Centro de Publicaciones, Ministerio de Educación y Ciencia, pp. 193-208, 1994.
[15] Gill, A.E., Atmosphere-Ocean Dynamics. International Geophysics, 30, [Online]. San Diego, California: Academic Press, 662 P., 1982. Available at: http://books.google.fr/books?id=1WLNX_lfRp8C&printsec=frontcover&hl=fr&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
[16] Godin, G., The Analysis of Tides. Toronto: University of Toronto Press, 264 P., 1972.
[17] Jigena, B., Vidal, J. and Berrocoso, M., Determination of the mean sea level at Deception and Livingston islands, Antarctica. Antarctic Science, [Online]. 27 (01), pp. 101-102, 2015. DOI: 10.1017/S0954102014000595
[18] King, M.A. and Padman, L., Accuracy assessment of ocean tide models around Antarctica. Geophysical Research Letters, 32, L23608, 2005. DOI: 10.1029/2005GL023901
[19] Lenn, Y.D., Chereskin, T.K. and Glatts, R.C., Seasonal to tidal variability in currents, stratification and acoustic backscatter in an Antarctic ecosystem at Deception Island. Deep-Sea Research II, 50, pp. 1665-1683, 2003. DOI: 10.1016/S0967-0645(03)00085-7
[20] López, O., García, M.A. y Sánchez-Arcilla, A.S., Marea y circulación en el Estrecho de Bransfield durante el verano austral 92/93. In: Cacho, J. and Serrat, D., Eds., Actas del V Simposio Español de Estudios Antárticos. Madrid: Comisión Interministerial de Ciencia y Tecnología, pp. 389-401, 1993.
[21] López, O., García, M.A. and Sánchez-Arcilla, A.S., Tidal and residual currents in the Bransfield Strait, Antarctica. Annales Geophysicae, [Online]. 12, pp. 887-902, 1994. Available at: http://link.springer.com/article/10.1007/s00585-994-0887-5
[22] López, O., García, M.A., Gomis, D., Rojas, P., Sospedra, J. and Sánchez-Arcilla, A.S., Hydrographic and hydrodynamic characteristics of the eastern basin of the Bransfield Strait (Antarctica). Deep-Sea Research I, 46, pp. 1755-1778, 1999. DOI: 10.1016/S0967-0637(99)00017-5
[23] Meredith, M.P., Brandon, A., Wallace, M.I., Clarke, A., Leng, M.J., Renfrew, M.A., van Lipzig, N.P.M. and King, J.C., Variability in the freshwater balance of northern Marguerite Bay, Antarctic Peninsula: Results from δ18O. Deep-Sea Research II, 55, pp. 309-322, 2008. DOI: 10.1016/j.dsr2.2007.11.005
[24] Muñoz-Pérez, J.J. and Abarca-Molina, J.M., Effect of wind and atmospheric pressure variations on the mean sea level of salt marshes and estuaries. Revista de Obras Públicas, [Online]. 156 (3505), ISSN 0034-8619, 2009. Available at: http://ropdigital.ciccp.es/pdf/publico/2009/2009_diciembre_3505_02.pdf
[25] Padman, L., Fricker, H.A., Coleman, R., Howard, S. and Erofeeva, L., A new tide model for the Antarctic ice shelves and seas. Annals of Glaciology, 34, pp. 247-254, 2002. DOI: 10.3189/172756402781817752
[26] Rakusa-Suszczewski, S., Mietus, M. and Piasecki, J., Pogoda i klimat. In: Rakusa-Suszczewski, S., Eds., Zatoka Admiralicji Antarktyki. Dziekanów Leśny, Poland: Institut Ekologii PAN, pp. 41-50, 1992.
[27] Schöne, T., Pohl, M., Zakrajsek, A.F. and Schenke, H.W., Tide gauge measurements, a contribution for the long-term monitoring of the sea level. In: Wiencke, Ferreyra, Arntz, Rinaldi, Eds., The Potter Cove Coastal Ecosystem, Antarctica. Berichte zur Polarforschung, [Online]. 299, pp. 12-14, 1998. Available at: http://epic.awi.de/4288/
[28] SCAR (Scientific Committee for Antarctic Research), Antarctic digital database on CD-ROM. Cambridge: SCAR, 1993.
[29] Smith Jr., K.L., Baldwin, R.J., Glatts, R.C., Chereskin, T.K., Ruhl, H. and Lagun, V., Weather, ice, and snow conditions at Deception Island, Antarctica: Long time-series photographic monitoring. Deep-Sea Research II, 50, pp. 1649-1664, 2003. DOI: 10.1016/S0967-0645(03)00084-5
[30] Smithson, M.J., Pelagic Tidal Constants 3. IAPSO Publication Scientifique 35. Birkenhead: IAPSO, IUGG, 191 P., 1992.
[31] Speroni, J.O., Dragani, W., D’Onofrio, E.E., Drabble, M.R. y Mazio, C.A., Estudio de la marea en el borde de la Barrera Larsen, Mar de Weddell Noroccidental, Antártida. GeoActa, 25, pp. 1-11, 2000.
[32] Thompson, R.O.R.Y., Low pass filter to suppress inertial and tidal frequencies. Journal of Physical Oceanography, 13, pp. 1077-1083, 1983. DOI: 10.1175/1520-0485(1983)013<1077:LPFTSI>2.0.CO;2
[33] UNESCO, Tenth report of the joint panel on oceanographic tables and standards. Technical Papers in Marine Science, [Online]. 36, 25 P., 1981. Available at: http://unesdoc.unesco.org/images/0004/000461/046148eb.pdf
[34] UNESCO, Intergovernmental Oceanographic Commission workshop on sea level measurements in hostile conditions, 28-31 March 1988, Bidston, UK. IOC Workshop Report, [Online]. 54, 81 P., 1988. Available at: http://www.jodc.go.jp/info/ioc_doc/Workshop/081610eo.pdf
[35] UNESCO, Intergovernmental Oceanographic Commission manual on sea level measurement and interpretation. II. Emerging technologies. UNESCO Manuals and Guides, [Online]. 14, 77 P., 1994. Available at: http://www.psmsl.org/train_and_info/training/manuals/ioc_14ii.pdf
[36] Vidal, M., Berrocoso, M. and Jigena, B., Hydrodynamic modeling of Port Foster, Deception Island (Antarctica). In: Nonlinear and Complex Dynamics: Applications in Physical, Biological, and Financial Systems, [Online]. pp. 193-203, 2011. Available at: http://springer.libdl.ir/chapter/10.1007/978-1-4614-0231-2_16
[37] Vidal, J., Berrocoso, M. and Fernández-Ros, A., Study of tides and sea levels at Deception and Livingston islands, Antarctica. Antarctic Science, 24 (2), pp. 193-201, 2011. DOI: 10.1017/S095410201100068X
[38] Walters, R.A. and Heston, C., Removing tidal period variations from time-series data using low-pass digital filters. Journal of Physical Oceanography, 12, pp. 112-115, 1982. DOI: 10.1175/1520-0485(1982)012<0112:RTPVFT>2.0.CO;2
[39] Willmott, V., Domack, E., Padman, L. and Canals, M., Glaciomarine sediment drifts from Gerlache Strait, Antarctic Peninsula. In: Hambry, M., Christoffersen, P., Glasser, N.F. and Hubbard, B., Eds., Glacial Sedimentary Processes and Products. IAS Special Publication. New York: Blackwells, pp. 67-84, 2007. DOI: 10.1002/9781444304435.ch6
[40] Wunsch, C. and Stammer, C., Atmospheric loading and the “inverted barometer” effect. Reviews of Geophysics, 35, pp. 79-107, 1997. DOI: 10.1029/96RG03037

B. Jigena-Antelo, es Ing. Hidrógrafo de ROA-IHM, España, en 1995; Ing. en Organización Industrial en 2006; Lic. en Náutica y Transporte Marítimo en 2011, y Diploma de Estudios Avanzados (DEA) en Ingeniería Geodésica, Cartográfica y Fotogrametría en 2008, todos por la Universidad de Cádiz, España; y MSc. en Ingeniería del Agua por la Universidad de Sevilla, España, en 2009. Desde 1987 ha trabajado en cargos técnicos, directivos y gerenciales en diferentes empresas y organismos técnicos, en áreas de geodesia, topografía, sistemas de información geográfica y teledetección, navegación e hidrología fluvial. Es docente desde 1988 y, desde 2006, profesor investigador en el Departamento de Física Aplicada, investigador del Grupo RNM-314 Geodesia y Geofísica de Cádiz y del Laboratorio de Astronomía, Geodesia y Geofísica, e investigador antártico de la Universidad de Cádiz. Sus líneas de investigación incluyen modelos geoidales, altimetría y niveles del mar, redes geodésicas y sistemas GPS, y sistemas de información geográfica y teledetección.

J. Vidal, es Lic. en Ciencias del Mar en 1994 y Dr. en Ciencias del Mar en 2002 por la Universidad de Cádiz, España. Es profesor del Departamento de Construcciones Navales de la Universidad de Cádiz, España, e investigador del grupo RNM-160 (Andalusian Research Plan), adscrito al Centro Andaluz de Ciencia y Tecnologías Marinas. Sus líneas de investigación abarcan niveles del mar, modelos hidrodinámicos y aplicaciones medioambientales. Cuenta con experiencia como técnico especialista y, posteriormente, como profesor investigador en campañas oceanográficas durante más de 15 años. Es colaborador e investigador de proyectos antárticos del Laboratorio de Astronomía, Geodesia y Cartografía de la Universidad de Cádiz, España.

M. Berrocoso-Domínguez, es licenciado en Ciencias Matemáticas, especialidad de Astronomía, Geodesia y Mecánica Celeste, en 1985, por la Universidad Complutense de Madrid, España, y Dr. en Matemáticas en 1997 por la Universidad de Cádiz, España. Es profesor titular del área de Astronomía y Geodesia, adscrito al Departamento de Matemáticas de la Facultad de Ciencias, e investigador principal de proyectos de investigación financiados por el Gobierno de España. Es responsable del Grupo de Investigación del área de Recursos Naturales y Medio Ambiente RNM-314, Geodesia y Geofísica, de la Junta de Andalucía. Sus líneas de investigación principales son: geodesia clásica y espacial, satélites GNSS, deformación tectónica y volcánica, y redes geodésicas GNSS permanentes.




Effect of cellulose nanofibers concentration on mechanical, optical, and barrier properties of gelatin-based edible films Ricardo David Andrade-Pizarro a, Olivier Skurtys b & Fernando Osorio-Lira c

a Facultad de Ingeniería, Universidad de Córdoba, Montería, Colombia. rdandrade@correo.unicordoba.edu.co
b Universidad Técnica Federico Santa María, Valparaíso, Chile. olivier.skurtys@usm.cl
c Universidad de Santiago de Chile, Santiago, Chile. fernando.osorio@usach.cl
Received: August 30th, 2014. Received in revised form: November 4th, 2014. Accepted: November 14th, 2014

Abstract
The effect of gelatin, glycerol, and cellulose nanofiber (CNF) concentrations on the mechanical properties, water vapor permeability, and color parameters of gelatin-based films was evaluated. The results indicate that the color is affected only by the gelatin concentration. Mechanical tests indicated that increasing the concentrations of gelatin and CNFs increases the tensile strength, whereas an increase in glycerol concentration increases the elongation, making the films more flexible. Increased concentrations of gelatin and glycerol make the film more permeable to water vapor, while an increase in the concentration of CNFs reduces this property. Finally, the addition of CNFs to gelatin-based films improves their mechanical and barrier (water vapor) properties without affecting the appearance (color) of the films.
Keywords: barrier properties; tensile strength; elongation; color.

Efecto de la concentración de nanofibras de celulosa sobre las propiedades mecánicas, ópticas y de barrera en películas comestibles de gelatina Resumen Se evaluó el efecto de la concentración de gelatina, glicerol y nanofibras de celulosa (NFC) sobre las propiedades mecánicas, permeabilidad al vapor de agua, y los parámetros de color de películas a base de gelatina. Los resultados indican que el color es influenciado sólo por la concentración de gelatina. Las pruebas mecánicas indican que al aumentar la concentración de gelatina y NFC hay un aumento en la resistencia a la tracción, mientras que un aumento en la concentración de glicerol provoca un aumento en el porcentaje de elongación, haciendo que las películas sean más flexibles. Un aumento en la concentración de gelatina y glicerol aumenta la permeabilidad al vapor de agua, mientras que un aumento en la concentración de NFC reduce esta propiedad. Finalmente, la adición de NFC en películas a base de gelatina mejora sus propiedades mecánicas y de barrera (vapor de agua) sin afectar a la apariencia (color) de las películas Palabras clave: propiedades de barrera; resistencia a la tracción; elongación, color.

1. Introduction

Edible films have been considered for food preservation given their ability to improve food quality. Current interest in edible films is due to the need to develop packaging that is readily degradable and non-aggressive to the environment, as well as the need to open new markets for the materials

used in the fabrication of these films [1,2]. Generally, edible films are applied in combination with other technologies (cooling, controlled or modified atmosphere, heat treatment, etc.) in order to improve their quality, safety or to increase their shelf life, since they can act as a barrier, improve mechanical properties, avoid damaging different parts of a food, and even serve as carriers of additives and active components [3,4].

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 219-226. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.45296



In the preparation of edible films, a variety of biopolymers are used, such as polysaccharides, proteins, and lipids, alone or in combination to enhance their individual properties. Films based on proteins are often brittle and inflexible and thus require the addition of plasticizers such as glycerol [5], which modify the organization of the polymeric three-dimensional protein network, reducing the intermolecular attractive forces and increasing the free volume, favoring chain mobility [6]. Moreover, the form of the protein is of great importance for the formation of the networks that make up the matrix: high-molecular-weight proteins and fibrils, such as gelatin, may form larger networks that improve the mechanical properties [5]. However, given their hydrophilic nature, gelatin films are a poor water vapor barrier. Different alternatives have been tested to improve the water vapor barrier of gelatin-based films, including the addition of hydrophobic compounds such as lipids [7,8], modification of the polymer network via cross-linking of the protein chains [9,10], and the addition of nanocomposites, including cellulose nanofibers [11].

Cellulose nanofibers (CNFs) have been gaining considerable interest as reinforcement because they are more effective than their micro-sized counterparts in reinforcing polymers: they form a percolated network connected by hydrogen bonds, provided there is a good dispersion of the nanofibers in the matrix [12,13]. Many researchers have reported that the addition of CNFs improves the water vapor barrier properties of biopolymer films, such as chitosan films [14,15], pullulan films [16], and kappa-carrageenan films [13].

To evaluate the efficacy and quality of edible coatings, different parameters of the coated food in storage can be determined (loss of water, respiration rate, texture, color, pH, etc.). Measurements can also be taken directly from the film, including its mechanical, thermal, optical, and barrier properties [17,18]. Barrier properties (H2O, O2, and CO2) can greatly influence the stability of foods sensitive to the oxidation of lipids, vitamins, and pigments, or to substantial loss of water. However, an edible coating with very good barrier properties can become ineffective if its mechanical properties do not allow the integrity of the coating to be maintained during handling, packaging, and transport. The coatings must be resistant to breakage and abrasion, in order to reinforce the structure of the food and facilitate handling, and/or sufficiently flexible to accommodate any deformation of the product without tearing. Another important aspect to consider is consumer acceptance, as coating materials can produce sensory changes in the product.

The purpose of this work was to investigate the influence of the concentrations of gelatin, glycerol, and cellulose nanofibers on the mechanical and optical properties and the water vapor permeability of edible films.

2. Materials and methods

2.1. Materials

Type B gelatin from bovine skin (180 Bloom) was purchased from Rousselot (Brazil), and glycerol was purchased from Sigma (Sigma-Aldrich, Chile). CNFs were provided by the New Materials Research Group (Pontificia Bolivariana University, Colombia) and were obtained from agroindustrial waste (banana peel) as reported by Zuluaga et al. [19].

2.2. Preparation of film-forming solution (FFS)

FFSs were prepared with distilled water. Gelatin was hydrated at room temperature (20 ± 2 ºC) for 30 min and then heated at 50 ºC for 30 min with continuous stirring until it was completely dissolved. Glycerol and cellulose nanofibers were added at different concentrations (based on dry gelatin weight). CNFs were dispersed uniformly with the aid of ultrasound equipment operating at 40 kHz (Branson Model 2210, USA) for 30 min. FFSs were prepared at gelatin concentrations between 0.8 and 2.2% w/v; the glycerol concentration varied between 10 and 30% w/w based on gelatin, and the CNF concentration varied between 1 and 5% w/w based on gelatin.

2.3. Preparation of films

Gelatin-based edible films were obtained by the casting technique, which consists of dehydrating an FFS applied on a suitable support. 40 mL of the FFS were poured into Teflon plates with a diameter of 15.5 cm. The plates were kept at 22 ºC in a laboratory oven (LDO-150F model, LabTech, Korea) for 24 h. After drying, the films were peeled off the plate surface.

2.4. Edible film thickness

The film thickness was measured with a digital micrometer (Mitutoyo Co., Japan) with a sensitivity of 1 µm. The thickness was expressed as the average of 10 random measurements on the films cut and adapted for the mechanical and water vapor permeability tests.

2.5. Color parameters of edible films

The color of the films was determined using a Miniscan XE Plus colorimeter (HunterLab, USA), with the D65 (daylight) illuminant; the CIELab scale was used to determine the parameters L*, a*, and b*, where L* indicates the degree of brightness from 0 for black to 100 for white, a* indicates the position between red (+a*) and green (−a*), and b* indicates the position between yellow (+b*) and blue (−b*). The color of the film was expressed as the color difference (ΔE*) according to eq. (1).

ΔE* = √[(ΔL*)² + (Δa*)² + (Δb*)²]   (1)

where ΔL*, Δa*, and Δb* are the differences between each sample color parameter and the white board color standard (L* = 94.8, a* = −0.78, b* = 1.43), which is used as the background for determining the film color [20].
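For readers implementing this measurement, the following is a minimal Python sketch of eq. (1) against the white-board standard quoted above; delta_e is a hypothetical helper name, and the film reading in the example is illustrative, not a measured value.

```python
import math

# White-board color standard reported by the authors (L*, a*, b*).
WHITE_STANDARD = {"L": 94.8, "a": -0.78, "b": 1.43}

def delta_e(L: float, a: float, b: float, ref=WHITE_STANDARD) -> float:
    """CIELab color difference of eq. (1) against the background standard."""
    dL = L - ref["L"]
    da = a - ref["a"]
    db = b - ref["b"]
    return math.sqrt(dL**2 + da**2 + db**2)

# Illustrative film reading, not from the paper:
print(round(delta_e(92.5, -0.70, 3.60), 2))
```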


2.6. Mechanical properties of edible films

The mechanical properties of edible films were determined by stress and puncture tests using a universal testing machine (Zwick/Roell, Germany). The tensile strength (TS) and elongation (E%) of the films were determined according to the ASTM D882-95 method. Film specimen strips (100 mm × 25 mm) were cut and conditioned in a desiccator containing a saturated potassium carbonate (purity ≥ 99%, Sigma-Aldrich) solution (50% RH) at 22 ºC for 4 days prior to testing. The initial distance of separation between the tensile grips and the velocity were adjusted to 50 mm and 1 mm/s, respectively. TS was calculated by dividing the maximum load at break by the initial cross-sectional area of the specimen (thickness × width (25 mm)). E (%) was calculated by dividing the extension at break of the specimen by its initial gage length (50 mm) and expressed as a percentage. Each test trial per film consisted of seven replicate measurements.

The puncture test was run according to Gontard et al. [21]. Each film was mounted on a 46.2-mm-diameter puncture cell and perforated by a smooth-edged cylindrical probe (5 mm in diameter) moving at 1 mm/s (see Fig. 1). The puncture strength (PS) and the percentage deformation (Ep) were calculated by eqs. (2) and (3).

Figure 1: Diagram of puncture test. Source: Authors

PS = Fmax / ACS   (2)

Ep = {[√(r² + d²) − r] / r} × 100%   (3)

where Fmax is the maximum applied force (N), ACS is the cross-section of the film situated in the cell (ACS = 2rδ), r is the initial radius of the film, δ is the film thickness, and d is the movement of the probe from the point of contact with the film to the breaking point [22].
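As a minimal sketch, the snippet below implements the four quantities defined in this section, assuming SI inputs and the geometric reading of eq. (3) reconstructed above; the specimen numbers in the example are illustrative, not the authors' measurements.

```python
import math

def tensile_strength(f_max_n: float, thickness_m: float, width_m: float = 0.025) -> float:
    """TS (Pa): maximum load at break over the initial cross-sectional area."""
    return f_max_n / (thickness_m * width_m)

def elongation_pct(extension_m: float, gage_length_m: float = 0.050) -> float:
    """E (%): extension at break over the initial gage length."""
    return 100.0 * extension_m / gage_length_m

def puncture_strength(f_max_n: float, radius_m: float, thickness_m: float) -> float:
    """PS (Pa), eq. (2): Fmax over the film cross-section ACS = 2*r*delta."""
    return f_max_n / (2.0 * radius_m * thickness_m)

def puncture_deformation_pct(d_m: float, radius_m: float) -> float:
    """Ep (%), eq. (3): relative stretching of the film at the breaking point."""
    return 100.0 * (math.sqrt(radius_m**2 + d_m**2) - radius_m) / radius_m

# Illustrative example: 60 um film, 35 N break load; 23.1 mm cell radius.
print(tensile_strength(35.0, 60e-6) / 1e6)     # TS in MPa
print(puncture_deformation_pct(0.004, 0.0231)) # Ep in %
```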

2.7. Water vapor permeability of films

The water vapor transmission rate (WVTR) and water vapor permeability (WVP) were determined gravimetrically at 22 ºC, according to the method proposed by Gontard et al. [23]. Four edible films without visual defects (no bubbles or fractures) were selected, cut with a diameter of 21.6 mm, and placed on glass cells containing distilled water. A high-vacuum silicone grease (Merck, Germany) was used to seal the film to the cell. The cells were placed in a desiccator containing silica gel (0% RH) and maintained at 22 ºC in a laboratory oven (LabTech LDO-150F, Korea). The cells were weighed on an analytical balance (Precisa ES-DR 225SM, Germany) every 2 hours during the first 8 hours and then at 24 hours. The water vapor transmission rate was determined according to eq. (4).

WVTR = S / At   (4)

where S is the slope of the mass loss of the cells over time and At is the area (m²) through which the water vapor is transferred. The existence of a stagnant air layer inside and above the cup generates significant resistance to water vapor transport; therefore, it was necessary to correct the WVTR value according to the methodology proposed by Gennadios et al. [5].

2.8. Statistical design and analysis

The Box-Behnken statistical screening design was used to develop the model and to study and evaluate the main, interaction, and quadratic effects of the independent variables (gelatin, glycerol, and CNF concentrations) on the mechanical, optical, and water vapor barrier properties of gelatin-based edible films. The response surface methodology was applied to analyze the effect of the independent variables on the response variables. A second-order polynomial model (eq. (5)) was used to predict the experimental behavior.

Y = β0 + Σ βiXi + Σ βiiXi² + ΣΣ βijXiXj   (5)

where Y is the predicted value of the response; β0, βi, βii, and βij are the regression coefficients for the intercept and the linear, quadratic, and interaction effects, respectively; k is the number of independent parameters (k = 3 in this study); and Xi and Xj are the coded levels of the experimental conditions. Analysis of variance (ANOVA) was applied to determine the significant effects of gelatin, glycerol, and CNF concentrations on the properties of gelatin-based edible films. The quality of the developed model was assessed by the coefficient of determination (R²) and the root mean square error (RMSE). The design was analyzed and three-dimensional response surface plots were drawn using JMP 9.0.1 software (SAS Institute).
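A minimal sketch of the three-factor Box-Behnken layout and of the regressor vector of eq. (5) is given below; box_behnken and model_terms are illustrative helper names, not part of the JMP workflow used by the authors.

```python
from itertools import combinations

def box_behnken(k: int = 3, center_runs: int = 3):
    """Coded design points: +/-1 on every factor pair, others at 0, plus centers."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                point = [0] * k
                point[i], point[j] = a, b
                runs.append(point)
    runs += [[0] * k for _ in range(center_runs)]
    return runs

def model_terms(x):
    """Regressors of eq. (5): intercept, linear, interactions, quadratics."""
    k = len(x)
    terms = [1.0] + list(x)
    terms += [x[i] * x[j] for i in range(k) for j in range(i + 1, k)]
    terms += [xi * xi for xi in x]
    return terms

design = box_behnken()
print(len(design), len(model_terms(design[0])))  # 15 runs, 10 coefficients
```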

3. Results and discussion

3.1. Color of edible films

Color evaluation is an important quality parameter for potential industrial applications of edible films, because


Figure 2: Response surface of the color difference of the films as a function of (a) gelatin and glycerol (3% w/w cellulose nanofibers), (b) gelatin and cellulose nanofibers (20% w/w glycerol). Source: Authors

consumers often associate aspects such as the brightness and color of food—which can be affected by coatings and edible films—with food quality. All formulations were assessed visually, and the films obtained were transparent, without insoluble particles and with a generally good appearance. The ANOVA shows that the gelatin concentration (p = 0.004) significantly affects the color difference (ΔE*) of edible films, at a significance level of 95%. Fig. 2 shows the response surface of the color difference of edible films in terms of the concentrations of gelatin–glycerol (Fig. 2a) and gelatin–CNF (Fig. 2b). In the presence of glycerol and CNFs, ΔE* increases linearly with the concentration of gelatin, irrespective of the other constituents. This increase was approximately 71.6% when the concentration of gelatin rose from 0.8% w/v (ΔE* = 2.23) to 2.2% w/v (ΔE* = 3.83), and is due to the greater amount of solids contained in the film as the gelatin concentration increases. Vanin et al. [24] reported that, for gelatin-based films (2% w/v), the concentration and type of plasticizer (glycerol, diethylene glycol and propylene glycol) do not affect ΔE*. The behavior of the color difference (ΔE*) of edible films may be represented as a function of the concentrations of gelatin (G), glycerol (g), and CNFs (C) according to eq. (6). This equation has R² and RMSE values of 0.94 and 0.302, respectively, indicating the suitability of the proposed model to represent the experimental data.

ΔE* = 3.03 + 0.80G + 0.20g + 0.16C + 0.09(G∗g) + 0.01(G∗C) + 0.03(g∗C) + 0.01G² + 0.01g² + 0.11C²   (6)

Considering only the factors that significantly influence the color difference of edible films, eq. (6) can be expressed as eq. (7).

ΔE* = 3.03 + 0.80G   (7)

3.2. Mechanical properties of edible films

3.2.1. Stress test of edible films

Tensile strength (TS) of edible films

The ANOVA showed that the linear effects of gelatin (p = 0.0017) and CNF concentration (p = 0.0074) and the quadratic effect of glycerol concentration (p = 0.0312) significantly affect the tensile strength (TS) of edible films, at a significance level of 95%. The values at fracture of the edible films agree with those reported by Chambi and Grosso [25] for gelatin-based films (TS ≈ 60 MPa). Gelatin has a loosely organized structure that may be renatured during the film formation process [26], as it is able to reacquire part of the triple-helix structure of collagen. According to Siew et al. [27], this increased chain organization optimizes molecular packing, resulting in improved mechanical properties. Fig. 3 shows the response surface of the tensile strength of the edible films. An increased gelatin concentration (see Fig. 3a) produces an increase in the tensile strength of the edible films. High gelatin concentrations can produce a greater number of physical connections during the formation of the network, so that a greater force is required to rupture the films [28]. Moreover, the addition of CNFs increases the tensile strength (Fig. 3b), as has been observed in different polymeric matrices: chitosan [12] and starch [29]. George and Siddaramaiah [30] reported that for gelatin films (10%) the addition of 4% bacterial cellulose nanocrystals produces a 30% increase in tensile strength. The increased tensile strength due to the addition of CNFs suggests a uniform dispersion of the fibers within the gelatin matrix and good CNF–gelatin adhesion interactions, as suggested by Gardner et al. [31] and Xu et al. [32] for other matrices. The addition of glycerol had a quadratic effect on TS. This was unexpected: although it is well known that increasing the glycerol concentration decreases the fracture stress, Sobral et al. [33] observed a decrease, shaped like a parabolic segment, in the TS of films made of meat myofibrillar proteins (between 25 and 100 g glycerin/100 g protein) acidified with acetic or lactic acid. Remarkably, of the three components tested, the concentration of gelatin had the greatest influence on TS, showing an increase of 124% when the gelatin concentration increased from 0.8% w/v (TS = 23.50 MPa) to 2.2% w/v (TS = 52.72 MPa), while an increase in the concentration of CNFs from 1% w/w (TS = 29.05 MPa) to 5% w/w (TS = 47.17 MPa) causes a 62.4% increase in TS.

Figure 3: Response surface of the tensile strength of films as a function of (a) gelatin and glycerol (3% w/w cellulose nanofibers), (b) gelatin and CNFs (20% w/w glycerol). Source: Authors



Eq. (8) shows the relationship representing TS as a function of the concentrations of gelatin (G), glycerol (g), and CNFs (C). In this equation, all factors are coded, regardless of whether they are significant.

TS = 38.1 + 14.6G + 0.6g + 9.0C + 0.7(G∗g) + 0.8(G∗C) + 1.0(g∗C) + 0.4G² + 1.8g² + 0.1C²   (8)

The values of R² (0.96) and RMSE (4.54) indicate the goodness of the quadratic polynomial in representing the experimental data.

Elongation of edible films

The ANOVA shows that the elongation (%) of the films is significantly influenced (95%) by the glycerol concentration (p = 0.0123). Fig. 4 shows the response surface of the elongation of edible films as a function of the concentrations of glycerol–gelatin (Fig. 4a) and glycerol–CNF (Fig. 4b). An increase in glycerol concentration from 10% w/w (E = 3.99%) to 30% w/w (E = 6.91%) causes a 73.2% increase in the percentage elongation. This effect of glycerol on elongation has been widely reported by several authors [22,24,34]. The addition of plasticizer in the preparation of edible films reduces the interactions between the chains of the biopolymers [28]. According to Jongjareonrak et al. [35] and Ayala et al. [36], the glycerol molecule is a small, hygroscopic chain; thus, it is easily inserted between the protein chains, attracting more water into the structure of the film and making it more flexible. Remarkably, Carvalho [37] reported that gelatin concentration does not affect the elongation of films made of gelatin and sorbitol. Moreover, in contrast to this study, Azeredo et al. [12] observed that CNFs significantly affect the elongation of films; this is possibly explained by the better adhesion of CNFs to the matrix (gelatin) used in this study. The elongation percentage (E, %) of the edible films can be represented according to eq. (9) with the coded factors gelatin concentration (G), glycerol (g), and CNFs (C). The accuracy of the model (eq. (9)) in representing the experimental data is shown by the obtained values of R² = 0.89 and RMSE = 0.84.

E% = 5.5 + 0.6G + 1.5g + 0.1C + 0.7(G∗g) + 0.1(G∗C) + 0.2(g∗C) + 0.3G² + 0.1g² + 0.1C²   (9)

Figure 4: Response surface of elongation of the films as a function of (a) glycerol and gelatin (3% w/w cellulose nanofibers), (b) glycerol and CNFs (1.5 %w/v gelatin). Source: Authors

3.2.2. Puncture test

Puncture resistance test

The ANOVA shows that the puncture strength (PS) is significantly influenced by the concentrations of gelatin (p = 0.0498) and CNFs (p = 0.0189). Fig. 5 shows the response surface of the PS of edible films in terms of the concentrations of gelatin–glycerol (Fig. 5a) and gelatin–CNF (Fig. 5b). An increase in the gelatin concentration from 0.8 to 2.2% w/v causes a 90.2% increase in PS, whereas when the concentration of CNFs increases from 1 to 5%, the PS value increases by 121.1%. The contribution of these compounds to the mechanical properties was described in Section 3.2.1.

Figure 5: Response surface of the PS of the films as a function of (a) gelatin and glycerol (3% w/w cellulose nanofibers), (b) gelatin and CNFs (20% w/w glycerol). Source: Authors

Puncture deformation

The ANOVA shows that the puncture deformation was significantly affected only by the linear effect of glycerol concentration (p = 0.0305). In Figs. 6a and 6b, we can see that increasing the glycerol concentration increases the percentage of puncture deformation by approximately 40%. The addition of plasticizers (glycerol) reduces the intermolecular forces between the protein molecules, thereby increasing the flexibility and extensibility of the films [38,39].

Figure 6: Response surface of percentage of puncture deformation of the films as a function of (a) glycerol and gelatin (3% w/w cellulose nanofibers), (b) glycerol and CNFs (1.5 %w/v gelatin). Source: Authors


These data corroborate those obtained in the stress test. According to Vanin et al. [24], in a gelatin matrix glycerol has a greater effect on the puncture deformation than ethylene glycol and propylene glycol. The puncture deformation percentage (Ep, %) of the edible films can be represented according to eq. (10) with the coded factors gelatin concentration (G), glycerol (g), and CNFs (C). The accuracy of the model (eq. (10)) in representing the experimental data is shown by the obtained values of R² = 0.86 and RMSE = 0.32.

Ep% = 1.8 + 0.3G + 0.4g + 0.2C + 0.05(G∗g) + 0.03(G∗C) + 0.03(g∗C) + 0.07G² + 0.1g² + 0.1C²   (10)

3.3. Water vapor permeability of edible films

Table 1 shows the water vapor transfer rate (WVTR) and water vapor permeability (WVP) of edible films of gelatin, glycerol, and CNFs. The measured values of the water vapor transfer rate (WVTRm) underestimate the corrected values (WVTRc), introducing errors of 18 to 37%. The ANOVA shows that, with a confidence level of 95%, the water vapor permeability of edible films is significantly influenced by the linear effects of gelatin (p = 0.0038), glycerol (p = 0.0211), and CNFs (p = 0.0214) and by the quadratic effect of gelatin concentration (p = 0.0240). Fig. 7 shows the response surface of the water vapor permeability of edible films in terms of the concentrations of gelatin–glycerol (Fig. 7a) and gelatin–CNF (Fig. 7b). An increase in the gelatin concentration from 0.8 to 2.2% w/v causes an increase in WVP of 150%, which may be caused by swelling of the film due to water, which creates

Table 1. Water vapor permeability of edible films based on gelatin–glycerol–CNF.
G (% w/v)  g (% w/w)  C (% w/w)  WVTRm ×10³ (g/m²s)  WVTRc ×10³ (g/m²s)  Error (%)  WVP ×10¹¹ (g/m·s·Pa)
2.2  20  5  3.08  3.79  18.7   9.05
0.8  30  3  4.66  6.54  28.8   8.95
1.5  30  5  3.42  4.32  20.9   8.11
1.5  20  3  4.40  6.77  35.0  11.50
1.5  30  1  4.36  5.97  26.9  11.50
1.5  20  3  4.13  6.17  33.0  11.70
2.2  20  1  4.07  5.42  25.0  12.69
2.2  10  3  2.85  3.67  22.4   8.39
1.5  10  5  3.07  3.76  18.6   7.03
0.8  20  5  3.75  5.35  29.8   5.15
2.2  30  3  3.52  4.48  21.5  12.15
1.5  10  1  3.92  5.71  31.2  11.47
0.8  10  3  3.42  4.69  27.1   3.81
1.5  20  3  4.62  7.32  36.9  12.20
0.8  20  1  4.58  6.39  28.3   8.23
Source: Authors
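As a cross-check of the modeling described in Section 2.8, the following sketch (an assumed workflow, not the authors' JMP session) fits the full second-order model of eq. (5) to the coded design and the WVP ×10¹¹ column of Table 1 by ordinary least squares.

```python
import numpy as np

runs = [  # (G % w/v, g % w/w, C % w/w, WVP x 10^11 g/m.s.Pa) from Table 1
    (2.2, 20, 5, 9.05), (0.8, 30, 3, 8.95), (1.5, 30, 5, 8.11),
    (1.5, 20, 3, 11.50), (1.5, 30, 1, 11.50), (1.5, 20, 3, 11.70),
    (2.2, 20, 1, 12.69), (2.2, 10, 3, 8.39), (1.5, 10, 5, 7.03),
    (0.8, 20, 5, 5.15), (2.2, 30, 3, 12.15), (1.5, 10, 1, 11.47),
    (0.8, 10, 3, 3.81), (1.5, 20, 3, 12.20), (0.8, 20, 1, 8.23),
]

def coded(value, low, high):
    """Map a natural level to the coded -1..+1 scale of the design."""
    return (value - (low + high) / 2) / ((high - low) / 2)

X = np.array([[coded(G, 0.8, 2.2), coded(g, 10, 30), coded(C, 1, 5)]
              for G, g, C, _ in runs])
y = np.array([wvp for *_, wvp in runs])

# Design matrix: intercept, linear, pairwise interaction, and quadratic terms.
A = np.column_stack([np.ones(len(X)), X,
                     X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2],
                     X**2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)
print(np.round(beta, 2), round(float(r2), 2))
```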

Figure 7: Response surface of the water vapor permeability of the films as a function of (a) gelatin and glycerol (3% w/w cellulose nanofibers), (b) gelatin and CNFs (20% w/w glycerol). Source: Authors

Figure 8: Response surface of the thickness of the films as a function of gelatin and CNFs (20% w/w glycerol). Source: Authors

various structures in the film. In addition, increasing the concentration of gelatin produces a linear increase in thickness (see Fig. 8), in agreement with Carvalho [37]. Benbettaïeb et al. [40] observed that WVP increased linearly when the thickness increased from 52 to 159 μm. In ideal polymeric structures, the water vapor permeability is independent of the thickness of the film [38], but this does not hold for hydrophilic films. An increase in film thickness provides a greater resistance to mass transfer through it, and consequently the partial pressure of water vapor in equilibrium at the inner film surface increases. Significantly, the permeability to water vapor in gelatin-based films is not an inherent property of the films, because the rate of water vapor transmission through hydrophilic films varies nonlinearly with the gradient of partial pressure of water vapor [41]. Moreover, increasing the concentration of glycerol (10 to 30% w/w) increases the WVP by 31%; several authors have observed this behavior [24,38,39,42]. The addition of glycerol results in a reorganization of the network formed by the protein, reducing the intermolecular attractive forces and increasing the free volume and chain mobility, thus increasing the diffusion coefficient of water [43,44]. On this point, Nemet [45] reported that an increase in the content of glycerol (a hydrophilic component) increases the amount of polar groups present in the gelatin film. Finally, of the three components of the formulations, only the CNF concentration decreased the WVP values: the addition of 1 to 5% w/w of CNFs causes a 32% decrease in WVP. Several authors have reported that the addition of CNFs decreases the permeability of films prepared from alginate [14], chitosan [12], and gelatin [30]. This could be due to the interaction between CNFs and the hydrophilic sites of the gelatin, and to the fact that CNFs have low hygroscopicity. According to Lagaron et al. [46], the presence of crystalline fibers increases tortuosity, which causes slower diffusion and hence lower permeability. Eq. (11) shows the relationship representing the water vapor permeability as a function of the concentrations of gelatin (G), glycerol (g), and CNFs (C). In this equation, all factors are coded, regardless of whether they are significant. The values of R² (0.92) and RMSE (1.27) indicate the goodness of the quadratic polynomial in representing the experimental data.

WVP = 10.0 + 3.4G + 1.4g − 1.9C + 0.1(G∗g) + 0.1(G∗C) + 0.2(g∗C) − 2.1G² + 0.3g² + 0.9C²   (11)

where WVP (×10¹¹) is the water vapor permeability in g/m·s·Pa.

4. Conclusion

Films made of gelatin, glycerol, and cellulose nanofibers (CNFs) have a good visual appearance, confirming the good dispersion of CNFs in the gelatin matrix. Increasing the gelatin concentration increases the color difference of the films linearly. Determination of the mechanical properties (stress and puncture tests) of the edible films shows that increasing the concentrations of gelatin and CNFs increases the fracture stress and puncture resistance, suggesting a uniform dispersion of the CNFs in the matrix and good CNF–gelatin adhesion interactions. Moreover, an increase in glycerol concentration increases the flexibility and elongation of gelatin-based films. Water vapor permeability is also influenced by the concentrations of gelatin, glycerol, and CNFs. An increase in the concentration of gelatin and glycerol increases the WVP by 150% and 31%, respectively. However, the addition of CNFs to the gelatin matrix causes a 32% reduction in WVP, because the nanofibers provide a more tortuous path to the water vapor passing through the film. The addition of CNFs to the gelatin matrix can improve the mechanical properties (by 62.4%) and the water vapor permeability (by 32%) without affecting the color of the films, which are desirable characteristics in most applications of edible films and coatings.

Acknowledgement

The authors would like to thank Fondecyt (project 1130587). O.S. is grateful to Fondecyt (project 1120661) for financial support.

References

[1] Chillo, S., Flores, S., Mastromatteo, M., Conte, A., Gerschenson, L. and Del Nobile, M.A., Influence of glycerol and chitosan on tapioca starch-based edible film properties. J Food Eng, 88 (2), pp. 159-168, 2008. DOI:10.1016/j.jfoodeng.2008.02.002
[2] Koelsch, C., Edible water vapor barriers: Properties and promise. Trends Food Sci Tech, 5 (3), pp. 76-81, 1994. DOI:10.1016/0924-2244(94)90241-0
[3] Debeaufort, F., Quezada-Gallo, J. and Voilley, A., Edible films and coatings: Tomorrow's packagings. A review. Crit Rev Food Sci Nutr, 38 (4), pp. 299-313, 1998. DOI:10.1080/10408699891274219
[4] Pavlath, A. and Orts, W., Edible films and coatings: Why, what, and how? In: Embuscado, M. and Huber, K., Eds., Edible films and coatings for food applications, Springer, 2nd edition, pp. 1-24, 2009.
[5] Gennadios, A., McHugh, T., Weller, C. and Krochta, J.M., Edible coatings and films based on proteins. In: Krochta, J., Baldwin, E. and Nisperos-Carriedo, M., Eds., Edible coatings and films to improve food quality, Lancaster, PA: Technomic Publishing Company, pp. 201-277, 1994.
[6] Chang, C. and Nickerson, M.T., Effect of plasticizer-type and genipin on the mechanical, optical, and water vapor barrier properties of canola protein isolate-based edible films. Eur Food Res Technol, 238 (1), pp. 35-46, 2014. DOI:10.1007/s00217-013-2075-x
[7] Ma, W., Tang, C., Yin, S., Yang, X., Wang, Q., Liu, F. and Wei, Z., Characterization of gelatin-based edible films incorporated with olive oil. Food Res Int, 49 (1), pp. 572-579, 2012. DOI:10.1016/j.foodres.2012.07.037
[8] McHugh, T. and Senesi, E., Apple wraps: A novel method to improve the quality and extend the shelf life of fresh-cut apples. J. Food Sci., 65 (3), pp. 480-485, 2000. DOI:10.1111/j.1365-2621.2000.tb16032.x
[9] Chambi, H. and Grosso, C., Edible films produced with gelatin and casein cross-linked with transglutaminase. Food Res Int, 39 (4), pp. 458-466, 2006. DOI:10.1016/j.foodres.2005.09.009
[10] Mu, C., Guo, J., Li, X., Lin, W. and Li, D., Preparation and properties of dialdehyde carboxymethyl cellulose crosslinked gelatin edible films. Food Hydrocolloid, 27 (1), pp. 22-29, 2012. DOI:10.1016/j.foodhyd.2011.09.005
[11] Chang, S., Chen, L., Lin, S. and Chen, H., Nano-biomaterials application: Morphology and physical properties of bacterial cellulose/gelatin composites via crosslinking. Food Hydrocolloid, 27 (1), pp. 137-144, 2012. DOI:10.1016/j.foodhyd.2011.08.004
[12] Azeredo, H., Mattoso, L., Avena-Bustillos, R., Filho, G., Munford, M., Wood, D. and McHugh, T., Nanocellulose reinforced chitosan composite films as affected by nanofiller loading and plasticizer content. J Food Sci, 75 (1), pp. N1-N7, 2010. DOI:10.1111/j.1750-3841.2009.01386.x
[13] Savadekar, N.R., Karande, V.S., Vigneshwaran, N., Bharimalla, A.K. and Mhaske, S.T., Preparation of nano cellulose fibers and its application in kappa-carrageenan based film. Int J Biol Macromol, 51 (5), pp. 1008-1013, 2012. DOI:10.1016/j.ijbiomac.2012.08.014
[14] Azeredo, H., Miranda, K., Ribeiro, H., Rosa, M. and Nascimento, D., Nanoreinforced alginate–acerola puree coatings on acerola fruits. J Food Eng, 113 (4), pp. 505-510, 2012. DOI:10.1016/j.jfoodeng.2012.08.006
[15] Wu, T., Farnood, R., O'Kelly, K. and Chen, B., Mechanical behavior of transparent nanofibrillar cellulose–chitosan nanocomposite films in dry and wet conditions. J Mech Behav Biomed, 32, pp. 279-286, 2012. DOI:10.1016/j.jmbbm.2014.01.014
[16] Trovatti, E., Fernandes, S.C.M., Rubatat, L., Perez, D., Freire, C.S.R., Silvestre, A.J.D. and Neto, C.P., Pullulan–nanofibrillated cellulose composite films with improved thermal and mechanical properties. Compos Sci Technol, 72 (13), pp. 1556-1561, 2012. DOI:10.1016/j.compscitech.2012.06.003
[17] Guilbert, S. and Gontard, N., Edible and biodegradable food packaging. In: Ackermann, J. and Ohlsson, T., Eds., Foods and Packaging Materials—Chemical Interactions, England: The Royal Society of Chemistry, pp. 159-168, 1995.


[18] Wan, Y., Luo, H., He, F., Liang, H., Huang, Y. and Li, X., Mechanical, moisture absorption, and biodegradation behaviours of bacterial cellulose fibre-reinforced starch biocomposites. Compos Sci Technol, 69 (7-8), pp. 1212-1217, 2009. DOI:10.1016/j.compscitech.2009.02.024
[19] Zuluaga, R., Putaux, J., Cruz, J., Vélez, J., Mondragon, I. and Gañán, P., Cellulose microfibrils from banana rachis: Effect of alkaline treatments on structural and morphological features. Carbohyd Polym, 76 (1), pp. 51-59, 2009. DOI:10.1016/j.carbpol.2008.09.024
[20] Sobral, P., Santos, J. and García, F., Effect of protein and plasticizer concentrations in film forming solutions on physical properties of edible films based on muscle proteins of a Thai Tilapia. J Food Eng, 70 (1), pp. 93-100, 2005. DOI:10.1016/j.jfoodeng.2004.09.015
[21] Gontard, N., Guilbert, S. and Cuq, J., Edible wheat gluten films: Influence of the main process variables on film properties using response surface methodology. J. Food Sci., 57 (1), pp. 190-195, 1992. DOI:10.1111/j.1365-2621.1992.tb05453.x
[22] Sobral, P., Menegalli, F., Hubinger, M. and Roques, M., Mechanical, water vapor barrier and thermal properties of gelatin based edible films. Food Hydrocolloid, 15 (4-6), pp. 423-432, 2001. DOI:10.1016/S0268-005X(01)00061-3
[23] Gontard, N., Guilbert, S. and Cuq, J., Water and glycerol as plasticizers affect mechanical and water vapor barrier properties of an edible wheat gluten film. J. Food Sci., 58 (1), pp. 206-211, 1993. DOI:10.1111/j.1365-2621.1993.tb03246.x
[24] Vanin, F., Sobral, P., Menegalli, F., Carvalho, R. and Habitante, A., Effects of plasticizers and their concentrations on thermal and functional properties of gelatin-based films. Food Hydrocolloid, 19 (5), pp. 899-907, 2005. DOI:10.1016/j.foodhyd.2004.12.003
[25] Chambi, H. and Grosso, C., Mechanical and water vapor permeability properties of biodegradable films based on methylcellulose, glucomannan, pectin and gelatin. Food Science and Technology (Campinas), 31 (3), pp. 739-746, 2011. DOI:10.1590/S0101-20612011000300029
[26] Achet, D. and He, X., Determination of the renaturation level in gelatin films. Polymer, 36 (4), pp. 787-791, 1995. DOI:10.1016/0032-3861(95)93109-Y
[27] Siew, D., Heilmann, C., Easteal, A. and Cooney, R., Solution and film properties of sodium caseinate/glycerol and sodium caseinate/polyethylene glycol edible coating systems. J Agr Food Chem, 47 (8), pp. 3432-3440, 1999. DOI:10.1021/jf9806311
[28] Arvanitoyannis, I., Formation and properties of collagen and gelatin films and coatings. In: Gennadios, A., Ed., Protein-based films and coatings, Boca Raton: CRC Press, pp. 730-739, 2002.
[29] Wan, Y.Z., Luo, H., He, F., Liang, H., Huang, Y. and Li, X.L., Mechanical, moisture absorption, and biodegradation behaviours of bacterial cellulose fibre-reinforced starch biocomposites. Composites Science and Technology, 69 (7-8), pp. 1212-1217, 2009. DOI:10.1016/j.compscitech.2009.02.024
[30] George, J. and Siddaramaiah, High performance edible nanocomposite films containing bacterial cellulose nanocrystals. Carbohydr Polym, 87 (3), pp. 2031-2037, 2012. DOI:10.1016/j.carbpol.2011.10.019
[31] Gardner, D., Oporto, G., Mills, R. and Samir, M., Adhesion and surface issues in cellulose and nanocellulose. J Adhes Sci Technol, 22 (5-6), pp. 545-567, 2008. DOI:10.1163/156856108X295509
[32] Xu, Y., Ren, X. and Hanna, M., Chitosan/clay nanocomposite film preparation and characterization. J Appl Polym Sci, 99 (4), pp. 1684-1691, 2006. DOI:10.1002/app.22664
[33] Sobral, P., Ocuno, D. and Savastano, H., Preparo de proteínas miofibrilares de carne e elaboracao de biofilmes com dois tipos de acidos: Propriedades mecanicas. Brazilian Journal of Food Technology, 1, pp. 44-52, 1998.
[34] Maria, T., de Carvalho, R., Sobral, P., Habitante, A. and Solorza-Feria, J., The effect of the degree of hydrolysis of the PVA and the plasticizer concentration on the color, opacity, and thermal and mechanical properties of films based on PVA and gelatin blends. J Food Eng, 87 (2), pp. 191-199, 2008. DOI:10.1016/j.jfoodeng.2007.11.026

[35] Jongjareonrak, A., Benjakul, S., Visessanguan, W., Prodpran, T. and Tanaka, M., Characterization of edible films from skin gelatin of brownstripe red snapper and bigeye snapper. Food Hydrocolloid, 20 (4), pp. 492-501, 2006. DOI:10.1016/j.foodhyd.2005.04.007
[36] Ayala, G., Agudelo, A. and Vargas, R., Effect of glycerol on the electrical properties and phase behavior of cassava starch biopolymers. DYNA, 79 (171), pp. 138-147, 2012.
[37] Carvalho, R., Desenvolvimento e caracterizaçao de biofilmes a base de gelatina. Master's Thesis, Universidade Estadual de Campinas, Campinas, Brasil, 1997.
[38] Bertuzzi, M., Vidaurre, E.C., Armada, M. and Gottifredi, J., Water vapor permeability of edible starch based films. J Food Eng, 80 (3), pp. 972-978, 2007. DOI:10.1016/j.jfoodeng.2006.07.016
[39] Hanani, Z., McNamara, J., Roos, Y. and Kerry, J., Effect of plasticizer content on the functional properties of extruded gelatin-based composite films. Food Hydrocolloid, 31 (2), pp. 264-269, 2013. DOI:10.1016/j.foodhyd.2012.10.009
[40] Benbettaïeb, N., Kurek, M., Bornaz, S. and Debeaufort, F., Barrier, structural and mechanical properties of bovine gelatin–chitosan blend films related to biopolymer interactions. J Sci Food Agric, 94 (12), pp. 2409-2419, 2014. DOI:10.1002/jsfa.6570
[41] McHugh, T., Avena-Bustillos, R. and Krochta, J., Hydrophilic edible films: Modified procedure for water vapor permeability and explanation of thickness effects. J. Food Sci., 58 (4), pp. 899-903, 1993. DOI:10.1111/j.1365-2621.1993.tb09387.x
[42] Cerqueira, M., Souza, B., Teixeira, J. and Vicente, A., Effect of glycerol and corn oil on physicochemical properties of polysaccharide films – a comparative study. Food Hydrocolloid, 27 (1), pp. 175-184, 2012. DOI:10.1016/j.foodhyd.2011.07.007
[43] Cuq, B., Gontard, N., Cuq, J. and Guilbert, S., Selected functional properties of fish myofibrillar protein-based films as affected by hydrophilic plasticizers. J. Agric. Food Chem, 45 (3), pp. 622-626, 1997. DOI:10.1021/jf960352i
[44] Guilbert, S., Gontard, N. and Cuq, B., Technology and applications of edible protective films. Packag Technol Sci, 8 (6), pp. 339-346, 1995. DOI:10.1002/pts.2770080607
[45] Nemet, N., Soso, V. and Lazic, V., Effect of glycerol content and pH value of film-forming solution on the functional properties of protein-based edible films. Acta Periodica Technologica, 41, pp. 57-67, 2010. DOI:10.2298/APT1041057N
[46] Lagaron, J., Catalá, R. and Gavara, R., Structural characteristics defining high barrier properties in polymeric materials. Mater Sci Tech Ser, 20 (1), pp. 1-7, 2004. DOI:10.1179/026708304225010442

R.D. Andrade-Pizarro, received his BSc. in Chemical Engineering in 1994 from the University of Atlántico, Colombia; completed a Specialization in Food Science and Technology in 2004 at the University of Magdalena, Colombia, and a PhD in Food Science and Technology at the University of Santiago de Chile, Chile. Currently, he is an associate professor in the Food Engineering Department at the University of Cordoba, Colombia. He is a member of the Society of Rheology (2009-present) and the International Society of Food Engineering (2008-present), and a member of the scientific board of the journal VITAE. His research interests include: rheology and texture of foods, food processing, and edible coatings by spraying.

O. Skurtys, completed an MSc in Aerodynamics, Fluid Mechanics, Combustion and Thermal Sciences in 1999, and a PhD in Engineering Sciences in 2004, both from the Université de Poitiers, École Nationale Supérieure de Mécanique et d'Aérotechnique, in the Thermal Studies Laboratory. His research interests include: fluid mechanics, granular matter, soft matter, surface science, and edible coatings by spraying.

F. Osorio-Lira, received a BSc. Eng in Chemical Engineering in 1977 from Universidad de Chile, Chile, an MSc in Food Science in 1985 from Michigan State University, USA, and a dual PhD in Food Engineering and Food Science in 1989 from Michigan State University, USA. His research interests include: rheology and texture of foods, flow of non-Newtonian fluids, and edible coatings by spraying.



Tribological properties of BixTiyOz films grown via RF sputtering on 316L steel substrates Johanna Parra a, Oscar Piamba a, Jhon Olaya a & José Edgar Alfonso b

a Grupo de Corrosión Energía y Tribología, Universidad Nacional de Colombia, Bogotá, Colombia. jpparrasu@unal.edu.co, jjolayaf@unal.edu.co, oepiambat@unal.edu.co
b Grupo de Ciencia de Materiales y Superficies, Departamento de Física, Universidad Nacional de Colombia, Bogotá, Colombia. jealfonsoou@unal.edu.co
Received: September 2nd, 2014. Received in revised form: November 26th, 2014. Accepted: December 16th, 2014

Abstract
In this paper, we present the results of the surface chemical analysis, morphological characterization, and evaluation of the tribological properties of amorphous bismuth titanate (BixTiyOz) coatings deposited on 316L stainless steel substrates using the RF sputtering technique. The elemental chemical analysis was performed using Auger electron spectroscopy (AES), and the morphology of the coatings was determined by atomic force microscopy (AFM). Friction coefficient and wear rate measurements were obtained by ball-on-disc tests. AES analyses established that the first 10 nm of the coatings are probably composed of Bi4Ti3O12 and Ti2O3, and AFM measurements indicate that the coatings have an average roughness of 22.28 nm and a grain size of 50 nm. Finally, the tribological tests established that the coefficient of friction and wear rate of the coated steel have values similar to those of the bare steel.
Keywords: amorphous titanate; spectroscopy; tribology.

Propiedades tribológicas de Películas de BixTiyOz producidas por RF sputtering sobre sustratos de acero 316L Resumen En este trabajo se presentan los resultados obtenidos en el análisis químico superficial, la caracterización morfológica y evaluación de las propiedades tribológicas de recubrimientos de titanato de bismuto amorfo (BixTiyOz) depositados sobre sustratos de acero inoxidable 316L utilizando la técnica de pulverización catódica rf. El análisis químico elemental se realizó por medio de espectroscopia de electrones Auger (EEA), la morfología de los recubrimientos se determinó mediante microscopia de fuerza atómica (MFA). Las medidas del coeficiente de fricción y la tasa de desgaste fueron obtenidas mediante pruebas de bola sobre disco. Los análisis de EEA permitieron establecer que los primeros 10 nm de los recubrimientos están formados probablemente por óxidos de Bi4Ti3O12 y Ti2O3, las medidas de AFM indican que los recubrimientos tienen una rugosidad promedio de 22.28nm y un tamaño de grano de 50nm. Finalmente, las pruebas tribológicas establecieron que el coeficiente de fricción y la tasa de desgaste del acero recubierto tiene valores similares al acero desnudo. Palabras clave: Titanato amorfo, espectroscopia, tribología

1. Introduction

Ferroelectric thin films are of interest in many fields, such as the electronic and microelectronic industries, and in particular those that deal with bismuth titanate (BIT), a ceramic material that has a high Curie temperature, good resistance to fatigue, and interesting photocatalytic

properties, which allow it to be useful in devices such as nonvolatile memories, capacitors, and optical memories [1]. These coatings can be deposited through sputtering to obtain a high density, compact and homogeneous material [2,3]. It is important to note that there are numerous papers in the scientific literature that show the technological and industrial applications of BIT, but there is insufficient data regarding

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 227-230. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.45373



its tribological behavior. During the last few years many phases of bismuth titanate have been synthesized using various techniques, for instance chemical solution decomposition (CSD) [4], reactive sintering [5], and RF magnetron sputtering [6,7], obtaining diverse stoichiometries of BIT, although in general these types of films grow in an amorphous structure [8]. In the present paper, bismuth titanate (BixTiyOz) was deposited on 316L stainless steel substrates via RF magnetron sputtering. The microstructural characterization showed that the films were amorphous, and their tribological properties, such as resistance to wear and the coefficient of friction, were determined. The results showed that these films exhibit a protective effect that reduces the rate of wear when compared to the substrate.

2. Experimental method

The dimensions of the 316L stainless steel substrates were 19.00 mm × 3.00 mm; the surface was prepared by mechanical grinding with sandpaper from number 300 to 1200 and alumina (Al2O3) with a grain size of 0.02 µm. The substrates were cleaned using 50 mL of deionized water in ultrasound for 10 minutes. The samples were then dipped into 50 mL of acetone in order to remove organic compounds. Finally, the same process was carried out using butanol until a clean surface was obtained. The ceramic BIT coatings were prepared from a Bi4Ti3O12 target (99.9%, Plasma Materials) using a CIT rf Alcatel HS 2000 sputtering system with a balanced magnetron 101.6 mm in diameter, described in a previous paper [9]. The deposition time for the films was 45 minutes. An rf electrical power of 150 W and an argon atmosphere with a working pressure of 7.5×10⁻³ mbar were used. The temperature of the substrate was varied between 300 and 400 °C, and a bias voltage of −280 V was applied during the deposition. The microstructure of the BIT coatings was determined using an MFP-3D-BIO atomic force microscope with a resolution of 0.5 nm. The images were obtained at 1 µm² in non-contact mode. Igor Pro software was used for the acquisition of the roughness and particle-size data. For the elemental chemical analysis, an Omicron Auger spectrometer was used, working at 3.0 keV.

Figure 1. AES spectrum of BixTiyOz coatings for samples grown at 350 °C and 150 W. Source: Authors

3. Results and discussion

3.1. Chemical and morphological characterization

Fig. 1 shows the chemical composition of the surface obtained through Auger electron spectroscopy. The spectrum shows peaks belonging to the atomic transitions Bi N6O45O45 at 103.1 eV [10], Ti L3M23M23 and Ti L23M23V at 382.4 eV [11] and 417.0 eV [10], respectively, O KL1L23 and O KVV at 491.2 eV [10] and 511.6 eV [13], respectively, and C KVV at 266 eV [10]. The energies associated with the oxygen peaks have been reported by Humbert [14] for titanium dioxide (TiO2). Moreover, taking into account that the kinetic energy of metallic bismuth is 102.0 eV and that the experimental energy shift is 1.1 eV, this could indicate that the Bi was forming an oxide.

Table 1. Chemical composition of BixTiyOz coatings.
Element     (% at)
Bismuth      8.0
Titanium    32.0
Oxygen      60.0
Source: Authors

The semi-quantitative analysis made from the line intensities and sensitivity factors at 3 keV [10] allowed us to establish the chemical composition of the films (see Table 1). According to the atomic concentration values at the surface, the chemical composition of the coatings was a mixture of Bi4Ti3O12 and Ti2O3. The formation of Ti2O3 is not in accordance with that reported by Humbert [14], since, as mentioned above, this author associated these results with TiO2 powders.
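As a rough illustration of this kind of semi-quantitative AES estimate, the sketch below converts peak intensities to atomic percentages using elemental sensitivity factors; the intensity and sensitivity values are illustrative placeholders chosen to reproduce the proportions of Table 1, not the values measured in this work.

```python
def atomic_percent(intensities: dict, sensitivity: dict) -> dict:
    """C_x = (I_x / S_x) / sum_i(I_i / S_i) * 100 for each element x."""
    weighted = {el: I / sensitivity[el] for el, I in intensities.items()}
    total = sum(weighted.values())
    return {el: 100.0 * w / total for el, w in weighted.items()}

# Illustrative peak intensities and sensitivity factors (placeholders):
print(atomic_percent({"Bi": 0.8, "Ti": 2.4, "O": 3.6},
                     {"Bi": 1.0, "Ti": 0.75, "O": 0.6}))
# -> {'Bi': 8.0, 'Ti': 32.0, 'O': 60.0}, consistent with Table 1
```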

Figure 2. AFM micrograph of BixTiyOz coating. Source: Authors.



Figure 3. Adherence test for the BixTiyOz coating. Source: Authors.

Fig. 2 shows the morphology of the surface of the BixTiyOz coating, where it is possible to see that there is a preferential direction of growth of the grains that form the coating, which allows us to establish that the growth mechanism is based on the formation of islands. The grain size and roughness obtained in this film were 50 nm and 11.28 nm, respectively.

3.2. Adherence and coefficient of friction

A scratch test was performed under the ASTM C1624-05 standard. Fig. 3 shows the different zones and effects of this test, which had a length of 8 mm produced by an applied load of the ascending type: a) is a 200× optical microscope image in which the first fissure (LC1), plastic deformation, and delamination can be seen, and b) is a SEM micrograph, where it is possible to observe, in more detail, the delamination (LC2), plastic deformation, and initial fissures of the coating. The observed cracks are due to the normal load and the displacement of the indenter: since this is a ceramic compound, it fractures, causing micro-cracks known as buckling of the coating, which occurs at a load of 2 N. Because the load was applied progressively, it is possible to see a split that occurs at approximately 4 N. Fig. 4 shows the curves of the coefficient of friction measured over 2500 m for the 316L stainless steel substrate and the bismuth titanate film.

Figure 4. Representative curve of the friction coefficient of the BixTiyOz coatings. Source: Authors.

Figure 5. Width of the wear track on films of BixTiyOz after 2420 m. Source: Authors.

Figure 6. Width of the wear track on substrate SS 316L after 2420 m. Source: Authors.

The results allow us to establish that in the first 250 m, the coefficient of friction of the substrate was approximately 4.6 times higher than that of the coating. This result can be explained by the polishing of the rough edges of the substrate; from 250 m to 1100 m, the values of the two coefficients are very similar. After 1200 m, the coefficient of friction of the coatings was higher than that of the substrate, which is probably due to the dry-lubricant action of bismuth. Figs. 5 and 6 show the measurements of the width of the wear track in five different areas. Using these measurements, the average width of the wear track, the wear volume, and the rate of wear were calculated; these values were determined using ZEN 2011 software. The widths of the tracks show that, for the same distance travelled, the wear track of the coated sample is thinner and smoother than that of the substrate. These results show that bismuth acts as a dry lubricating agent. Fig. 7 shows the calculation of the rate of wear.
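For readers reproducing this kind of analysis, the following hedged sketch (not the authors' ZEN 2011 procedure) estimates a disc wear volume from the average track width using the usual ball-on-disc geometry (as in ASTM G99) and an Archard-type specific wear rate; the load, ball radius, track radius, and track width below are assumed values, with only the 2420 m test distance taken from the figure captions.

```python
import math

def disc_wear_volume(track_radius_m: float, track_width_m: float,
                     ball_radius_m: float) -> float:
    """Volume lost by the disc (m^3), assuming negligible ball wear."""
    r, d, R = ball_radius_m, track_width_m, track_radius_m
    # Cross-sectional area of the circular-segment groove left by the ball.
    section = r**2 * math.asin(d / (2 * r)) - (d / 4) * math.sqrt(4 * r**2 - d**2)
    return 2 * math.pi * R * section

def specific_wear_rate(volume_m3: float, load_n: float, distance_m: float) -> float:
    """Archard-type wear rate k = V / (F * s), in m^3/(N*m)."""
    return volume_m3 / (load_n * distance_m)

# Assumed inputs: 3 mm track radius, 250 um track width, 3 mm ball, 5 N load.
V = disc_wear_volume(3e-3, 250e-6, 3e-3)
print(specific_wear_rate(V, 5.0, 2420.0))  # over the 2420 m test distance
```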

Figure 7. Calculation of the rate of wear after 2420m. Source: Authors.


The values of these rates allow us to establish that a substrate coated with BIT has a lower rate of wear than an uncoated substrate. This result indicates that BIT exhibits good tribological behavior.

4. Conclusions

Bismuth titanate films were grown via RF magnetron sputtering. Chemical analysis showed that the surface of the films was composed of TiO2 and Bi4Ti3O12. Tribological tests showed that the adherence exhibited a critical value at 4 N, since at this load the films were delaminated. Moreover, the coefficient of friction and rate of wear of the films were lower than those of the substrate.

Acknowledgements

The authors are grateful for the financial support of Patrimonio Autónomo Fondo Nacional de Financiamiento para la Ciencia, la Tecnología y la Innovación Francisco José de Caldas.

References

[1] He, H.Y., Processing dependences of microstructure of ferroelectric thin films, Recent Patents on Materials Science, 12 (2), pp. 19-25, 2008. DOI:10.1016/j.cossms.2009.01.003
[2] Olaya, J.J., Rodil, S.E. and Marulanda, D., Recubrimientos de nitruros metálicos depositados con UBM: Tecnología eficiente y ambientalmente limpia, DYNA, 77 (164), pp. 60-68, 2010.
[3] Marulanda, D. and Olaya, J.J., Unbalanced magnetron sputtering system for producing corrosion resistance multilayer coatings, DYNA, 79 (171), pp. 74-79, 2012.
[4] Zhang, H., Lü, M., Liu, S., Wang, L., Xiu, Z., Zhou, Y., Qiu, Z., Zhang, A. and Ma, Q., Preparation and photocatalytic property of perovskite Bi4Ti3O12 films, Materials Chemistry and Physics, 114 (2-3), pp. 716-721, 2009. DOI:10.1016/j.matchemphys.2008.10.052
[5] Jovalekic, C., Zdujié, M. and Atanasoka, L., Surface analysis of bismuth titanate by Auger and X-ray photoelectron spectroscopy, Journal of Alloys and Compounds, 469, pp. 441-444, 2009. DOI:10.1016/j.jallcom.2008.01.131
[6] Horita, S., Sasaki, S., Kitawa, O. and Horii, S., Ferroelectric properties of epitaxial Bi4Ti3O12 films deposited on epitaxial (1 0 0) Ir and (1 0 0) Pt on Si by sputtering, Vacuum, 66, pp. 427-433, 2002.
[7] Yamaguchi, M. and Nagatomo, T., Preparation and properties of Bi4Ti3O12 thin films grown at low substrate temperatures, Thin Solid Films, 348, pp. 294-298, 1999.
[8] Harjuoja, J., Väyrynen, S., Putkonen, M., Niinistö, L. and Rauhala, E., Crystallization of bismuth titanate and bismuth silicate grown as thin films by atomic layer deposition, Journal of Crystal Growth, 286, pp. 376-383, 2006. DOI:10.1016/j.jcrysgro.2005.10.020
[9] Alfonso, J.E., Torres, J. and Marco, J.F., Influence of the substrate bias voltage on the crystallographic structure and surface composition of Ti6Al4V thin films deposited by RF magnetron sputtering, Brazilian Journal of Physics, 36 (3B), pp. 994-996, 2006.
[10] Powell, C.J., Recommended Auger parameters for 42 elemental solids, Journal of Electron Spectroscopy and Related Phenomena, 185, pp. 1-3, 2012. DOI:10.1016/j.elspec.2011.12.001
[11] Wagner, C.D., Riggs, W.H., Davis, L.E., Moulder, J.F. and Muilenberg, G.E., Handbook of X-ray photoelectron spectroscopy, Perkin-Elmer Corporation, Physical Electronics Division, Eden Prairie, Minn. 55344, 1979.
[12] Militello, M.C. and Simko, S.J., Surface Science Spectra, 3, 395 P., 1994.
[13] Wagner, C.D., Zatko, D.A. and Raymond, R.H., Analytical Chemistry, 52, 1445 P., 1980.

[14] Humbert, P. and Deville, J.P., Oxygen Auger spectra of some transition-metal oxides: Relaxation energies and d-band screening, J. Phys. C: Solid State Phys., 20, pp. 4679-4687, 1987.

J.P. Parra, completed her BSc. degree in Physical Engineering in 2008 and an MSc degree in Engineering Materials and Processes in 2014, both from the Universidad Nacional de Colombia, Bogotá, Colombia. She has worked in the area of computational simulation with DFT, production of thin films, and high-temperature corrosion.

J.E. Alfonso, completed his BSc degree in Physics in 1987 and his MSc degree in Science - Physics in 1991, both from the Universidad Nacional de Colombia, Bogotá, Colombia. In 1997, he completed his PhD in Science - Physics at the Universidad Autonoma de Madrid, Spain. He has been linked to the Universidad Nacional de Colombia, Bogotá, Colombia, as a full professor since 2000, where his research has focused on materials science, particularly thin film processing and performance characterization, studying thin film optical, electrical, and mechanical properties.

J.J. Olaya, is an associate professor at the Departamento de Ingeniería Mecánica y Mecatrónica of the Universidad Nacional de Colombia, Bogotá, Colombia. He conducts research in the general area of development and applications of thin films deposited by plasma-assisted techniques, corrosion, and wear. He received his PhD in 2005 from the Universidad Nacional Autónoma de México, Mexico.

O. Piamba, is an associate professor at the Departamento de Ingeniería Mecánica y Mecatrónica of the Universidad Nacional de Colombia, Bogotá, Colombia. He conducts research in the general area of development and applications of coatings, corrosion, wear, and energy. He received his PhD in 2009 from the Universidad Federal Fluminense, Rio de Janeiro, Brasil.


Área Curricular de Ingeniería Geológica e Ingeniería de Minas y Metalurgia
Graduate programs offered:

Especialización en Materiales y Procesos; Maestría en Ingeniería - Materiales y Procesos; Maestría en Ingeniería - Recursos Minerales; Doctorado en Ingeniería - Ciencia y Tecnología de Materiales.
Further information:

E-mail: acgeomin_med@unal.edu.co  Telephone: (57-4) 425 53 68


Analysis of energy saving in industrial LED lighting: A case study

Ana Serrano-Tierz a, Abelardo Martínez-Iturbe b, Oscar Guarddon-Muñoz c & José Luis Santolaya-Sáenz d

a Universidad de Zaragoza, Zaragoza, España. anatierz@unizar.es
b Universidad de Zaragoza, Zaragoza, España. amiturbe@unizar.es
c Universidad de Zaragoza, Zaragoza, España. oguarddon@disenossantelices.com
d Universidad de Zaragoza, Zaragoza, España. jlsanto@unizar.es

Received: September 5th, 2014. Received in revised form: April 16th, 2015. Accepted: April 22nd, 2015

Abstract
The present study shows the economic savings and environmental advantages of LED lighting technology for industrial applications. For this purpose, a case study was performed in which metal halide lights were replaced with LED lights. To validate the replacement of 400W metal halide luminaires with 200W LED luminaires, lighting simulations and field measurements with a lux meter were carried out. The results indicate that both luminaires are comparable, yielding significant energy savings close to 50%. This research demonstrates that LED lighting technology offers high-performance lighting solutions that optimize energy savings while reducing maintenance costs, with a significantly lower total cost of ownership (TCO), and that it increases the life span of the luminaires. From the environmental point of view, it means a significant reduction in CO2 emissions and in the disposal of toxic waste such as mercury.
Keywords: industrial LED lighting; high bay LED lighting; LED vs. metal halide; total cost of ownership (TCO)

Análisis de ahorro energético en iluminación LED industrial: Un estudio de caso

Resumen
El presente estudio muestra el ahorro económico y las ventajas medioambientales que supone la iluminación industrial con tecnología LED. Se ha planteado un estudio de caso en el que luminarias de halogenuros metálicos han sido sustituidas por luminarias LED. Para validar la sustitución de luminarias con halogenuros metálicos de 400W por LED de 200W, se han efectuado las simulaciones luminotécnicas y mediciones de campo mediante luxómetro. Los resultados obtenidos indican que ambas luminarias son equiparables, obteniendo un importante ahorro energético cercano al 50%. Esta investigación demuestra que la tecnología LED ofrece soluciones de iluminación de alto rendimiento que optimizan el ahorro energético reduciendo a su vez costes de mantenimiento con un coste total de propiedad significativamente menor, incrementando la esperanza de vida útil de las luminarias. Desde el punto de vista medioambiental supone una importante reducción en emisiones de CO2 y eliminación de residuos tóxicos como el mercurio.
Palabras clave: iluminación LED industrial; campanas LED industriales; LED vs. halogenuro metálico; coste total de propiedad (CTP).

1. Introduction

Owing to the evolution of LED technology since white LED-based lighting was developed in Japan in 1997 [1], the world market has been demanding ever more strongly the transformation of conventional lighting sources into more efficient and durable solutions based on LED lighting systems. As a result, the R&D&I effort of companies in the lighting sector has focused

mainly on achieving lighting systems that perform well, are highly efficient and remain affordable [2]. The possibility of offering high-performance solutions in terms of energy savings, while eliminating maintenance costs and providing a long-lasting system, has turned LED technology into one of the most competitive technological drivers with the greatest future prospects in the lighting sector. Energy efficiency is thus conceived as a methodology for analysing and addressing the problems of

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 231-239. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.45442



ever-growing consumption [3]. By 2020, 75% of lighting is expected to be LED-based [4]. It should also be noted that this technology contributes directly to fighting climate change by reducing greenhouse gas emissions, in line with the decision adopted by the European Parliament on June 17th, 2010 [5], which set the objective of saving 20% of primary energy consumption by 2020. In this regard, public awareness of the danger of global warming, together with more responsible consumption habits and the reduction of toxic and hazardous waste, has encouraged political measures that help preserve the environment and favour the deployment of greener technology [6] that is efficient thanks to savings in the use of natural resources and the reduction of CO2 emissions [7].
Energy efficiency is a key element in the development of economies in global markets. The lighting sector could save 45% of the electricity it consumes through the professional use of LED technology [8]. LED lighting in industrial applications is considered to offer large energy savings, given the power involved, the surface to be illuminated and the hours of use. For this reason, a growing number of companies are currently replacing traditional lighting systems with this type of technology. Until the advent of the LED, industrial lighting had mainly used metal halide and fluorescent lamps. The importance of introducing LED lighting in the industrial sector stems from the need to optimize operating costs in order to increase competitiveness. According to the United States Office of Energy Efficiency and Renewable Energy, switching to LED lighting technology could save $250 billion in energy costs over the next two decades, reduce lighting electricity consumption by about 50%, and avoid 1,800 million metric tons of carbon dioxide emissions [9].
So far, most research on LED lighting as a replacement for traditional technologies has focused on urban lighting [10-13]. Although generic studies exist on replacing traditional lighting sources with LEDs [14], scientific research addressing the industrial sector is very scarce. Since cost reduction in the industrial sector is one of the main fields of action for increasing companies' competitiveness, the present research evaluates the economic savings and environmental advantages of LED-based industrial lighting through a case study in which metal halide luminaires were replaced with LED luminaires.

2. Methodology of the study

2.1. Comparison of technologies and analysis criteria

The method for evaluating LED lighting against traditional systems consists of comparing

Figure 1. Comparison of the efficiency (lm/W) of four commercially available technologies: MH, induction, TL5 and LED. Source: The authors

relevant technical parameters, such as: a) efficiency, b) luminosity, c) luminaire service life, and d) temperature dependence. To this end, several types of commercial lamps were compared: metal halide (400W HPI Plus, Philips), TL5 fluorescent tubes (4x80W Master TL5 HO 80W/840 1SL, Philips), induction fluorescent (250W Icetron, Sylvania-Osram) and LED (200W Luxeon Rebel ES, Philips). Sodium vapour lamps were discarded because their low colour rendering index (CRI) limits their use to outdoor lighting.

2.1.1. Efficiency comparison

Fig. 1 shows the efficiency (lm/W) of four technologies available on the market. The values in the figure show a significant difference of more than 50% between the luminous efficiency of the LED and the other technologies analysed. At present, only some types of sodium vapour lamps can reach efficiencies comparable to the LED, although their main application is road lighting.

2.1.2. Luminosity comparison including luminaires

It is necessary to distinguish between the lumens emitted by a lamp (bulb) and those actually available when that same lamp is mounted in a luminaire. This distinction is important for a correct analysis of the luminous efficiency finally obtained. The radial light distribution of metal halide and fluorescent lamps requires reflectors to redirect the luminous flux towards the points of interest. Depending on the type of luminaire, the efficiency of the reflector and the operating conditions of the LED, a luminous efficiency coefficient LOR (light output ratio) is defined. In lamps with radial distribution this coefficient can vary widely, from 60-75% of the luminous flux, whereas in the LED, owing to its directional distribution, the light is used almost fully, with an LOR close to 1. The initial luminaire lumens are obtained by multiplying the initial lamp lumens by the LOR coefficient of the luminaire, LluminaireI = LlampI x LOR, the lamp




lumens themselves being time-dependent. In addition to the luminaire's luminous efficiency coefficient, Fig. 2 analyses the evolution of the luminous flux over time, based on the lamp depreciation curves provided by the manufacturers. Fig. 2 shows that all the technologies undergo a depreciation of luminosity over time. Metal halide (MH) technology experiences the sharpest decay and the shortest life. Both TL5 fluorescence and induction show a strong initial decay that later flattens out; at 20 kh the TL5 reaches its end of life. For the 200W LED and the 4xTL5 80W, which start from similar luminosities, the steeper initial slope of the TL5 causes its luminous flux to drop earlier than that of the LED, whose slope is smaller. The two longest-lasting technologies are induction and LED. The great disadvantage of induction is that it does not improve on the efficiency of TL5 fluorescence; the only

improvement is in lifetime, obtained by dispensing with cathodes for ignition. Compared with the other technologies analysed, the LED behaves uniformly, with better efficiency, less decay and longer life. Table 1 complements the information in Fig. 2 as follows: a) lamp types and power used in the study; b) initial lamp lumens (LlampI); c) typical luminous efficiency coefficients of the luminaire (LOR); d) initial luminaire lumens (LluminaireI); e) luminaire lumens after 5,000 h of use; f) luminaire lumens after 10,000 h of use; g) 80% of the initial luminaire luminosity, taken as the reference for service time; and h) the service time calculated for a luminous depreciation to 80% of the initial value. The arithmetic behind items c), d) and g) is illustrated in the sketch below.
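As a hedged illustration of that arithmetic (a minimal sketch in C; the values come from Table 1, the function and variable names are ours, not the study's):

/* Minimal check of the luminaire-lumen arithmetic of Table 1. */
#include <stdio.h>

/* initial luminaire lumens: lamp lumens reduced by the LOR coefficient */
static double luminaire_lumens(double lamp_lumens, double lor)
{
    return lamp_lumens * lor;
}

int main(void)
{
    /* MH 400W lamp from Table 1: 32500 lm, LOR = 0.75 */
    double initial   = luminaire_lumens(32500.0, 0.75);  /* 24375 lm */
    double threshold = 0.80 * initial;                   /* 19500 lm */
    printf("initial: %.0f lm, 80%% service threshold: %.0f lm\n",
           initial, threshold);
    /* Table 1 reports the MH luminaire at exactly 19500 lm after 5000 h,
     * which is why its service time is listed as 5000 h. */
    return 0;
}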

Figure 2. Comparative plot of luminous depreciation (lm) over time for the luminaires studied: MH 400W, induction 250W, 4xTL5 80W, LED 200W and LED 250W. Source: The authors

Table 1. Luminous depreciation of luminaires with lamps of different technologies.
Lamp type         Initial lamp      LOR    Initial luminaire     Lumens at   Lumens at   Lumens at 80%   Time to 80%
                  lumens (LlampI)          lumens (LluminaireI)  5,000 h     10,000 h    (luminaire)     depreciation
MH 400W           32500             0.75   24375                 19500       17062       19500           5,000 h
Induction 250W    20000             0.75   15000                 13200       12750       12000           20,000 h
TL5 4x80W         26200 (4x6550)    0.76   19912                 18518       18119       15930           22,000 h
LED 200W @ 0.7A   22400             0.89   19800                 19206       18810       15840           45,000 h
LED 250W @ 0.7A   28000             0.90   25200                 24444       23940       20160           45,000 h
Source: The authors

The values given in Table 1 for the MH 400W and LED 200W lamp types refer to the case study considered here.

2.1.3. Comparison of the influence of temperature between luminaires

Metal halide lamps reach high temperatures of more than 250 °C. This causes considerable




heating of the working environment, which can become a problem. An increase in ambient temperature will raise the voltage and power of the lamp; it will also accelerate the ageing and corrosion of components and may even cause premature lamp failure. Induction and T5 fluorescent lamps use amalgams and mercury gas that are strongly dependent on ambient temperature, reaching their maximum luminous flux at around 35 °C. Temperatures above 50 °C cause a marked drop in luminosity due to ultraviolet reabsorption in the mercury. At low temperatures the emitted light is drastically reduced, the minimum working temperature being -5 °C at low frequency and -10 °C at high frequency. The LED luminaires used in the tests were designed to dissipate the heat emitted by the LEDs into the environment, keeping the semiconductor junction at safe operating temperatures (Tj < 85 °C), in order to avoid luminous decay from excess temperature and to reduce LED ageing. An efficient thermal design guarantees a suitable temperature under the specified ambient conditions. Using LEDs at low temperatures favours their efficiency and life span; in that case the power supply must be designed to operate under those conditions.

2.2. Lighting calculations

For the lighting simulation, the freely available program Dialux 4.12.0.1 was used [15], into which the models provided by the manufacturers were loaded. In this way the results obtained with two types of lamps were compared: a) 400W metal halide and b) 200W LED.
a) 400W metal halide. The technical data of the luminaire with the Philips HPI 400W metal halide bulb are as follows. Manufacturer: Philips. Model: HPK888 P-MB 1xHPI-P400W-BUS R-L reflector MB. Luminous flux of the lamp at 5,000 h: 32500x0.8 = 26000 lm. Light output ratio: LOR = 0.77. Initial luminous flux of the luminaire: 20020 lm. Total consumption: 428W. Simulation conditions: luminaire centred and suspended at a height of 6 m under a 10 m ceiling; distance to walls: 7.5 m; reflectance: ceiling 70%, walls 50%, floor 20%; maintenance factor: 0.8. Table 2 and Fig. 3 show the simulation results for the metal halide luminaire described.

Table 2. Simulation results, 400W MH.
Em (lx)   Emin (lx)   Emax (lx)   Emin/Em   Emin/Emax
73        5.96        327         0.081     0.018
Source: The authors

Figure 3. a) Left: iso-line plot of the light distribution (lux) of the 400W metal halide luminaire on the floor plane (15x15m). b) Right: colour-graded plot of the light distribution (lux) of the 400W metal halide luminaire (0-500 lux). Source: The authors




Fig. 3, right, shows a three-dimensional representation of the scene to be lit (width, depth, height: 15m x 15m x 10m); the floor shows, on a colour scale, the average illuminance (lux) obtained on it. Fig. 3, left, shows the isolux line distribution on the floor plane. Table 2 gives the mean illuminance, the extreme values, and the minimum/mean and minimum/maximum ratios.
b) Luminaire with 200W LED. The simulation uses a luminaire fitted with Philips Lumileds SMD Rebel ES 200W LED chips. Manufacturer: DSLED. Model: CL14200 5000K, 120° reflector. Luminous flux of the lamp: 22400 lm.

Light output ratio: LOR = 0.884. Initial luminous flux of the luminaire: 19800 lm. Total consumption: 220W. Simulation conditions identical to the previous case. Table 3 and Fig. 4 show the simulation results for the 200W LED luminaire.

Table 3. Simulation results, 200W LED.
Em (lx)   Emin (lx)   Emax (lx)   Emin/Em   Emin/Emax
73        4.37        593         0.060     0.007
Source: The authors

Figure 4. a) Left: iso-line plot of the light distribution (lux) of the 200W LED on the floor plane (15x15m). b) Right: colour-graded plot of the light distribution (lux) of the 200W LED (0-500 lux). Source: The authors

Fig. 4, right, shows a three-dimensional representation of the scene to be lit (width, depth, height: 15m x 15m x 10m); the floor shows, on a colour scale, the average illuminance (lux) obtained on it. Fig. 4, left, shows the isolux line distribution on the floor plane. Table 3 gives the mean illuminance, the extreme values, and the minimum/mean and minimum/maximum ratios. Comparing simulation results a) and b), the light distribution is more concentrated for the LED luminaire, which achieves an illuminance level at the centre almost twice that of the metal halide. This is due to the greater directionality of the LED flux compared with metal halide bulbs, which emit over 360° and need the luminous flux to be redirected with specific reflectors. Likewise, the mean illuminance level Em is identical in both simulations, which makes the two solutions comparable.

2.3. Case study

The aim of this section is to quantify the economic savings achieved by replacing an existing MH lighting system with LED technology. First, it is shown that the lighting performance of a luminaire with a 400W MH lamp is comparable to that of a luminaire with a 200W LED lamp. Then, the impact of replacing the lighting system on energy savings and costs is analysed.

2.3.1. Equivalence between luminaires

To verify that replacing a 400W metal halide luminaire with a 200W LED one is equivalent in lighting terms, both were tested on the same scene. The scene consists of a room with the following dimensions (width, depth, height):




15m x 15m x 10m. The luminaire is centred in the room and placed at a height of 6 m. Luminosity measurements were taken with a Cablematic LX-1010BS lux meter; the measurements on the 400W luminaire were made after it had 5,000 hours of use. The measurements were taken on the floor plane: the first directly on the luminaire's vertical axis, and the rest at points

belonging to orthogonal axes intersecting concentric circumferences with radii from 1 to 5 metres, in 1 m steps. The mean illuminance of each zone (Em) was calculated by averaging the four measurements located at the points of the circle described above. The results are given in Table 4, which also includes the simulation results.

Table 4. Simulated and experimental mean illuminance on the floor plane with the 400W MH and 200W LED luminaires.
Luminaire          Em max (lux)   Em R=1m   Em R=2m   Em R=3m   Em R=4m   Em R=5m
400W HM            317            275       235       180       110       68
400W simulation    327            292       251       195       120       72
200W LED           605            450       285       185       110       70
200W simulation    593            430       275       176       107       67
Source: The authors
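A minimal sketch (ours, with the Table 4 values hard-coded) of the average measurement-simulation deviation discussed in the next paragraph:

/* Mean relative deviation between measured and simulated Em (Table 4);
 * illustrates the roughly 5% figure quoted in the text. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double hm_meas[]  = {317, 275, 235, 180, 110, 68};
    double hm_sim[]   = {327, 292, 251, 195, 120, 72};
    double led_meas[] = {605, 450, 285, 185, 110, 70};
    double led_sim[]  = {593, 430, 275, 176, 107, 67};
    double dev = 0.0;
    for (int i = 0; i < 6; i++) {
        dev += fabs(hm_meas[i]  - hm_sim[i])  / hm_sim[i];
        dev += fabs(led_meas[i] - led_sim[i]) / led_sim[i];
    }
    printf("mean deviation: %.1f%%\n", 100.0 * dev / 12.0);  /* about 5 */
    return 0;
}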

Table 4 shows that the mean illuminance values of the metal halide luminaire are slightly lower than those of the simulated model, while those of the LED luminaire are higher. This slight average deviation of 5% is attributed to the usual manufacturing tolerances of components and bulbs. The fact that the number of switch-on cycles can affect the service life and lighting performance of a metal halide lamp was not taken into account; this is a point in favour of LED technology, which is not affected by the number of switch-on cycles. The simulation results are similar to the real measurements, which validates the simulation models used. Table 4 shows that for radii equal to or greater than 2 m the mean illuminance values are of the same order, and Table 1 shows that the total lumens at 5,000 h are comparable; the mean illuminance of both luminaires is therefore comparable. Depending on the type of luminaire analysed, the type of reflector and its ageing, different distributions and mean illuminances will be obtained. In this case the measurements (Table 4) confirm that a 400W metal halide luminaire with an average of 5,000 h of use can be replaced by a 200W LED luminaire, obtaining significant energy savings close to 50%. The LED luminaires will also last longer, owing to their slow ageing compared with metal halide ones, which degrade quickly. According to the manufacturer's data, the service life of the LED luminaire exceeds 50,000 h, so the investment would be amortized over a payback period that depends on the hours of use, after which a profit would be obtained until the end of its life.

2.3.2. Energy savings

The difference in consumption between the metal halide and LED luminaires described above is 208W, which amounts to 48.6% savings in energy consumption. Depending on the power supply and ballast used, the efficiency of the luminaire, and therefore the savings rate, may vary slightly.

Table 5. Calculation of the energy savings obtained by replacing MH400W with LED200W.
          P (W)   Hours/day   kWh/day   Working days   Hours/year   kWh/year
MH400W    428     24          10.27     220            5280         2259.8
LED200W   220     24          5.28      220            5280         1161.6
Savings   208                 4.99                                  1098.2
Source: The authors
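A minimal check of the arithmetic in Table 5 (our sketch; the wattages and schedule are those of the case study):

/* Annual energy consumption and savings of Table 5. */
#include <stdio.h>

static double kwh_per_year(double watts, double hours_per_year)
{
    return watts * hours_per_year / 1000.0;   /* W x h -> kWh */
}

int main(void)
{
    const double hours = 24.0 * 220.0;            /* 5280 h/year */
    double mh  = kwh_per_year(428.0, hours);      /* 2259.8 kWh */
    double led = kwh_per_year(220.0, hours);      /* 1161.6 kWh */
    printf("annual savings: %.1f kWh (%.1f%%)\n",
           mh - led, 100.0 * (428.0 - 220.0) / 428.0);  /* 1098.2 kWh, 48.6%% */
    return 0;
}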

Table 5 shows the energy savings per luminaire obtained for an installation with 5,280 h of annual operation; these savings are close to 50%. According to data published by the Instituto para la Diversificación y Ahorro de la Energía (IDAE), the industrial sector consumes 31% of the energy in Spain [16], and industrial lighting accounts for 15% of its electricity consumption [17]. Implementing LED lighting in industry could therefore reduce companies' electricity consumption by around 7.5%. Considering that in the period 2003-2013 the price of electricity in Spain rose by 77.39%, and that the price of energy in Europe is double that in the USA [17], actions that bring energy savings and improve the competitiveness of the Spanish industrial sector are considered a priority.

2.3.3. Economic savings

The economic savings of the investment are calculated from the factors shown in Table 6: energy savings, savings on bulbs and savings on maintenance. The main savings factor is energy, although the costs of maintenance and bulb replacement can be high depending on the type of installation, especially those with difficult access. The calculations in Table 6 use the energy savings obtained in Table 5, valued at a cost of 0.14 €/kWh plus taxes, and an estimate of bulb and maintenance costs according to the replacement period defined by their service life.



Table 6. Calculation of the average annual savings from replacing MH400W with LED200W.
Savings                       Year 1     10-year average (CPI 5%)
Annual energy savings         195.53 €   245.94 €
Annual savings on bulbs       22.70 €    28.56 €
Annual maintenance savings    47.24 €    59.42 €
Total annual savings          265.47 €   333.91 €
Source: The authors

Table 8. Calculation of the TCO for LED200W and HM400W.
Cost accumulated over the service life (10 years)   LED 200W   HM 400W
Accumulated luminaire purchase cost                 601.92     227.04
Accumulated energy consumption cost                 2601.24    5297.05
Accumulated bulb cost                               0          285.52
Accumulated maintenance cost                        321.49     915.67
Total cost of ownership (TCO)                       3524.65    6725.28
Source: The authors

Figure 5. Percentage distribution of the average total savings obtained: energy 74%, bulbs 8%, maintenance 18%. Source: The authors

Table 7. Parameters of the LED200W investment.
Investment cost of the LED200W:            570 €
Service life of the LED200W:               50,000 h
Average total annual savings:              333.91 €
Payback period of the investment:          1.71 years
Profit of the investment over 10 years:    2,592.03 €
Source: The authors

Figure 6. Comparison of the total cost of ownership for the LED200W and HM400W. Source: The authors

Fig. 5 shows the percentage distribution of each type of saving with respect to the total obtained. Table 7 calculates the payback period of the LED200W investment using the average annual savings obtained in Table 6; the total profit of the investment takes the service life of the luminaire into account. A detailed analysis of the long-term savings of an installation requires quantifying the Total Cost of Ownership (TCO). This indicator defines the total cost associated with purchasing and maintaining the product during its period of use. Its calculation depends on several direct and indirect factors, such as [18]: a) total purchase cost of the luminaire and bulbs over the years of operation of the installation; b) total cost of maintenance, cleaning, equipment replacement and lifting platforms;

Figure 7. Percentage distribution of the total cost of ownership for the LED200W: energy consumption 74%, luminaire 17%, maintenance 9%, bulbs 0%. Source: The authors

c) energy consumption and contracted power; and d) carbon footprint and recycling taxes. Besides the energy savings, another notable factor is the large reduction in the Total Cost of Ownership compared with traditional lighting technologies, which can produce payback periods of under two years [19]. A sketch of this investment arithmetic follows.
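A minimal sketch (ours; the figures are taken from Tables 6, 7 and 8):

/* Payback and life-cycle profit (Table 7) and TCO saving (Table 8). */
#include <stdio.h>

int main(void)
{
    const double investment    = 570.0;     /* LED200W luminaire cost, EUR */
    const double annual_saving = 333.91;    /* average annual saving, Table 6 */
    const double life_hours    = 50000.0;   /* LED service life */
    const double hours_year    = 5280.0;    /* 24 h/day x 220 working days */

    double payback = investment / annual_saving;                 /* 1.71 years */
    double profit  = annual_saving * (life_hours / hours_year)
                   - investment;                                 /* 2592.03 EUR */

    const double tco_led = 3524.65, tco_mh = 6725.28;            /* Table 8 */
    double tco_saving = 100.0 * (tco_mh - tco_led) / tco_mh;     /* 47.6% */

    printf("payback %.2f years, profit %.2f EUR, TCO saving %.1f%%\n",
           payback, profit, tco_saving);
    return 0;
}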









Figure 8. Percentage distribution of the total cost of ownership for the HM400W: energy consumption 79%, maintenance 14%, bulbs 4%, luminaire 3%. Source: The authors


Table 8 calculates the TCO of one LED200W luminaire and one HM400W luminaire, estimating the costs over 10 years of service with 5,280 h of annual operation. The difference in total cost of ownership between the LED200W and the MH400W shown in Fig. 6 amounts to 47.6% savings for the installation. Comparing the cost percentages of the LED200W and HM400W in Figs. 7 and 8, the LED200W has a lower cost in every item except the luminaire itself, both in energy consumption and in maintenance.


2.3.4. Environmental advantages

Replacing a 400W metal halide luminaire with a 200W LED one yields an energy saving of approximately 208W which, expressed as a reduction of CO2 emissions, amounts to 85.28 g of CO2 per hour of operation, taking the European average of 0.41 kg of CO2/kWh as a reference [17]. Another important aspect is the absence of hazardous toxic waste such as mercury; for example, each Philips MASTER HPI Plus 400W/645 BU-P E40 1SL bulb contains 67.2 mg.
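As a quick check of that figure (our arithmetic, using the European average emission factor cited from [17]):

0.208 kW x 0.41 kg CO2/kWh = 0.08528 kg CO2/h = 85.28 g CO2/h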


3. Conclusions

The recent development of LED technology for high-power applications in industrial lighting offers the possibility of optimizing lighting costs, reducing energy consumption by around 50%. The case study shows that replacing a 400W metal halide luminaire with a 200W LED one produces a comparable mean illuminance. The long-term savings in total cost of ownership (TCO) provided by the change to LED technology, thanks to its long service life, allow companies to be more competitive, achieving significant benefits in the reduction of the fixed costs of the installation. For an installation in continuous 24-hour service, the payback period is under two years. From the environmental point of view, LED lighting technology contributes to reducing CO2 emissions and to eliminating toxic waste such as mercury.



References

[1] Taguchi, T., Present status of energy saving technologies and future prospect in white LED lighting. IEEJ Transactions on Electrical and Electronic Engineering, 3 (1), pp. 21-26, 2008. DOI: 10.1002/tee.20228
[2] Faranda, R., Guzzetti, S., Lazaroiu, C. and Leva, S., LEDs lighting: Two case studies. UPB Sci. Bull. Ser. C, 73 (1), pp. 199-210, 2011.
[3] Carrillo, G., Andrade, J., Barragán, A. and Astudillo, A., Impact of electrical energy efficiency programs, case study: Food processing companies in Cuenca, Ecuador. DYNA, 81 (184), pp. 41-48, 2014. DOI: 10.15446/dyna.v81n184.40821
[4] Philips, Convierta la oficina en un lugar agradable. Soluciones LED para crear lugares de trabajo estimulantes y ecológicos. [Online]. [Date of reference: January 9th of 2014]. Available at: http://www.lighting.philips.es/connect/tools_literature/assets/pdfs/Leds%20en%20oficinas.pdf
[5] Parlamento Europeo, Directiva 2012/27/UE del Parlamento Europeo y del Consejo. [Online]. [Date of reference: January 9th of 2014]. Available at: http://www.boe.es/doue/2009/140/L00136-00148.pdf
[6] European Commission, Recast of the RoHS Directive. [Online]. [Date of reference: April 2nd of 2014]. Available at: http://ec.europa.eu/environment/waste/rohs_eee/
[7] Ministerio de Industria, Energía y Turismo, Gobierno de España, Plan de Acción de Ahorro y Eficiencia Energética 2011-2020. [Online]. [Date of reference: January 15th of 2014]. Available at: http://www.idae.es/index.php/id.663/mod.pags/mem.detalle
[8] Ministerio de Industria, Energía y Turismo, Gobierno de España, El sector de la iluminación podría ahorrar un 45% de la energía eléctrica consumida, con una utilización profesional de la nueva tecnología leds. [Online]. [Date of reference: January 15th of 2014]. Available at: http://www.idae.es/index.php/id.99/mod.noticias/mem.detalle
[9] ENERGY.GOV, Office of Energy Efficiency & Renewable Energy, Why SSL. [Online]. [Date of reference: January 15th of 2014]. Available at: http://energy.gov/eere/ssl/why-ssl
[10] Orzáez, M.J.H. and de Andrés-Díaz, J.R., Análisis comparativo y justificativo para el cambio a leds en instalaciones con lámparas de halogenuro metálico: Un paso más hacia la eficiencia energética en iluminación urbana. DYNA, 89 (2), pp. 165-171, 2014.
[11] Nuttall, D.R., Shuttleworth, R. and Routledge, G., Design of a LED street lighting system. PEMD 2008, 4th IET Conference on Power Electronics, Machines and Drives, York, UK, Institution of Engineering and Technology, 2008, pp. 436-440. DOI: 10.1049/cp:20080559
[12] Long, X., Liao, R. and Zhou, J., Development of street lighting system-based novel high-brightness LED modules. IET Optoelectronics, 3 (1), pp. 40-46, 2009. DOI: 10.1049/iet-opt:20070076
[13] Ylinen, A.M., Tähkämö, L., Puolakka, M. and Halonen, L., Road lighting quality, energy efficiency, and mesopic design - LED street lighting case study. Leukos, 8 (1), pp. 9-24, 2011. DOI: 10.1582/LEUKOS.2011.08.01.001
[14] Cheng, Y.K. and Cheng, K.W.E., General study for using LED to replace traditional lighting devices. ICPESA '06, 2nd International Conference on Power Electronics Systems and Applications, Hong Kong, Hong Kong Polytechnic University, 2006, pp. 173-177. DOI: 10.1109/PESA.2006.343093
[15] DIAL, DIALux. [Online]. [Date of reference: January 15th of 2014]. Available at: www.dial.de/DIAL/es/dialux/download.html
[16] Ministerio de Industria, Energía y Turismo, Gobierno de España, Industria. Sector Industria y el consumo de Energía. [Online]. [Date of reference: March 3rd of 2014]. Available at: http://www.idae.es/index.php/idpag.20/relmenu.337/mod.pags/mem.detalle
[17] Oficina Verde, Universidad de Zaragoza, Energía en la industria. Jornadas de eficiencia energética y mercados energéticos. [Online]. [Date of reference: February 17th of 2014]. Available at: http://oficinaverde.unizar.es/sites/oficinaverde.unizar.es/files/users/ofiverde/Ahorro%20de%20energ%C3%ADa%20en%20la%20industria%20%5BModo%20de%20compatibilidad%5D.pdf
[18] Philips, Calculadora Total Cost of Ownership. [Online]. [Date of reference: January 15th of 2014]. Available at: http://www.philipstco.com/tco/?l=es
[19] Schratz, M., Gupta, C., Struhs, T.J. and Gray, K., Reducing energy and maintenance costs while improving light quality and reliability with LED lighting technology. Conference Record of the 2013 Annual IEEE Pulp and Paper Industry Technical Conference (PPIC), United States, IEEE, 2013, pp. 43-49. DOI: 10.1109/PPIC.2013.6656043

A. Serrano-Tierz holds a PhD from the Universidad de Zaragoza, Spain, and is an active member of the Instituto Universitario de Investigación en Ingeniería de Aragón, Spain. She is a professor at the Universidad de Zaragoza. Her research activity focuses on methodologies applied to product design. She has collaborated with companies in the lighting sector for more than 10 years and holds 37 industrial design registrations in this field. ORCID: orcid.org/0000-0002-5169-7042

A. Martínez-Iturbe is a full professor of Electronic Technology at the Universidad de Zaragoza, Spain, and a member of the Power Electronics and Microelectronics Group of the Instituto de Investigación en Ingeniería de Aragón. He has participated in 38 publicly funded projects and 29 research contracts, has 21 publications and 9 patents, has taken part in 111 conferences and has supervised 7 doctoral theses. He has been Area Director of the Instituto Tecnológico de Aragón, Spain, Secretary of the Departamento de Ingeniería Eléctrica Electrónica e Informática, and Deputy Director of the Escuela de Ingenieros de Zaragoza. His research area lies within power electronics and electrical drives. ORCID: orcid.org/0000-0002-8797-0813

O. Guarddon is an industrial engineer with a specialization in electronics from the Universidad de Zaragoza, Spain, and heads the R&D&I department of the lighting company Diseños Santelices. His research profile focuses on LED lighting design for technical and decorative applications. He has collaborated on several research projects with the Universidad de Zaragoza.

J.L. Santolaya-Sáenz holds a PhD in Industrial Engineering from the Universidad de Zaragoza, Spain. He teaches in the Departamento de Ingeniería de Diseño y Fabricación of the Universidad de Zaragoza, Spain. He carries out his research in GediX, an emerging research group devoted to the study and application of innovative methodologies in product design. He currently collaborates with other research groups of the Escuela de Ingeniería y Arquitectura of the Universidad de Zaragoza on prototype design and experimental facility projects. ORCID: orcid.org/0000-0001-7041-483X


Área Curricular de Ingeniería Eléctrica e Ingeniería de Control
Graduate programs offered:

Maestría en Ingeniería - Ingeniería Eléctrica.
Further information:

E-mail: ingelcontro_med@unal.edu.co  Telephone: (57-4) 425 52 64


Matrix multiplication with a hypercube algorithm on multi-core processor cluster

José Crispín Zavala-Díaz a, Joaquín Pérez-Ortega b, Efraín Salazar-Reséndiz b & Luis César Guadarrama-Rogel b

a Facultad de Contaduría, Administración e Informática, Universidad Autónoma del Estado de Morelos, Cuernavaca, México. crispin_zavala@uaem.mx
b Departamento de Ciencias Computacionales, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca, México. jpo_cenidet@yahoo.com.mx, efras.salazar@cenidet.edu.mx, cesarguadarrama@cenidet.edu.mx

Received: September 10th, 2014. Received in revised form: March 9th, 2015. Accepted: March 17th, 2015

Abstract
The matrix multiplication algorithm of Dekel, Nassimi and Sahani, or hypercube algorithm, is analysed, modified and implemented on a multi-core processor cluster, where the number of processors used is smaller than the n³ required by the algorithm. 2³, 4³ and 8³ processing units are used to multiply matrices of the order of 10x10, 10²x10² and 10³x10³. The results of the mathematical model of the modified algorithm and those obtained from the computational experiments show that it is possible to reach acceptable speedups and parallel efficiencies, depending on the number of processing units used. They also show that the influence of the external communication link among the nodes is reduced if a combination of the available communication channels among the cores of a multi-core cluster is used.
Keywords: hypercube algorithm; multi-core processor cluster; matrix multiplication

Multiplicación de matrices con un algoritmo hipercubo en un cluster con procesadores multi-core

Resumen
Se analiza, modifica e implementa el algoritmo de multiplicación de matrices de Dekel, Nassimi y Sahani o hipercubo en un cluster de procesadores multi-core, donde el número de procesadores utilizado es menor al requerido por el algoritmo, de n³. Se utilizan 2³, 4³ y 8³ unidades procesadoras para multiplicar matrices de orden de magnitud de 10x10, 10²x10² y 10³x10³. Los resultados del modelo matemático del algoritmo modificado y los obtenidos de la experimentación computacional muestran que es posible alcanzar rapidez y eficiencias paralelas aceptables, en función del número de unidades procesadoras utilizadas. También se muestra que la influencia del enlace externo de comunicación entre los nodos disminuye si se utiliza una combinación de los canales de comunicación disponibles entre los núcleos en un cluster multi-core.
Palabras clave: algoritmo hipercubo; cluster de procesadores multi-core; multiplicación de matrices

1. Introduction

Multi-core clusters are formed by nodes or processors, often heterogeneous, connected by a dynamically configurable, high-speed bus architecture that allows point-to-point connections among the processing units. The nodes are composed of processors, and the processors in turn are composed of multiple cores, each of which can run more than one process at a time [1]. These features make it possible to propose solutions to various problems using parallel and concurrent computing [2,3], such as the one presented in this work: the

matrix multiplication with the DNS (Dekel, Nassimi, Sahani) or hypercube algorithm on a multi-core cluster. The execution time of the DNS or hypercube algorithm for multiplying n x n matrices is polylogarithmic, T∞(log n), and it requires a polynomial number of processors, Hw(n³); these two parameters classify it as an efficiently parallelizable algorithm [4]. However, its implementation on a multi-core cluster faces the following limitations: first, the number of available processing units is finite and smaller than the Hw(n³) required by the hypercube algorithm for a given matrix size; second, the processing units are spread over various nodes and processors.

© The author; licensee Universidad Nacional de Colombia. DYNA 82 (191), pp. 240-246. June, 2015 Medellín. ISSN 0012-7353 Printed, ISSN 2346-2183 Online DOI: http://dx.doi.org/10.15446/dyna.v82n191.45513



The speed of the communication links among the cores also differs: the cores can be on the same processor, in the same node, or in different nodes or processors. Similar modifications [5] have been proposed to implement the hypercube algorithm on multi-core clusters. In the modification proposed in this work, the number of processing units remains constant, leaving as a variable the grain size, i.e. the number of submatrices into which the n x n matrices are divided; this is contrary to what is proposed in [5], where the number of processing units is determined by the size of the input. The algorithm is implemented with the Message Passing Interface (MPI) library, where MPI processes are assigned to the cores. If one MPI process is assigned to a core, the computation is parallel; if more than one MPI process is assigned to the same core, the computation is concurrent. The computational experiments took place on the ioevolution multi-core cluster, where n x n matrices are multiplied with 8, 64 and 512 processing units; the matrix sizes are multiples of 72, from 72 x 72 up to 2304 x 2304 elements. The theoretical and experimental results show that the solution proposed in this work makes it possible to multiply matrices of sizes around 1,000 x 1,000 on a multi-core cluster using fewer processors than the original algorithm requires; that, with the proposed modifications, acceptable speedups and good parallel efficiencies can be achieved, depending on the number of processing units used; and that the mathematical model developed for the modified DNS algorithm predicts the behaviour of the parallel implementation on a multi-core cluster. The article is organized as follows: the second section presents the methodology and foundations of the hypercube algorithm, together with an analysis of its running time on a MIMD computer with distributed memory and distributed computation; based on this analysis, the modifications needed to multiply matrices on a multi-core cluster are presented. The third part contains the computational experiments and the analysis of results. Finally, the fourth section presents the conclusions of this work.

2. Methodology

Algorithms for matrix multiplication based on the Single Instruction, Multiple Data (SIMD) model have been developed since the beginning of parallel computation, such as the systolic array or mesh of processors, Cannon, Fox, the DNS (Dekel, Nassimi, Sahani) or hypercube method, and meshes of trees [4-8]. In all of them except DNS, the parallel solution process consists of sequential steps of data distribution and computation. In general, in the first stage the data are distributed among the processing units; in the second, each processor multiplies the elements allocated to it; in the third, the first and second steps are repeated until all columns and rows of the matrices have been rotated and multiplied; and in the fourth stage the resulting elements are added. In contrast, in the DNS or hypercube algorithm the distribution and multiplication of elements run only once (the first and second stages), and, as in the other algorithms, the fourth stage is executed once. The DNS or hypercube algorithm is described next.

Parameter q {matrix size is 2^q x 2^q}
Global l
Local a, b, c, s, t
Begin
  {Phase 1: broadcast matrices A and B}
  for l ← 3q-1 down to 2q do
    for all Pm where BIT(m, l) = 1 do
      t ← BIT.COMPLEMENT(m, l)
      a ← [t]a
      b ← [t]b
    end for
  end for
  for l ← q-1 down to 0 do
    for all Pm where BIT(m, l) ≠ BIT(m, 2q+l) do
      t ← BIT.COMPLEMENT(m, l)
      a ← [t]a
    end for
  end for
  for l ← 2q-1 down to q do
    for all Pm do
      t ← BIT.COMPLEMENT(m, l)
      b ← [t]b
    end for
  end for
  {Phase 2: do the multiplications in parallel}
  for all Pm do
    c ← a x b
  end for
  {Phase 3: sum the products}
  for l ← 2q to 3q-1 do
    for all Pm do
      t ← BIT.COMPLEMENT(m, l)
      s ← [t]c
      c ← c + s
    end for
  end for
End
Figure 1. Pseudocode of the hypercube algorithm for matrix multiplication (hypercube SIMD). Source: Quinn, M.J. [4]
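The pseudocode of Fig. 1 translates almost mechanically to C with MPI. The sketch below is our illustrative transcription, not the authors' code: it assumes p = 2^(3q) MPI processes (q fixed to 2, so 64 processes), one scalar a, b per process with placeholder initial data, and it realizes each "x ← [t]x" fetch as a symmetric MPI_Sendrecv in which only the fetching side keeps the received value.

/* Illustrative C/MPI transcription of Fig. 1 (run with 64 processes). */
#include <mpi.h>
#include <stdio.h>

#define BIT(m, l) (((m) >> (l)) & 1)

/* "x <- [t]x": symmetric exchange; only the fetching side keeps the value */
static void fetch(double *x, int i_fetch, int t)
{
    double tmp;
    MPI_Sendrecv(x, 1, MPI_DOUBLE, t, 0, &tmp, 1, MPI_DOUBLE, t, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    if (i_fetch) *x = tmp;
}

int main(int argc, char **argv)
{
    const int q = 2;
    int m, t, l;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &m);

    double a = m % 7, b = m % 3, c, s;   /* placeholder local data; a real
                                            run needs the DNS input layout */
    /* Phase 1: broadcast matrices A and B */
    for (l = 3*q - 1; l >= 2*q; l--) {
        t = m ^ (1 << l);
        fetch(&a, BIT(m, l) == 1, t);
        fetch(&b, BIT(m, l) == 1, t);
    }
    for (l = q - 1; l >= 0; l--) {
        t = m ^ (1 << l);
        fetch(&a, BIT(m, l) != BIT(m, 2*q + l), t);
    }
    for (l = 2*q - 1; l >= q; l--) {     /* every process fetches: pairwise swap */
        t = m ^ (1 << l);
        fetch(&b, 1, t);
    }

    /* Phase 2: multiply in parallel */
    c = a * b;

    /* Phase 3: sum the products along the hypercube's third dimension */
    for (l = 2*q; l <= 3*q - 1; l++) {
        t = m ^ (1 << l);
        MPI_Sendrecv(&c, 1, MPI_DOUBLE, t, 0, &s, 1, MPI_DOUBLE, t, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        c += s;
    }

    printf("P%d: c = %g\n", m, c);
    MPI_Finalize();
    return 0;
}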

2.1. Hypercube algorithm

The algorithm multiplies two n x n matrices using the Single Instruction, Multiple Data (SIMD) model; its execution time is Θ(log n) and it requires n³ processors. It consists of q steps to distribute and rotate the data among the processing units; the number of stages is given by 5q = 5 log n and the matrix size by n³ = 2^(3q) [4]. The solution strategy is to distribute and rotate the elements of the matrices (aik, bik) over the computer nodes, connected in a hypercube architecture. Once the rotation and distribution of elements has been performed, each processing unit holds the pair (aik, bki), which are then multiplied. After the multiplication, the products are sent to be added, yielding the cij elements. The SIMD hypercube matrix multiplication algorithm comprises three phases: in the first, the elements are distributed and rotated among the processing units; in the second, the elements are multiplied; and in the third, the products are summed to obtain the solution matrix.

2.2. Runtime of the hypercube algorithm on a multi-core cluster

For the execution time on a multi-core cluster it is



Table 1. Execution time of the hypercube algorithm on a distributed-memory computer, with communication cost.
Phase    Loop                                         Time (processor operations)
First    First: initial distribution of (aij, bij)    Tcomm1 = g(2C(3q-1-2q)) = 2gC(q-1)
First    Second: rotation of (aij)                    Tcomm2 = g(C(q-1-0)) = gC(q-1)
First    Third: rotation of (bij)                     Tcomm3 = g(C(2q-1-q)) = gC(q-1)
Second   Unique: product (aij x bji)                  Tcal = 1
Third    Unique: sum of the (cij)                     Tcomm4 = g(C(3q-1-2q)) = gC(q-1)
Total    Tp = Tcal + Tcomm1 + Tcomm2 + Tcomm3 + Tcomm4 = 1 + 5gC(q-1)
Source: The authors

necessary to introduce the factors that influence its implementation: the time spent on calculation Tcal, communications Tcomm, waiting Twait and local operations Tlocal [9]. The times Twait and Tlocal are assumed to be zero for the following reasons: first, each processing unit receives the data and immediately executes the next operation (Twait = 0); second, the processing units do not perform local operations for the distribution of the data (Tlocal = 0), since the neighbouring units for sending are defined from the beginning of the process. Therefore, the parallel execution time is determined by: Tp = Tcal + Tcomm

(1)

To add the Tcal and Tcomm times, they must be expressed in the same units, either runtime or number of processing-unit operations, so the communication time must be standardized. To this end the constant g, eq. (2) [10], is introduced.

g = tc / to     (2)

Here tc is the time needed to transmit one unit of data over the communication link and to is the time of one processor operation; the constant thus expresses the communication time in terms of processor operations [10]. Therefore, the communication time is determined by the expression: Tcomm = gC

(3)

To calculate the execution time on a parallel computer with distributed memory and distributed processing, the Tcal and Tcomm times are entered into the algorithm of Fig. 1. The execution time of the phases of the hypercube algorithm is shown in Table 1. Since q = log n, the parallel execution time is: Tp(n) = Tcal + Tcomm = 1 + 5gC(q-1) < 1 + 5gCq = 1 + 5gC log n

(4)

If eq. (4) is compared with the unmodified equation Tp(n) =

Table 2. Calculation and communication times of the modified hypercube algorithm on a multi-core computer. The phases and loops are those of Table 1, with every scalar transfer replaced by the transfer of an (n/k) x (n/k) submatrix and the scalar product replaced by a submatrix product; the resulting total is the parallel execution time of eq. (5).
Source: The authors

1 + 5 log n [4], it can be seen that the parallel execution time of eq. (4) grows by roughly a factor gC, since normally g > 1 and C is always greater than 1. This indicates that the parallel algorithm is faster than the sequential one only above a certain matrix size. For example, if g = 2 and C equals 64 bits, a matrix of size 14 x 14 is needed for the hypercube parallel algorithm to be faster than the sequential one; additionally, 2,744 processing units are required to implement it. Multiplying matrices of dimensions 1,000 x 1,000 on a multi-core cluster with this algorithm has the following drawback: that number of processing units is not available. Consequently, the algorithm needs to be modified in order to multiply matrices of this size, because the number of available processing units is smaller than n³.

2.3. Modifications to the DNS algorithm so that it runs on a multi-core computer

The modification consists of increasing the grain size, as follows: the scalars are replaced by submatrices, which follow the same procedure for the distribution and rotation of the elements aij and bij. Consequently, the distribution and rotation of submatrices is a function of the variable q, whose value is determined by the number of parts into which the matrices to be multiplied are divided: if the dimension n of the matrix is divided into k parts, generating (n/k) x (n/k) submatrices, then q equals log k. The number of submatrices generated is 2^(3q) = k³, and the number of processors is given by p = k³. The variable k thus defines the size of the submatrices and the number of processing units required. The necessary modifications to the algorithm of Fig. 1 are:
- replace aij by the (n/k) x (n/k) submatrix Aij,
- replace bij by Bij,
- replace cij by Cij,
- replace the scalar products aik·bkj by the submatrix products Aik·Bkj, so that Cij = Σk Aik·Bkj.
The computation and communication times of the modified algorithm are shown in Table 2. The execution time of the parallel algorithm is determined by:

242


Zavala-Díaz et al / DYNA 82 (191), pp. 240-246. June, 2015.

Tp = Tcal + Tcomm = (n/k)³ + 3gCn²q     (5)

Whereas p = k³ and q = log k, the expression (5) is reduced to:

Tp = n³/p + gCn² log p     (6)
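A minimal numeric check of the model (our illustrative C, not the authors' code; it assumes eq. (6) as reconstructed above, with T1 = n³):

/* Evaluates eq. (6) and the resulting speedup Sp = T1/Tp.
 * Compile: cc eq6.c -lm */
#include <math.h>
#include <stdio.h>

static double tp(double n, double p, double g, double C)
{
    return n * n * n / p + g * C * n * n * log2(p);   /* eq. (6) */
}

int main(void)
{
    double n = 72.0, p = 64.0, g = 1.0, C = 32.0;
    double sp = (n * n * n) / tp(n, p, g, C);
    printf("Sp(n=72, p=64, g=1, C=32) = %f\n", sp);   /* 0.372816, as in Table 3 */
    return 0;
}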

Figure 2. Parallel speedup of the modified DNS algorithm for a multi-core cluster (curves for p = 8, 64 and 512 processing units with g = 1 and g = 2; C = 32). Source: The authors


The variable C in expression (6) is the size of the data type that is sent, and g is given by eq. (2). Table 3 shows the results of eq. (6) for g = 1 and 2, C = 32, and 8, 64 and 512 processing units (k = 2, 4 and 8); the results are presented in terms of the parallel speedup Sp = (T1/Tp). The results of eq. (6) show that the runtimes are smallest when g = 1, which indicates that the algorithm is sensitive to the characteristics of the communication link. Regarding the parallel speedup, eight processing units are best for multiplying matrices smaller than 1,000 x 1,000, 64 units for matrices smaller than 10,000 x 10,000, and 512 units for matrices over 10,000 x 10,000. Fig. 2 plots the data of Table 3; as can be seen, the largest differences occur for large matrices, where the multiplication is faster when more processing units are used. When the parallel efficiency Ep = (Sp/p) x 100 is used as the reference, the best efficiencies are obtained when fewer processing units are used, as shown in Fig. 3: the best efficiency is attained with the smallest number of parallel processing units, about 100% for the larger matrices. The parallel efficiency decreases when more processing units are used, to at best around 20%, although the trend is for the efficiency to increase with the size of the matrix. This indicates that for 512 processing units the efficiency will reach its asymptotic value at a certain matrix size.


Table 3. Results of eq. (6) for C = 32.
               Sp (g = 1)                      Sp (g = 2)
Matrix size    p=8       p=64      p=512       p=8       p=64      p=512
72x72          0.685714  0.372816  0.249878    0.358209  0.186952  0.124969
144x144        1.263158  0.741313  0.499512    0.685714  0.372816  0.249878
288x288        2.181818  1.465649  0.998051    1.263158  0.741313  0.499512
576x576        3.428571  2.865672  1.992218    2.181818  1.465649  0.998051
1152x1152      4.8       5.485714  3.968992    3.428571  2.865672  1.992218
2304x2304      6         10.10526  7.876923    4.8       5.485714  3.968992
4608x4608      6.857143  17.45455  15.51515    6         10.10526  7.876923
9216x9216      7.384615  27.42857  30.11765    6.857143  17.45455  15.51515
18432x18432    7.68      38.4      56.88889    7.384615  27.42857  30.11765
36864x36864    7.836735  48        102.4       7.68      38.4      56.88889
Source: The authors



Figure 3. Parallel efficiency of the modified DNS algorithm for a multi-core cluster (Ep for p = 8, 64 and 512 with g = 1 and g = 2; C = 32). Source: The authors

The low parallel efficiency is related to the time spent on communications, which increases as the number of processing units, and consequently the amount of data sent concurrently among them, grows.

3. Computational experiments on the multi-core cluster

The computational experiment was conducted on a multi-core cluster with a finite number of processors and memory units; consequently, the size of the matrices in the computational experiment was limited. Since the cores can be programmed as independent units [1], the algorithm was coded in ANSI C and the MPI library was used for sending data. The MPI processes are assigned to the cores: if only one process is assigned to a core, the computation is parallel; if more than one is assigned, the computation is parallel and concurrent.


Table 4. Configuration of the ioevolution multi-core cluster.
Node    Model                                     Processors per node   Cores per processor   Total cores
0       Intel Xeon CPU X3430 at 2394 MHz          4                     4                     16
1       Intel Xeon CPU X3430 at 2394 MHz          4                     4                     16
2       Intel Xeon CPU X3430 at 2394 MHz          4                     4                     16
3       Intel Xeon CPU E5645 at 2400.067 MHz      12                    6                     72
Total                                                                                         120
Source: The authors

Table 5. Timing and speedup of the parallel implementation of the modified DNS algorithm on the ioevolution multi-core cluster.
Time in seconds
Matrix size    p=1          p=8         p=64        p=512
72x72          0            0.008281    0.039979    1.488810
144x144        0.03         0.024755    0.069789    1.645584
288x288        0.234        0.105216    0.198098    1.971476
576x576        1.556667     0.484652    0.910284    3.854292
1152x1152      18.405667    3.279813    3.940699    7.879904
2304x2304      161.591667   31.395923   19.687011   30.453499
Speedup Sp
72x72          -            0           0           0
144x144        1            1.211901    0.429866    0.018231
288x288        1            2.223998    1.181234    0.118693
576x576        1            3.211931    1.710089    0.403879
1152x1152      1            5.611804    4.670661    2.335773
2304x2304      1            5.146900    8.208034    5.306177
Source: The authors


All nodes of the multi-core cluster are used in the experimentation, and each node carries the same load in all tests, for which the same number of MPI processes is assigned to each processing unit.


3.1. Description of the ioevolution multi-core cluster

The multi-core cluster used has the characteristics listed in Table 4. These features imply different g constants: the first when two processes are assigned to the same core; the second when the cores are in the same processor; the third for communication between processors in the same node; and the fourth for communication between nodes of the cluster. Of these, the external link between nodes is considered: it is an optical fibre link with a bandwidth of 1 Gbit per second and, given the speed of the cores, g ≈ 2.4 is obtained.
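A hedged reading of that estimate, using eq. (2) and assuming one operation per clock cycle (our arithmetic, not the authors'): the 1 Gbit/s link transmits one bit in 1/10^9 s, while a 2394 MHz core executes about 2.394x10^9 operations per second, so

g = tc/to = (1/10^9) / (1/(2.394x10^9)) ≈ 2.4

processor operations per transmitted bit, which is consistent with C being expressed in bits.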


Figure 4. Comparison of parallel speedup, experimental vs. theoretical (g = 1 and g = 1.3), for eight processing units. Source: The authors


3.2. Computational tests on the ioevolution cluster


The computational tests consist of multiplying dense matrices of different sizes. Their elements are real 32-bit floating-point numbers generated randomly, and different matrices are used in each execution of the program. The matrix sizes are multiples of 72, because 72 is divisible by the k values (2, 4 and 8). Table 5 presents the execution time, averaged over 30 runs, and the speedup of the parallel implementation. Comparing the theoretical results of Table 3 with the experimental results of Table 5 shows that the experimental speedup values are closest to those calculated with g = 1. To determine the value of the g constant that best fits the theoretical results to the experimental ones, tests were performed with different values of g; the closest fit to the experimental results is obtained with g = 1.3. This is shown in Figs. 4, 5 and 6.
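A sketch of this test protocol is shown below; it is not the authors' code, and multiply_dns() is a naive serial stand-in for the parallel DNS multiplication under test.

/* Test harness matching the protocol above: random single-precision
   matrices whose order is a multiple of 72, timed over 30 runs. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Naive serial stand-in for the parallel DNS multiplication kernel. */
static void multiply_dns(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            float s = 0.0f;
            for (int k = 0; k < n; k++)
                s += a[(long)i * n + k] * b[(long)k * n + j];
            c[(long)i * n + j] = s;
        }
}

static void fill_random(float *m, int n)
{
    for (long i = 0; i < (long)n * n; i++)
        m[i] = (float)rand() / (float)RAND_MAX;  /* random 32-bit reals */
}

int main(int argc, char *argv[])
{
    int rank, n = 144;          /* 144 = 2 * 72, divisible by k = 2, 4 and 8 */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *a = malloc((long)n * n * sizeof(float));
    float *b = malloc((long)n * n * sizeof(float));
    float *c = malloc((long)n * n * sizeof(float));

    double total = 0.0;
    for (int run = 0; run < 30; run++) {   /* 30 runs, as in Table 5 */
        fill_random(a, n);                 /* fresh matrices each run */
        fill_random(b, n);
        double t0 = MPI_Wtime();
        multiply_dns(a, b, c, n);
        total += MPI_Wtime() - t0;
    }
    if (rank == 0)
        printf("n = %d, average time over 30 runs: %f s\n", n, total / 30.0);

    free(a); free(b); free(c);
    MPI_Finalize();
    return 0;
}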


Figure 5. Comparison of parallel speedup, experimental vs. theoretical, 64 processing units. Source: The authors




Figure 6. Comparison of parallel speedup, experimental vs. theoretical, 512 processing units. Source: The authors

Figs. 4, 5 and 6 show that the experimental results are closest to the theoretical ones when g = 1.3, a value lower than the g calculated for the external link (g = 2.4). The obtained value g = 1.3 indicates that combining the different communication channels of the multi-core cluster improves the parallel implementation. The experimental results approximate the theoretical ones most closely when 8 and 64 processing units are used; when 512 processing units are used, the experimental results approach the theoretical ones only for matrices larger than 1,000x1,000. Fig. 6 also plots the values obtained with g = 2.4: for matrices smaller than 1,000x1,000 the experimental speedup is lower than that calculated using the external link, whereas for matrices larger than 1,000x1,000 the experimental speedup is better than that theoretical prediction and follows the theoretical trend for g = 1.3. This shows that there are factors that reduce the parallel speedup, and establishing communication among the processing units is one of them. For the same matrix size, changing the number of processing units changes two things: the size of the sub-matrices and the number of simultaneous communications. With eight processing units the sub-matrices are the largest and the fewest concurrent communications are performed; with 512 processing units the sub-matrices are the smallest and the largest number of simultaneous communications of the three tested cases is performed. The time difference between the theoretical and the experimental results increases as the number of simultaneous communications increases, even though smaller sub-matrices are sent (Figs. 4 and 6). This reveals that sending data from one processing unit to another involves a communication mechanism that itself consumes processor runtime. Another factor that influences the running time is the number of processes executed on each core.

When 8 and 64 processing units are used, one process is assigned to each core and all processes communicate simultaneously; the communication channel used depends on whether the communicating processes are allocated on the same processor or on the same node. On the other hand, when 512 processes are assigned to the cores, each core runs more than one process. In this case the transmission of data among processes may become sequential, because the communication medium of each core is shared by several processes: when one process sends its data through the communication channel, the other processes assigned to the same core must wait their turn. This sequentiality is reflected in the experimental results, which follow the trend given by g = 1.3 and not that given by g = 2.4. The results also show that, with eight processing units, the maximum speedup is reached for 1,152x1,152 matrices and diminishes rapidly for bigger matrices; this maximum corresponds to a parallel efficiency of 70.15%. For the other tested cases, 64 and 512 processing units, the execution time of the parallel implementation continued to improve as the matrix size grew. This coincides with the theoretical results: the best parallel efficiencies are obtained with 8 processors and matrices smaller than 1,000x1,000.

4. Conclusions

It is concluded that:
• With the proposed changes, the DNS or hypercube algorithm can multiply matrices of different orders of magnitude on a multi-core cluster, using fewer than n^3 processors.
• The influence of the external communication link between the nodes of the cluster decreases if the combination of communication channels available among the cores of a multi-core cluster is used.
• In the modification proposed in this paper, the number of processing units is a function of the number of sub-matrices into which the matrix is divided.
• For larger problems, it was shown that data access among processors affects the parallel efficiency more than it does for smaller problems. It is expected that new processor designs [11,12] will optimize data access and that, consequently, better parallel efficiencies will be obtained.

References



[1] Rauber, T. and Rünger, G., Parallel programming for multicore and cluster systems, Springer, Heidelberg, 2010. DOI 10.1007/978-3-642-04818-0
[2] Muhammad, A.I., Talat, A. and Mirza, S.H., Parallel matrix multiplication on multi-core processors using SPC3 PM, Proceedings of the International Conference on Latest Computational Technologies (ICLCT'2012), Bangkok, March 17-18, 2012.
[3] L’Excellent, J.-Y. and Sid-Lakhdar, W.M., A study of shared-memory parallelism in a multifrontal solver, Parallel Computing, 40 (3-4), pp. 34-46, 2014. DOI 10.1016/j.parco.2014.02.003
[4] Quinn, M.J., Parallel computing (2nd ed.): Theory and practice, McGraw-Hill, New York, NY, USA, 1994.



[5] Gupta, A. and Kumar, V., Scalability of parallel algorithms for matrix multiplication, Proceedings of the International Conference on Parallel Processing, 3, pp. 115-123, 1993. DOI 10.1109/ICPP.1993.160
[6] Alqadi, Z.A.A., Aqel, M. and El Emary, I.M.M., Performance analysis and evaluation of parallel matrix multiplication algorithms, World Applied Sciences Journal, 5 (2), pp. 211-214, 2008.
[7] Choi, J., A new parallel matrix multiplication algorithm on distributed-memory concurrent computers, Technical Report CRPC-TR97758, Center for Research on Parallel Computation, Rice University, [Online] 1997. Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.15.4213&rep=rep1&type=pdf
[8] Solomonik, E. and Demmel, J., Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms, Proceedings of the 17th International Conference on Parallel Processing (Euro-Par'11), Part II, Springer-Verlag, Berlin, Heidelberg, 2011.
[9] Zavala-Díaz, J.C., Optimización con cómputo paralelo, teoría y aplicaciones, AmEditores, México, 2013.
[10] Sánchez, J. and Barral, H., Multiprocessor implementation models for adaptive algorithms, IEEE Transactions on Signal Processing, 44 (9), pp. 2319-2331, 1996.
[11] Un nuevo proyecto financiado con fondos europeos trata de lograr los primeros chips de silicio para RAM óptica de 100 Gbps, Dyna, 87 (1), pp. 24, 2012.
[12] La Politécnica de Valencia patenta mejoras en procesadores, Dyna, 83 (8), pp. 155, 2008.

E. Salazar-Reséndiz, received the MSc. degree in Computational Science from the National Center for Research and Technological Development (CENIDET), Mexico, in 2014. He works at the Electrical Research Institute (IIE), Mexico.

C. Guadarrama-Rogel, received the MSc. degree in Computational Science from the National Center for Research and Technological Development (CENIDET), Mexico, in 2014. He works at the Electrical Research Institute (IIE), Mexico.

J.C. Zavala-Díaz, received the PhD. degree in Computational Science from the Monterrey Institute of Technology and Higher Education (ITESM), Mexico, in 1999. Since 1999 he has been a research professor at the Autonomous University of the State of Morelos, Mexico. He worked at the Electrical Research Institute (IIE), Mexico, from 1986 to 1994. His research interests include: parallel computing; modeling and solving discrete and linear optimization problems; and optimization using metaheuristics.

J. Pérez-Ortega, received the PhD. degree in Computational Science from the Monterrey Institute of Technology and Higher Education (ITESM), Mexico, in 1999. Since 2001 he has been a researcher at the National Center for Research and Technological Development (CENIDET). He worked at the Electrical Research Institute (IIE), Mexico, from 1985 to 2001. His research interests include: optimization using metaheuristics; NP-problems; combinatorial optimization; distributed systems; and software engineering.





DYNA

82 (191), June, 2015 is an edition consisting of 250 printed issues which was finished printing in the month of February of 2015 in Todograficas Ltda., Medellín - Colombia. The cover was printed on Propalcote C1S 250 g, the interior pages on Hanno Mate 90 g. The fonts used are Times New Roman, Imprint MT Shadow


• New publishing rules for journals edited and published by the Facultad de Minas • A new synthesis procedure for TOPSIS based on AHP • Capacitated vehicle routing problem for PSS uses based on ubiquitous computing: An emerging markets approach • Structural analysis for the identification of key variables in the Ruta del Oro, Nariño Colombia • Intuitionistic fuzzy MOORA for supplier selection • A relax and cut approach using the multi-commodity flow formulation for the traveling salesman problem • A genetic algorithm to solve a three-echelon capacitated location problem for a distribution center within a solid waste management system in the northern region of Veracruz, Mexico • Short-term generation planning by primal and dual decomposition techniques • Technical efficiency of thermal power units through a stochastic frontier • Optimization of the distribution of steel pipes using a mathematical model • Effects of management commitment and organization of work teams on the benefits of Kaizen: Planning stage • A framework to evaluate over-costs in natural resources logistics chains • The role of sourcing service agents in the competitiveness of Mexico as an international sourcing region • Modeling of CO2 vapor-liquid equilibrium in Colombian heavy oil using SARA analysis • R&D best practices, absorptive capacity and project success • Technologies for the removal of dyes and pigments present in wastewater. A review • Compromise solutions in mining method selection - case study in Colombian coal mining • MGE2: A framework for cradle-to-cradle design • CrN coatings deposited by magnetron sputtering: Mechanical and tribological properties • Weibull accelerated life testing analysis with several variables using multiple linear regression • State of the art of ergonomic costs as criterion for evaluating and improving organizational performance in industry • Study of the behavior of sugarcane bagasse submitted to cutting • Contribution of the analysis of technical efficiency to the improvement of the management of services • Evaluation of the kinetics of oxidation and removal of organic matter in the self-purification of a mountain river • Comfort perception assessment in persons with transfemoral amputation • The water budget and modeling of the Montes Torozos' karst aquifer (Valladolid, Spain) • Determination of the tide constituents at Livingston and Deception Islands (South Shetland Islands, Antarctica), using annual time series • Effect of cellulose nanofibers concentration on mechanical, optical, and barrier properties of gelatin-based edible films • Tribological properties of BixTiyOz films grown via RF sputtering on 316L steel substrates • Analysis of energy saving in industrial LED lighting: A case study • Matrix multiplication with a hypercube algorithm on multi-core processor cluster

DYNA

Publication admitted to the Sistema Nacional de Indexación y Homologación de Revistas Especializadas CT+I - PUBLINDEX, Category A1

• Nuevas reglas editoriales para las revistas editadas y publicadas por la Facultad de Minas • Un nuevo procedimiento de síntesis para TOPSIS basado en AHP • Problema de enrutamiento de vehículos basado en su capacidad para SPS utilizando cómputo ubicuo: Un enfoque de mercados emergentes • Análisis estructural para la identificación de variables claves en la Ruta del Oro, Nariño Colombia • MOORA versión difuso intuicionista para la selección de proveedores • Un enfoque relax and cut usando una formulación de flujo multiproductos para el problema del agente viajero • Algoritmo genético para resolver el problema de localización de instalaciones capacitado en una cadena de tres eslabones para un centro de distribución dentro de un sistema de gestión de residuos sólidos en la región norte de Veracruz, México • Planeación de la generación a corto plazo mediante técnicas de descomposición primal y dual • Eficiencia técnica de unidades de generación termoeléctrica mediante una frontera estocástica • Optimización de la distribución de tubería de acero mediante un modelo matemático • Efectos del compromiso gerencial y organización de equipos de trabajo en los beneficios del Kaizen: Etapa de planeación • Un marco de referencia para evaluar los extra-costos en cadenas logísticas de recursos naturales • El papel de los proveedores de servicios de abasto para la competitividad de México como una región de abastecimiento internacional • Modelado del equilibrio líquido-vapor CO2-crudo pesado colombiano con análisis SARA • Buenas prácticas en la gestión de proyectos de I+D+i, capacidad de absorción de conocimiento y éxito • Technologies for the removal of dyes and pigments present in wastewater. A review • Compromise solutions in mining method selection - case study in Colombian coal mining • MGE2: Un marco de referencia para el diseño de la cuna a la cuna • Recubrimientos de CrN depositados por pulverización catódica con magnetrón: Propiedades mecánicas y tribológicas • Análisis de pruebas de vida acelerada Weibull con varias variables utilizando regresión lineal múltiple • Estado del arte de los costos ergonómicos como un criterio para la evaluación y mejora del desempeño organizacional en la industria • Estudio del comportamiento del bagazo de caña de azúcar sometido a corte • Contribución del análisis de la eficiencia técnica a la mejora en la gestión de servicios • Evaluación de la cinética de oxidación y remoción de materia orgánica en la autopurificación de un río de montaña • Valoración de la percepción de confort en personas con amputación transfemoral • Balance y modelización del acuífero karstico de los Montes Torozos (Valladolid, España) • Determinación de las constituyentes de marea en las Islas Livingston y Decepción (Islas Shetland del Sur, Antártida), usando series temporales anuales • Efecto de la concentración de nanofibras de celulosa sobre las propiedades mecánicas, ópticas y de barrera en películas comestibles de gelatina • Propiedades tribológicas de Películas de BixTiyOz producidas por RF sputtering sobre sustratos de acero 316L • Análisis de ahorro energético en iluminación LED industrial: Un estudio de caso • Multiplicación de matrices con un algoritmo hipercubo en un cluster con procesadores multi-core

Red Colombiana de Revistas de Ingeniería

