WATCC


IV WORKSHOP ASPECTOS TEÓRICOS DE CIENCIA DE LA COMPUTACIÓN - WATCC -


XIX Congreso Argentino de Ciencias de la Computación - CACIC 2013 : Octubre 2013, Mar del Plata, Argentina : organizadores : Red de Universidades con Carreras en Informática RedUNCI, Universidad CAECE / Armando De Giusti ... [et.al.] ; compilado por Jorge Finochietto ; ilustrado por María Florencia Scolari. - 1a ed. - Mar del Plata : Fundación de Altos Estudios en Ciencias Exactas, 2013. E-Book. ISBN 978-987-23963-1-2 1. Ciencias de la Computación. I. De Giusti, Armando II. Finochietto, Jorge, comp. III. Scolari, María Florencia, ilus. CDD 005.3 Fecha de catalogación: 03/10/2013


AUTORIDADES DE LA REDUNCI

Coordinador Titular: De Giusti, Armando (UNLP) 2012-2014
Coordinador Alterno: Simari, Guillermo (UNS) 2012-2014

Junta Directiva:
Feierherd, Guillermo (UNTF) 2012-2014
Padovani, Hugo (UM) 2012-2014
Estayno, Marcelo (UNLZ) 2012-2014
Esquivel, Susana (UNSL) 2012-2014
Alfonso, Hugo (UNLaPampa) 2012-2013
Acosta, Nelson (UNCPBA) 2012-2013
Finochietto, Jorge (UCAECE) 2012-2013
Kuna, Horacio (UNMisiones) 2012-2013

Secretarías:
Secretaría Administrativa: Ardenghi, Jorge (UNS)
Secretaría Académica: Spositto, Osvaldo (UNLaMatanza)
Secretaría de Congresos, Publicaciones y Difusión: Pesado, Patricia (UNLP)
Secretaría de Asuntos Reglamentarios: Bursztyn, Andrés (UTN)


AUTORIDADES DE LA UNIVERSIDAD CAECE

Rector: Dr. Edgardo Bosch
Vicerrector Académico: Dr. Carlos A. Lac Prugent
Vicerrector de Gestión y Desarrollo Educativo: Dr. Leonardo Gargiulo
Vicerrector de Gestión Administrativa: Mg. Fernando del Campo
Vicerrectora de la Subsede Mar del Plata: Mg. Lic. María Alejandra Cormons
Secretaria Académica: Lic. Mariana A. Ortega
Secretario Académico de la Subsede Mar del Plata: Esp. Lic. Jorge Finochietto
Director de Gestión Institucional de la Subsede Mar del Plata: Esp. Lic. Gustavo Bacigalupo
Coordinador de Carreras de Lic. e Ing. en Sistemas: Esp. Lic. Jorge Finochietto


COMITÉ ORGANIZADOR LOCAL

Presidente: Esp. Lic. Jorge Finochietto

Miembros:
Esp. Lic. Gustavo Bacigalupo
Mg. Lic. Lucia Malbernat
Lic. Analía Varela
Lic. Florencia Scolari
C.C. María Isabel Meijome
CP Mayra Fullana
Lic. Cecilia Pellerini
Lic. Juan Pablo Vives
Lic. Luciano Wehrli

Escuela Internacional de Informática (EII)
Directora: Dra. Alicia Mon
Coordinación: C.C. María Isabel Meijome


COMITÉ ACADÉMICO (Universidad – Representante)

Universidad de Buenos Aires – Echeverria, Adriana (Ingeniería); Fernández Slezak, Diego (Cs. Exactas)
Universidad Nacional de La Plata – De Giusti, Armando
Universidad Nacional del Sur – Simari, Guillermo
Universidad Nacional de San Luis – Esquivel, Susana
Universidad Nacional del Centro de la Provincia de Buenos Aires – Acosta, Nelson
Universidad Nacional del Comahue – Vaucheret, Claudio
Universidad Nacional de La Matanza – Spositto, Osvaldo
Universidad Nacional de La Pampa – Alfonso, Hugo
Universidad Nacional Lomas de Zamora – Estayno, Marcelo
Universidad Nacional de Tierra del Fuego – Feierherd, Guillermo
Universidad Nacional de Salta – Gil, Gustavo
Universidad Nacional Patagonia Austral – Márquez, María Eugenia
Universidad Tecnológica Nacional – Leone, Horacio
Universidad Nacional de San Juan – Otazú, Alejandra
Universidad Autónoma de Entre Ríos – Aranguren, Silvia
Universidad Nacional Patagonia San Juan Bosco – Buckle, Carlos
Universidad Nacional de Entre Ríos – Tugnarelli, Mónica
Universidad Nacional del Nordeste – Dapozo, Gladys
Universidad Nacional de Rosario – Kantor, Raúl
Universidad Nacional de Misiones – Kuna, Horacio
Universidad Nacional del Noroeste de la Provincia de Buenos Aires – Russo, Claudia
Universidad Nacional de Chilecito – Carmona, Fernanda
Universidad Nacional de Lanús – García Martínez, Ramón


COMITÉ ACADÉMICO (continuación)

Universidad Nacional de Santiago del Estero – Durán, Elena
Escuela Superior del Ejército – Castro Lechtaller, Antonio
Universidad Nacional del Litoral – Loyarte, Horacio
Universidad Nacional de Río Cuarto – Arroyo, Marcelo
Universidad Nacional de Córdoba – Brandán Briones, Laura
Universidad Nacional de Jujuy – Paganini, José
Universidad Nacional de Río Negro – Vivas, Luis
Universidad Nacional de Villa María – Prato, Laura
Universidad Nacional de Luján – Scucimarri, Jorge
Universidad Nacional de Catamarca – Barrera, María Alejandra
Universidad Nacional de La Rioja – Nadal, Claudio
Universidad Nacional de Tres de Febrero – Cataldi, Zulma
Universidad Nacional de Tucumán – Luccioni, Griselda
Universidad Nacional Arturo Jauretche – Morales, Martín
Universidad Nacional del Chaco Austral – Zachman, Patricia
Universidad de Morón – Padovani, Hugo René
Universidad Abierta Interamericana – De Vincenzi, Marcelo
Universidad de Belgrano – Guerci, Alberto
Universidad Kennedy – Foti, Antonio
Universidad Adventista del Plata – Bournissen, Juan
Universidad CAECE – Finochietto, Jorge
Universidad de Palermo – Ditada, Esteban
Universidad Católica Argentina - Rosario – Grieco, Sebastián
Universidad del Salvador – Zanitti, Marcelo
Universidad del Aconcagua – Gimenez, Rosana
Universidad Gastón Dachary – Belloni, Edgardo
Universidad del CEMA – Guglianone, Ariadna
Universidad Austral – Robiolo, Gabriela


COMITÉ CIENTÍFICO

Coordinación: Armando De Giusti (UNLP) – Guillermo Simari (UNS)

Abásolo, María José (Argentina) Acosta, Nelson (Argentina) Aguirre Jorge Ramió (España) Alfonso, Hugo (Argentina) Ardenghi, Jorge (Argentina) Baldasarri Sandra (España) Balladini, Javier (Argentina) Bertone, Rodolfo (Argentina) Bría, Oscar (Argentina) Brisaboa, Nieves (España) Bursztyn, Andrés (Argentina) Cañas, Alberto (EE.UU) Casali, Ana (Argentina) Castro Lechtaller, Antonio (Argentina) Castro, Silvia (Argentina) Cechich, Alejandra (Argentina) Coello Coello, Carlos (México) Constantini, Roberto (Argentina) Dapozo, Gladys (Argentina) De Vicenzi, Marcelo (Argentina) Deco, Claudia (Argentina) Depetris, Beatriz (Argentina) Diaz, Javier (Argentina) Dix, Juerguen (Alemania) Doallo, Ramón (España) Docampo, Domingo Echaiz, Javier (Argentina) Esquivel, Susana (Argentina) Estayno, Marcelo (Argentina) Estevez, Elsa (Naciones Unidas) Falappa, Marcelo (Argentina) Feierherd, Guillermo (Argentina) Ferreti, Edgardo (Argentina) Fillottrani, Pablo (Argentina) Fleischman, William (EEUU) García Garino, Carlos (Argentina) García Villalba, Javier (España) Género, Marcela (España) Giacomantone, Javier (Argentina) Gómez, Sergio (Argentina) Guerrero, Roberto (Argentina) Henning Gabriela (Argentina)

Janowski, Tomasz (Naciones Unidas) Kantor, Raul (Argentina) Kuna, Horacio (Argentina) Lanzarini, Laura (Argentina) Leguizamón, Guillermo (Argentina) Loui, Ronald Prescott (EEUU) Luque, Emilio (España) Madoz, Cristina (Argentina) Malbran, Maria (Argentina) Malverti, Alejandra (Argentina) Manresa-Yee, Cristina (España) Marín, Mauricio (Chile) Motz, Regina (Uruguay) Naiouf, Marcelo (Argentina) Navarro Martín, Antonio (España) Olivas Varela, José Ángel (España) Orozco Javier (Argentina) Padovani, Hugo (Argentina) Pardo, Álvaro (Uruguay) Pesado, Patricia (Argentina) Piattini, Mario (España) Piccoli, María Fabiana (Argentina) Printista, Marcela (Argentina) Ramón, Hugo (Argentina) Reyes, Nora (Argentina) Riesco, Daniel (Argentina) Rodríguez, Ricardo (Argentina) Roig Vila, Rosabel (España) Rossi, Gustavo (Argentina) Rosso, Paolo (España) Rueda, Sonia (Argentina) Sanz, Cecilia (Argentina) Spositto, Osvaldo (Argentina) Steinmetz, Ralf (Alemania) Suppi, Remo (España) Tarouco, Liane (Brasil) Tirado, Francisco (España) Vendrell, Eduardo (España) Vénere, Marcelo (Argentina) Villagarcia Wanza, Horacio (Arg.) Zamarro, José Miguel (España)


IV Workshop Aspectos Teóricos de Ciencia de la Computación - WATCC -

ID – Trabajo – Autores

5756 – A Complexity Lower Bound Based On Software Engineering Concepts – Andrés Rojas Paredes (UBA)

5822 – On Aggregation Process in Linguistic Decision Making Framework – M. Giménez, S. Gramajo (UTN)


A Complexity Lower Bound Based On Software Engineering Concepts

Andrés Rojas Paredes

Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación. Pabellón 1, Ciudad Universitaria, Buenos Aires, Argentina. arojas@dc.uba.ar

Abstract. We consider the problem of polynomial equation solving, also known as quantifier elimination, in Effective Algebraic Geometry. The complexity of the first elimination algorithms was double exponential, but considerable progress was made when polynomials came to be represented by the arithmetic circuits evaluating them. This representation improves the complexity to pseudo-polynomial time. The question is whether the current asymptotic complexity of circuit-based elimination algorithms can be improved. The answer is no when elimination algorithms are constructed according to well-known software engineering rules, namely applying information hiding and taking into account non-functional requirements. These assumptions allow us to prove a complexity lower bound which constitutes a mathematically certified non-functional requirement trade-off, and a surprising connection between Software Engineering and the theoretical fields of Algebraic Geometry and Computational Complexity Theory.

Keywords: Non-functional requirement trade-off, information hiding, arithmetic circuit, complexity lower bound, polynomial equation solving, quantifier elimination in algebraic geometry

1  Introduction

The main purpose of this paper is to describe the Software Engineering aspects of the mathematical computation model introduced in [9]. This model captures the notion of a circuit-based elimination algorithm in order to solve a thirty-year-old problem in algebraic complexity theory (see e.g. [8], [10]): in arithmetic circuit-based effective elimination theory, the elimination of a single existential quantifier block in the first order theory of algebraically closed fields of characteristic zero is intrinsically hard (i.e. it has an exponential complexity lower bound). This conclusion may also be expressed as a trade-off between two non-functional requirements: on one hand a complexity requirement, and on the other a property of mathematical functions called geometrical robustness. This complexity lower bound in terms of software engineering concepts appears
for the first time in [5], in the context of polynomial interpolation. In this work we study a more general case, in the context of quantifier elimination.

Complexity lower bounds are undoubtedly theoretical research, but there is also a practical aim behind them. Consider the process in software design where a software architecture is developed in order to solve a certain computational problem. Assume also that one of the non-functional requirements of the software design project consists of a restriction on the run-time computational complexity of the program which is going to be developed (this was the case during the implementation of the polynomial equation solver Kronecker by G. Lecerf, see [11]). Our practical aim is to provide the software engineer with an efficient tool to answer the question of whether his software design process comes, at some moment, into conflict with the given complexity requirement. If this is the case, the software engineer will be able to change his design at this early stage and look for an alternative software architecture. The following example illustrates this description.

Example 1 (Finite Set). Suppose that our task is to implement a finite set S of cardinality n, e.g. a subset of the natural numbers N, and that we have to satisfy the requirement that membership to the finite set S is decided using only O(log n) comparisons. If the set S is implemented by an unordered array, we will be unable to satisfy our complexity requirement. So we are forced to think of alternative implementations of the abstract concept of a finite set, e.g. ordered arrays, special trees or any other data type which is well suited for our task.

Example 1 represents a case where it may be impossible to satisfy a given complexity requirement by means of a previously fixed software architecture.
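A minimal sketch of Example 1 in Java (our own illustration; the class and method names are invented): the unordered array needs up to n comparisons per query, while a sorted array answers membership with O(log n) comparisons via binary search.

```java
import java.util.Arrays;

public class FiniteSetDemo {
    // Unordered-array implementation: deciding membership needs up to n comparisons.
    static boolean memberUnordered(int[] a, int x) {
        for (int v : a) if (v == x) return true;
        return false;
    }

    // Sorted-array implementation: binary search meets the O(log n) requirement.
    static boolean memberSorted(int[] sorted, int x) {
        return Arrays.binarySearch(sorted, x) >= 0;
    }

    public static void main(String[] args) {
        int[] s = {7, 1, 9, 4};           // the finite set S
        int[] sorted = s.clone();
        Arrays.sort(sorted);              // one-time preprocessing
        System.out.println(memberUnordered(s, 9));   // true
        System.out.println(memberSorted(sorted, 9)); // true
        System.out.println(memberSorted(sorted, 5)); // false
    }
}
```

The point of the example is that the complexity requirement is met or missed by the choice of representation alone, before any algorithm on top of it is designed.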
Our aim is to formalise such impossibility by means of a complexity lower bound, which is usually difficult to infer when the number of components of the system under consideration is large, or when the predicate to decide or the function to compute becomes more sophisticated, as in polynomial equation solving. This leads to the idea of fixing in advance only a small selection of architectural features, e.g. the abstraction levels or part of the language of our system (not the algorithms themselves). The computation model we explain in the following sections takes these considerations into account.

This work is organised as follows: in Section 2 we introduce quantifier elimination as the subject of our complexity studies, together with the algorithmic approach, which is based on the transformation of arithmetic circuits. In Section 3 we describe the tool used to obtain the announced complexity lower bound. Our tool is a computation model which captures the notion of non-functional requirement in circuit-based elimination algorithms. Finally we present the new result of this work. We ask the following question: what happens if our algorithms are not circuit-based and we find a representation which is more efficient than circuits? The answer is that our complexity results remain valid for arbitrary continuous representations, provided the algorithms follow the principle of information hiding. We illustrate this conclusion with a relevant example from the theory of Abstract Data Types (see, e.g. [13] and [12]).


In the rest of the paper we shall use notions and notations from algebraic geometry and algebraic complexity theory which are all standard (see for example [14] and [3]).

2  Quantifier elimination and its implementation

2.1  Quantifier Elimination

We start with the subject of our complexity studies: quantifier elimination in the particular case of elementary algebraic geometry over C. Let Φ be an existentially quantified formula. In general terms, the quantifier elimination problem consists in obtaining a quantifier-free formula Ψ which is logically equivalent to Φ (this means that Ψ and Φ define the same set). In the particular case of elementary algebraic geometry over C, the formulas Φ and Ψ are composed of polynomial equations. In this context we are going to consider exclusively the polynomials of these equations. Let n and r be natural numbers. Let T, U := (U1, . . . , Ur) be parameters and X := (X1, . . . , Xn) be variables subject to quantification. We focus our attention on polynomials G1(X), . . . , Gn(X) and H(T, U, X) which belong to C[X] and to C[T, U, X] respectively. These polynomials constitute a so-called Flat Family of Elimination Problems, given by the polynomial equation system G1 = 0, . . . , Gn = 0 and the polynomial H (see, e.g. [4] and [9] for details). In general terms this system represents the quantified formula Φ : (∃X1) . . . (∃Xn)(G1 = 0 ∧ . . . ∧ Gn = 0 ∧ H − Y = 0). On the other hand, there exists a polynomial F ∈ C[T, U, Y] of minimal degree, called the associated Elimination Polynomial, such that the equation F = 0 represents a quantifier-free formula Ψ which is equivalent to Φ. Thus we arrive at a functional requirement: the flat family of elimination problems given by G1 = 0, . . . , Gn = 0 and H becomes transformed into the elimination polynomial F. This transformation is carried out by a mathematical function f, as Fig. 1 illustrates.

Φ : (∃X1) . . . (∃Xn)(G1 = 0 ∧ . . . ∧ Gn = 0 ∧ H − Y = 0)  --f-->  Ψ : F = 0
              (quantified formula)                           (quantifier-free formula)

Fig. 1: Quantifier elimination problem.
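A minimal instance of Fig. 1 (our own illustration, not taken from [9]): take n = 1, G1 := X1² − T and H := X1. Then

```latex
\Phi:\ (\exists X_1)\,\bigl(X_1^2 - T = 0 \;\wedge\; Y - X_1 = 0\bigr)
\quad\Longleftrightarrow\quad
\Psi:\ F = 0, \qquad F(T,Y) := Y^2 - T,
```

since the witnesses X1 = ±√T are exactly the values Y satisfying Y² = T, and Y² − T has minimal degree among the polynomials defining Ψ.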

At this abstract level we do not know, for example, how the polynomials are implemented in the computer. We now turn to these implementation details. One implementation option is to represent polynomials by their coefficients. Unfortunately, the coefficient representation of some elimination polynomials
may lead to complexity blow-ups, e.g. the Pochhammer polynomial

    ∏_{0 ≤ j < 2^n} (Y − j),

which has 2^n terms (see [7] for open questions in complexity theory related to this polynomial). This circumstance suggests representing polynomials in elimination algorithms not by their coefficients but by arithmetic circuits. This idea became fully realised by the "Kronecker" algorithm for the resolution of polynomial equation systems over algebraically closed fields. The algorithm was anticipated in [6] and implemented in a software package of identical name (see [11]). The following example illustrates the notion of arithmetic circuit.

Example 2 (Horner scheme). Let a1, a2, a3 be constants and X be a variable. Consider the polynomial p(X) = a1 + a2 X + a3 X² and the Horner scheme of this polynomial, which is q(X) = a1 + (a2 + a3 X)X. From this scheme we obtain a directed acyclic graph where each node is an arithmetic operation +, ∗, a constant a1, a2, a3 or a variable X. This arithmetic circuit is a concrete object implementing the abstract object q(X). Fig. 2 illustrates the relation between q(X) and its implementing circuit by means of an abstraction function Abs.

Fig. 2: Arithmetic circuit and Horner scheme (the circuit with nodes +, ∗, a1, a2, a3, X and the abstraction function Abs; diagram omitted in this text rendering).

2.2  Implementation of quantifier elimination

To understand the role of arithmetic circuits in elimination algorithms, we fix the notion of polynomials in terms of abstract data types and the classes implementing them. Here we follow the terminology in [13]. Suppose that we have an abstract data type specification for polynomials in terms of query and creator functions (observers and constructors in the terminology of [12]). Thus the elimination problem of Fig. 1 may be expressed as a specification in terms of abstract data types. Consider now the classes implementing the abstract data type of polynomials. We have a class for polynomials and a class for circuits. The connection between these two classes is that the class of circuits is a private part of the class of polynomials. This private part is used to implement the interface of the class of polynomials in terms of circuits. In this context polynomials are encapsulated circuits, which are mapped onto instances of the abstract data type of polynomials by an abstraction function Abs. Now recall our functional requirement: transform an elimination problem given by polynomials G1, . . . , Gn and H into an elimination polynomial F. Since
polynomials become implemented by circuits, an elimination algorithm works directly with circuits, taking care to satisfy the class invariants and the abstraction function Abs. In this sense, an elimination algorithm A transforms an input circuit β representing G1, . . . , Gn, H into an output circuit γ representing the elimination polynomial F, as Fig. 3 illustrates.

    G1, . . . , Gn, H  --f-->  F
          ^                    ^
          | Abs                | Abs
          |                    |
          β  ------A------->   γ

Fig. 3: Elimination problem and its implementation.

The transformation of β into γ is carried out by means of circuit operations, e.g. the join of circuits, which mimics the composition of functions (see also the union of circuits and recursive routines in [9]). If we require the algorithm A to be branching parsimonious (see Section 3.1 below), then A captures all known circuit-based elimination algorithms, including the polynomial equation solver Kronecker. At this point the question is how we measure the complexity of the algorithm A. We shall mainly be concerned with the size of the output circuit γ. Here "size" refers to the number of internal nodes which count for the given complexity measure. Our basic complexity measure is the non-scalar one (also called the Ostrowski measure) over the ground field C. This means that we count, at unit cost, only essential multiplications and divisions (see [3] for details).
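The ingredients just described (a circuit as a private representation behind a polynomial interface, join as composition, and the non-scalar size measure) can be sketched as follows. The class and method names are our own invention for illustration; this is not the Kronecker implementation:

```java
import java.util.HashSet;
import java.util.Set;

public class CircuitDemo {
    // A node of an arithmetic circuit (a DAG, so nodes may be shared).
    static final class Node {
        enum Op { CONST, VAR, ADD, MUL }
        final Op op; final Node l, r; final String name;
        Node(Op op, Node l, Node r, String name) { this.op = op; this.l = l; this.r = r; this.name = name; }
        static Node var(String n)       { return new Node(Op.VAR, null, null, n); }
        static Node add(Node a, Node b) { return new Node(Op.ADD, a, b, null); }
        static Node mul(Node a, Node b) { return new Node(Op.MUL, a, b, null); }
    }

    // Polynomial hides its circuit: only the interface is visible (information hiding).
    static final class Polynomial {
        private final Node out;                      // private circuit representation
        Polynomial(Node out) { this.out = out; }

        // Non-scalar (Ostrowski) size: count only multiplication nodes, each once.
        int nonScalarSize() { return count(out, new HashSet<>()); }
        private static int count(Node n, Set<Node> seen) {
            if (n == null || !seen.add(n)) return 0;
            return (n.op == Node.Op.MUL ? 1 : 0) + count(n.l, seen) + count(n.r, seen);
        }

        // Join: plug this circuit's output into the variable v of g's circuit,
        // mimicking the composition of the represented functions.
        Polynomial join(Polynomial g, String v) { return new Polynomial(subst(g.out, v, this.out)); }
        private static Node subst(Node n, String v, Node repl) {
            if (n.op == Node.Op.VAR) return v.equals(n.name) ? repl : n;
            if (n.op == Node.Op.CONST) return n;
            return new Node(n.op, subst(n.l, v, repl), subst(n.r, v, repl), null);
        }
    }

    public static void main(String[] args) {
        Node x = Node.var("X");
        Polynomial p = new Polynomial(Node.mul(x, x));        // p(X) = X*X
        System.out.println(p.nonScalarSize());                // 1
        // p joined into itself represents X^4; node sharing keeps it at 2 multiplications.
        System.out.println(p.join(p, "X").nonScalarSize());   // 2
    }
}
```

The last line shows why circuits can be exponentially more succinct than coefficient vectors: repeated joining squares the represented polynomial while adding only one multiplication node per step.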

3  Software engineering-based approaches to complexity lower bounds

3.1  A circuit-based computation model

The polynomials G1, . . . , Gn, H and F described before belong to the mathematical structures C[X], C[T, U, X] and C[T, U, Y] respectively. In these mathematical structures polynomials have a natural property called geometrical robustness which, interpreted as a non-functional requirement, constitutes a key ingredient of our complexity result (see Theorem 1 below). This property is invisible if we only consider abstract data type specifications in the sense of [13]. Thus, in order to include geometrical robustness in the specification of elimination problems, we model the notion of an abstract data type of polynomials by the corresponding mathematical structure, and we call this structure an abstract data type. For example, the polynomials G1, . . . , Gn, H will be instances of the abstract data type O ⊂ C[T, U, X] and F will be an instance of the abstract
data type O∗ ⊂ C[T, U, Y], and the elimination problem will be specified by a geometrically robust map f : O → O∗.

Geometrical robustness. The map f is a function (a mathematical application) which we require to be constructible, i.e. definable by a boolean combination of polynomial equations. The map is called geometrically robust if it is continuous (see [9] and [5] for an algebraic characterisation of robustness). Since geometrical robustness is a property belonging to the specification level of our elimination task, we have to describe how this non-functional property is realised by the circuit-based algorithms implementing the elimination.

Branching parsimoniousness. The intuitive meaning of geometrical robustness is reflected by the algorithmic notion of branching parsimoniousness. We call an algorithm branching parsimonious if it avoids unnecessary branchings. We may restrict branchings by considering only division-free circuits, or circuits where divisions by zero are replaced by suitable limits and divisions may only involve parameter nodes (nodes without variables). In this sense our circuits are essentially division-free, and they will be called robust if all intermediate results (the functions represented by each node) are geometrically robust.

Branching parsimoniousness as a tactic. In the context of software architecture, the satisfaction of quality attributes requires techniques which are called tactics. For example, a system is easily modified when it is structured, modularised and well documented. A tactic is, according to [2], a design decision that influences the control of a quality attribute response. Following this definition, we may describe branching parsimoniousness as a tactic for elimination algorithms: we require an algorithm to be branching parsimonious in order to achieve the non-functional requirement of geometrical robustness. In this sense we say that branching parsimoniousness is a tactic to achieve geometrical robustness. For example, the reader may compare branching parsimoniousness with modularity, which is a tactic to achieve the modifiability quality attribute.

Now recall our elimination algorithm A of Fig. 3, which transforms the circuit β (representing G1, . . . , Gn, H) into the circuit γ (representing F). The elimination algorithm A implements the additional property of geometrical robustness if we require A to be branching parsimonious. Thus on the input side we have an essentially division-free, robust parameterized arithmetic circuit β of size O(n), with basic parameters T, U := U1, . . . , Un and input X := X1, . . . , Xn, which computes polynomials G1, . . . , Gn ∈ C[X] and H ∈ C[T, U, X] constituting a flat family of zero-dimensional elimination problems with associated elimination polynomial F ∈ C[T, U, Y]. Branching parsimoniousness allows us to affirm that each circuit operation yields a robust circuit. Thus we conclude that the property of geometrical robustness is transmitted from the input β to the output γ. Then γ := A(β) is an essentially division-free, robust parameterized arithmetic circuit with basic parameters T, U1, . . . , Un and input Y representing the elimination polynomial F.


These notations and assumptions, in particular the property of robustness of the output γ, allow us to conclude the following theorem.

Theorem 1 ([9], Theorem 10). The circuit γ has, as an ordinary arithmetic circuit over C, non-scalar size at least Ω(2^n).

Theorem 1 concerns circuit-based algorithms. We now ask: what happens if we find a representation which is more efficient than arithmetic circuits? We argue that Information Hiding-based algorithms have the same complexity status. This implies that our complexity results are valid for arbitrary continuous representations. This is part of future work, but we give preliminary results in the following section.

3.2  Towards an Information Hiding-based computation model

Since the polynomials G1, . . . , Gn, H and F are objects belonging to suitable abstract data types, we may define the function f of Fig. 3 in terms of the query and creator functions (observers and constructors) of the given abstract data type specification, obtaining a transformation which does not involve circuits directly, because they become encapsulated. To illustrate this kind of transformation, consider the following example.

Example 3. Suppose a case where f is the identity function on binary trees. In this context let us consider the following abstract functions of the corresponding abstract data type specification: root(), left(), right() and isNil?() as query functions (observers), and bin() and nil() as creator functions (constructors). Then we propose the following definition for f:

    f(X) = nil()                                        if isNil?(X)
    f(X) = bin(root(X), id(left(X)), id(right(X)))      otherwise        (1)

This specification of the function f may be implemented in such a way that, at an abstract level, the implementation is the identity function on binary trees, whereas at a concrete level the implementation is a transformation of the representation of binary trees (compare this with the transformation of circuits in the elimination algorithm A). This hidden transformation is carried out by the classes implementing the abstract data type of binary trees, where the implementation of f may be called f. Notice that we write the implementation in verbatim font in order to distinguish it from abstract data type expressions, which we write in cursive font. Let Tree<E> be a class implementing the abstract data type of binary trees. Let Tree1<E> and Tree2<E> be subclasses of the class Tree<E> with the following property: Tree1<E> implements trees as arrays (the internal representation of trees is given by arrays) and Tree2<E> implements trees as nested nodes. Let root, left, right and isNil be routines in the class Tree<E> implementing the corresponding query functions (observers) of the abstract data type specification.


Let p1 be a variable of type E and p2 and p3 be variables of type Tree2<E>; then Tree2<E>() and Tree2<E>(p1,p2,p3) are constructors of the class Tree2<E> implementing the creator functions nil() and bin() respectively. Then the implementation in Java code is as follows:

    Tree<E> f(Tree<E> t) {
        if (t.isNil())
            return new Tree2<E>();
        else
            return new Tree2<E>(t.root(),
                                (Tree2<E>) f(t.left()),
                                (Tree2<E>) f(t.right()));
    }                                                                    (2)
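A self-contained version of this example can be assembled as follows; the paper leaves Tree1<E> and Tree2<E> unspecified, so the array layout (heap indexing) and the nested-node layout below are our own assumptions:

```java
public class TreeDemo {
    // Abstract data type interface: observers only (representation hidden).
    static abstract class Tree<E> {
        abstract boolean isNil();
        abstract E root();
        abstract Tree<E> left();
        abstract Tree<E> right();
    }

    // Tree1<E>: array (heap-style) representation; node i has children 2i+1, 2i+2.
    static class Tree1<E> extends Tree<E> {
        private final E[] heap; private final int i;
        Tree1(E[] heap) { this(heap, 0); }
        private Tree1(E[] heap, int i) { this.heap = heap; this.i = i; }
        boolean isNil() { return i >= heap.length || heap[i] == null; }
        E root() { return heap[i]; }
        Tree<E> left() { return new Tree1<>(heap, 2 * i + 1); }
        Tree<E> right() { return new Tree1<>(heap, 2 * i + 2); }
    }

    // Tree2<E>: nested-node representation.
    static class Tree2<E> extends Tree<E> {
        private final E r; private final Tree2<E> l, rt;
        Tree2() { this(null, null, null); }                                   // nil()
        Tree2(E r, Tree2<E> l, Tree2<E> rt) { this.r = r; this.l = l; this.rt = rt; } // bin()
        boolean isNil() { return r == null; }
        E root() { return r; }
        Tree<E> left() { return l; }
        Tree<E> right() { return rt; }
    }

    // The paper's routine (2): identity at the abstract level, a hidden change of
    // representation (any Tree -> Tree2) at the concrete level.
    static <E> Tree<E> f(Tree<E> t) {
        if (t.isNil()) return new Tree2<E>();
        return new Tree2<E>(t.root(), (Tree2<E>) f(t.left()), (Tree2<E>) f(t.right()));
    }

    public static void main(String[] args) {
        Tree<String> t = new Tree1<>(new String[]{"a", "b", "c"});
        Tree<String> u = f(t);                  // same abstract tree, new representation
        System.out.println(u instanceof Tree2); // true
        System.out.println(u.root() + u.left().root() + u.right().root()); // abc
    }
}
```

A client using only the observers cannot tell t and u apart, which is exactly the information-hiding point the example makes.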

Notice that the effective transformation of the representation is carried out when an instance of Tree1<E> is passed as a parameter and the constructor of the other class, say the constructor of Tree2<E>, is applied. The code in (2) illustrates the definition of an algorithm in terms of observers and constructors. In the case of elimination problems such an algorithm has a similar structure, but we do not exhibit an example here. This is left for future work (see [1]), where the notion of information hiding is modelled in full detail. Such a model allows us to conclude the following:

– if the complexity measure is given by the number of parameters instead of the size of circuits, we obtain an exponential complexity lower bound for this quantity, which implies the result of Theorem 1;
– hence elimination algorithms programmed with information hiding, i.e. hiding the circuits or any other representation of polynomials, have the same complexity status.

Final comments. The circuit-based computation model described in Section 3.1 corresponds to the tool for the software engineer we described in the introduction. Of course this model cannot be applied to every software project, since it is restricted to the particular case of elimination. However, it gives the key ingredients for the definition of a computation model suitable for complexity questions where some other non-functional requirement must be considered. On the other hand, our description of an Information Hiding-based computation model in Section 3.2 constitutes a stronger result which, together with Theorem 1, allows us to conclude that the Kronecker algorithm is asymptotically optimal. This suggests that Kronecker is a good option in applications of scientific computing where polynomial equation solving is needed.

Finally, a computation model which captures algorithms constructed in a professional way, namely applying software engineering concepts, in combination with the complexity lower bound obtained in Section 3.1, allows us to restate the following idea from [9]: neither mathematicians nor software engineers, nor a combination of them, will ever produce a practically satisfactory, generalistic software for elimination tasks in Algebraic Geometry. This is a job for hackers, who may find specific efficient solutions for particular elimination problems.


Acknowledgements. The author thanks Joos Heintz for his insistent encouragement to finish this work, and Pablo Barenbaum, Gastón Bengolea Monzón, Mariano Cerrutti, Carlos Lopez Pombo, Hvara Ocar and Alejandro Scherz, Universidad de Buenos Aires, for discussions about the topic of this paper and/or comments and ideas on earlier drafts.

References

1. Bank, B., Heintz, J., Pardo, L.M., Rojas Paredes, A.: Quiz games: A new approach to information hiding based algorithms in scientific computing. Manuscript, Universidad de Buenos Aires (2013)
2. Bass, L., Clements, P., Kazman, R.: Software Architecture in Practice. 2nd edn. Addison-Wesley, Boston, MA (2003)
3. Bürgisser, P., Clausen, M., Shokrollahi, M.A.: Algebraic Complexity Theory. Grundlehren der mathematischen Wissenschaften, vol. 315. Springer Verlag (1997)
4. Castro, D., Giusti, M., Heintz, J., Matera, G., Pardo, L.M.: The hardness of polynomial equation solving. Foundations of Computational Mathematics 3(4), 347–420 (2003)
5. Giménez, N., Heintz, J., Matera, G., Solernó, P.: Lower complexity bounds for interpolation algorithms. Journal of Complexity 27, 151–187 (2011)
6. Giusti, M., Heintz, J., Morais, J., Morgenstern, J., Pardo, L.: Straight-line programs in geometric elimination theory. Journal of Pure and Applied Algebra 124, 101–146 (1998)
7. Heintz, J., Morgenstern, J.: On the intrinsic complexity of elimination theory. Journal of Complexity 9, 471–498 (1993)
8. Heintz, J., Sieveking, M.: Absolute primality of polynomials is decidable in random polynomial time in the number of variables. Automata, Languages and Programming (Akko, 1981). Lecture Notes in Computer Science 115, 16–28 (1981)
9. Heintz, J., Kuijpers, B., Rojas Paredes, A.: Software engineering and complexity in effective algebraic geometry. Journal of Complexity (2012)
10. Kaltofen, E.: Greatest common divisors of polynomials given by straight-line programs. J. Assoc. Comput. Mach. 35(1), 231–264 (1988)
11. Lecerf, G.: Kronecker: a Magma package for polynomial system solving. Web page. http://lecerf.perso.math.cnrs.fr/software/kronecker/index.html
12. Liskov, B., Guttag, J.: Program Development in Java: Specification and Object-Oriented Design. 3rd edn. Addison-Wesley (2001)
13. Meyer, B.: Object-Oriented Software Construction. 2nd edn. Prentice-Hall (2000)
14. Shafarevich, I.R.: Basic Algebraic Geometry: Varieties in Projective Space. Springer, Berlin Heidelberg New York (1994)



On Aggregation Process in Linguistic Decision Making Framework

M. Gimenez, S. Gramajo

Artificial Intelligence Research Group, National Technological University, French 414, Resistencia, 3500, Argentina
manuelego1@gmail.com, sergio@frre.utn.edu.ar

Abstract. When solving a problem, human beings must face situations in which they have to choose among different alternatives by means of reasoning and mental processes. Many of these decision problems arise in uncertain environments involving vague, imprecise and subjective information that is usually modeled by the fuzzy linguistic approach. This approach uses linguistic information, i.e., natural language words, and relates it to the mental reasoning processes of the experts when they express their assessments. In a decision process, multiple criteria can be evaluated and multiple experts with different degrees of knowledge may be involved. Such a process can be modeled by using Multi-granular Linguistic Information (MGLI) and Computing with Words (CW) processes to solve the related decision problems. Once the decision makers (experts) have provided their opinions, it is necessary to combine them to obtain a single overall result that can be interpreted. An aggregation operator accomplishes this objective by calculating a global value in different ways. In this paper we study the use of aggregation operators in multi-criteria decision-making processes, comparing them and drawing conclusions about their use in our framework. Furthermore, we propose a new aggregation operator that takes into account the criteria importance to evaluate the alternatives; an illustrative example then shows its outcomes.

Keywords: Multi-granular Linguistic Information, Computing with Words, Aggregation operator, Decision Making.

1 Introduction

Decision making is a day-to-day activity for human beings. The multiple facets of real-world decision problems are well addressed by Multi-Criteria Decision Making (MCDM) [1]. The crucial point of interest within MCDM is the analysis and modeling of multiple decision makers' preferences, giving rise to Multi-Expert Decision Making (MEDM) [2]. In many situations, the context involves vague and probably incomplete information. In these cases, information cannot be assessed precisely in a quantitative form, and experts may feel more comfortable employing other approaches. To overcome this problem, information is normally modeled by using a fuzzy linguistic approach [3][4][5]



allowing the experts to express their opinions with words rather than numbers (e.g., when evaluating the comfort or design of a car, terms like good, medium or bad can be used). Therefore, the fuzzy linguistic approach is a technique that represents qualitative information as linguistic values by means of linguistic variables [3], that is, variables whose values are not numbers but words or sentences in a natural language. Each linguistic value is characterized by its syntax (label) and its semantics (meaning). The label is a word or sentence belonging to a linguistic term set, and the meaning is a fuzzy subset in a universe of discourse. The concept of linguistic variable provides an estimated measure, since words are less precise than numbers. This is more effective because the experts may feel more comfortable using words they really know and understand in accordance with the context of use. Also, offering different expression domains or different linguistic term sets (multi-granular information) to the experts is a suitable way to adjust to the degree of experience of each one [6][7]. An important aspect of MCDM is the aggregation process. In order to obtain a unique final result, the assessments of every expert involved must be taken into account. An aggregation operator accomplishes this objective by calculating a global value. Aggregation is the operation that transforms a set of elements, such as individual opinions on a set of alternatives, into a single element that is representative of the whole. The different ways of combining preferences have led many authors to study and design different aggregation operators, and depending on the problem different types can be used. In this paper, we focus on the aggregation process when dealing with complex decisions under uncertainty using a decision analysis process.
We will study the results of applying different aggregation operators to the same decision problem in order to draw relevant conclusions about their use in complex decision systems. This paper is organized as follows. Section 2 reviews basic concepts about the linguistic background that will be used to model uncertain and multi-granular information in our framework. Section 3 presents the phases of the decision analysis, with special emphasis on the aggregation process. Then, Section 4 proposes an example of use on investment decisions in a company. Finally, Section 5 draws some conclusions.

2 Preliminaries

Normally, decision analysis depends highly on subjective, vague and ill-structured information, and therefore a model to manage this kind of information is needed. We consider the use of the fuzzy linguistic approach [3] to model and manage the uncertainty inherent in this kind of problems, and the 2-tuple linguistic model to represent linguistic information [8]. Additionally, it is useful to manage multiple linguistic scales (multi-granular information), giving more flexibility to the different experts involved in the problem; to this end, we use the Extended Linguistic Hierarchies (ELH) method. For this reason, in this section we briefly review concepts and



methods such as the fuzzy 2-tuple linguistic model, the ELH and its computational method (aggregation process).

2.1 The 2-tuple linguistic model

When using linguistic information to solve a problem, it is necessary to use Computing with Words (CW). The main limitation of this approach is the "loss of information" suffered by the most widely used computational techniques, which implies a lack of precision in the final results. These computational models are the semantic model [9] and the symbolic model [10]. In both models an approximation process must be carried out to express the result in the initial expression domain, and this is where information is lost. The 2-tuple linguistic model [11] is a representation model that overcomes this loss of information. It represents linguistic information with a pair of values, called a 2-tuple, composed of a linguistic term and a number.

Definition 1. The Symbolic Translation of a linguistic term $s_i \in S = \{s_0,\dots,s_g\}$ is a numerical value assessed in $[-0.5, 0.5)$ that supports the "difference of information" between an amount of information $\beta \in [0, g]$ and the closest value in $\{0,\dots,g\}$ indicating the index of the closest linguistic term in $S$ ($s_i$), being $[0, g]$ the interval of granularity of $S$.

From this concept a new linguistic representation model was developed, which represents the linguistic information by means of a linguistic 2-tuple. It consists of a pair of values $(s_i, \alpha) \in \bar{S} = S \times [-0.5, 0.5)$, being $s_i \in S$ a linguistic term and $\alpha \in [-0.5, 0.5)$ a numerical value representing the symbolic translation. This representation model defines a set of transformation functions between numeric values and linguistic 2-tuples to facilitate linguistic computational processes.

Definition 2. Let $S = \{s_0,\dots,s_g\}$ be a linguistic term set and $\beta \in [0, g]$ a value supporting the result of a symbolic aggregation operation. The 2-tuple set associated with $S$ is defined as $\bar{S} = S \times [-0.5, 0.5)$. The 2-tuple that expresses the information equivalent to $\beta$ is obtained as follows:

$$\Delta : [0, g] \to \bar{S}, \qquad \Delta(\beta) = (s_i, \alpha), \ \text{with} \ \begin{cases} i = \mathrm{round}(\beta) \\ \alpha = \beta - i, \ \alpha \in [-0.5, 0.5) \end{cases} \qquad (1)$$

being $\mathrm{round}(\cdot)$ the usual rounding operation, $i$ the index of the label $s_i$ closest to $\beta$, and $\alpha$ the value of the symbolic translation. It is noteworthy that $\Delta$ is a one-to-one mapping and $\Delta^{-1} : \bar{S} \to [0, g]$ is defined by $\Delta^{-1}(s_i, \alpha) = i + \alpha$. In this way, each 2-tuple of $\bar{S}$ is identified by a numerical value in the interval $[0, g]$.

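The functions $\Delta$ and $\Delta^{-1}$ of Definition 2 are straightforward to implement. The following Python sketch (function and label names are our own, for illustration only) shows the round trip between a numerical value and a 2-tuple:

```python
def delta(beta, labels):
    """Eq. (1): map a value beta in [0, g] to the 2-tuple (s_i, alpha)."""
    i = int(round(beta))      # index of the closest linguistic label
    alpha = beta - i          # symbolic translation in [-0.5, 0.5)
    return labels[i], alpha

def delta_inv(label, alpha, labels):
    """Inverse mapping: (s_i, alpha) -> i + alpha in [0, g]."""
    return labels.index(label) + alpha

# the 7-label scale used in the example of Remark 1
S = ["Nothing", "VeryLow", "Low", "Medium", "High", "VeryHigh", "Perfect"]
print(delta(3.25, S))                 # ('Medium', 0.25)
print(delta_inv("Medium", 0.25, S))   # 3.25
```

Note that $\Delta$ only needs the index $i$, so any labelled scale of granularity $g+1$ can be plugged in unchanged.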


Remark 1. The transformation of a linguistic term into a linguistic 2-tuple consists of adding the value 0 as symbolic translation: $s_i \in S \Rightarrow (s_i, 0) \in \bar{S}$. On the other hand, $\Delta(i) = (s_i, 0)$ and $\Delta^{-1}(s_i, 0) = i$, for $i \in \{0, 1, \dots, g\}$.

For example, if $\beta = 3.25$ is the value representing the result of a symbolic aggregation operation on the set of labels $S = \{s_0 = \text{Nothing}, s_1 = \text{VeryLow}, s_2 = \text{Low}, s_3 = \text{Medium}, s_4 = \text{High}, s_5 = \text{VeryHigh}, s_6 = \text{Perfect}\}$, then the 2-tuple that expresses the information equivalent to $\beta$ is $(\text{Medium}, 0.25)$. This model has a linguistic computational technique based on the functions $\Delta$ and $\Delta^{-1}$; for further details see [12].

2.2 Extended Linguistic Hierarchies

A flexible expression domain with several linguistic scales is necessary for the experts to express their assessments according to their degree of knowledge about the problem. Different approaches dealing with multi-granular linguistic information have been proposed. In this paper we use the ELH approach [13] to model and manage multi-granular linguistic information because of its flexibility and accuracy in the processes of computing with words (CW) in multi-granular linguistic contexts. An ELH is a set of levels, where each level represents a linguistic term set with a granularity different from the remaining levels of the ELH. Each level belonging to an ELH is denoted as $l(t, n(t))$, being $t$ a number that indicates the level of the ELH and $n(t)$ the granularity of the term set of level $t$. To build an ELH, a set of extended hierarchical rules has been proposed:

Rule 1: A finite set of levels $l(t, n(t))$, with $t = 1,\dots,m$, that defines the multi-granular linguistic context required by the experts to express their assessments, is included.

Rule 2: To obtain an ELH, a new level $l(t^*, n(t^*))$ with $t^* = m + 1$ should be added. This new level must have the following granularity:

$$n(t^*) = \mathrm{L.C.M.}(n(1) - 1, \dots, n(m) - 1) + 1 \qquad (2)$$

being L.C.M. the Least Common Multiple. The ELH building process then consists of two steps: i) it adds the $m$ linguistic scales used by the experts to express their information; and ii) it adds the term set $l(t^*, n(t^*))$, with $t^* = m + 1$, according to Eq. (2). Therefore, the ELH is the union of all levels required by the experts plus the new level $l(t^*, n(t^*))$:

$$ELH = \bigcup_{t=1}^{m+1} l(t, n(t))$$

The use of multi-granular linguistic information makes the processes of CW more complex. The ELH computational model requires a three-step process.



1. Unification phase. The multi-granular linguistic information is conducted into only one linguistic term set, which in an ELH is always $S^{n(t^*)}$, by means of a transformation function $TF_a^b(\cdot)$:

Definition 3. Let $S^{n(a)} = \{s_0^{n(a)},\dots,s_{n(a)-1}^{n(a)}\}$ and $S^{n(b)} = \{s_0^{n(b)},\dots,s_{n(b)-1}^{n(b)}\}$ be two linguistic term sets, with $a \neq b$. The linguistic transformation function is defined as:

$$TF_a^b : \bar{S}^{n(a)} \to \bar{S}^{n(b)}, \qquad TF_a^b(s_j^{n(a)}, \alpha_j^{n(a)}) = \Delta\left(\frac{\Delta^{-1}(s_j^{n(a)}, \alpha_j^{n(a)}) \cdot (n(b) - 1)}{n(a) - 1}\right) = (s_k^{n(b)}, \alpha_k^{n(b)}) \qquad (3)$$

2. Computational process. Once the information is expressed in the single expression domain $S^{n(t^*)}$, the computations are carried out by using the linguistic 2-tuple model.

3. Expressing results. In this step the results may be transformed into any level $t$ of the ELH in a precise way by using Eq. (3) to improve the understanding of the results, if necessary.

Remark 2. In the processes of CW with information assessed in an ELH, the linguistic transformation function $TF_a^b$ performed in the unification phase may take $a$ as any level in the set $\{t = 1,\dots,m\}$, while the computational processes are carried out in the level $b$ that is always the level $t^*$ (see Eq. (3)). It was proved in [13] that the transformation functions between linguistic terms in different levels of the Extended Linguistic Hierarchy are carried out without loss of information.

2.3 Aggregation process

Aggregation operators compute a global value from a set of values in order to obtain a unique final value. Here we have analyzed four kinds of aggregation operators: the Geometric Mean Aggregation Operator (GMAO), the Arithmetic Mean Aggregation Operator (AMAO) and the Weighted Aggregation Operator (WAO), which is based on the weight of either the experts (WAO) or the criteria (WAOC).

Definition 4. GMAO. Let $((l_1, \alpha_1),\dots,(l_m, \alpha_m)) \in \bar{S}^m$ be a linguistic 2-tuple vector. The geometric mean operator $G : \bar{S}^m \to \bar{S}$ is defined as follows:

$$G((l_1, \alpha_1),\dots,(l_m, \alpha_m)) = \Delta\left(\left(\prod_{i=1}^{m} \Delta^{-1}(l_i, \alpha_i)\right)^{\frac{1}{m}}\right) = \Delta\left(\left(\prod_{i=1}^{m} \beta_i\right)^{\frac{1}{m}}\right) \qquad (4)$$

Definition 5. AMAO. Let $((l_1, \alpha_1),\dots,(l_n, \alpha_n)) \in \bar{S}^n$ be a linguistic 2-tuple vector. The arithmetic mean operator $G : \bar{S}^n \to \bar{S}$ is defined as follows:

$$G[(l_1, \alpha_1),\dots,(l_n, \alpha_n)] = \Delta\left(\frac{1}{n}\sum_{j=1}^{n} \Delta^{-1}(l_j, \alpha_j)\right) = \Delta\left(\frac{1}{n}\sum_{j=1}^{n} \beta_j\right) \qquad (5)$$
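Both operators act on the numerical values $\beta_i = \Delta^{-1}(l_i, \alpha_i)$ and map the mean back with $\Delta$. A minimal Python sketch over label indices (the helper names are ours, not from the paper):

```python
import math

def delta(beta):
    """Eq. (1) on label indices: beta -> (i, alpha)."""
    i = int(round(beta))
    return i, beta - i

def gmao(betas):
    """Eq. (4): geometric mean of the unified beta values."""
    return delta(math.prod(betas) ** (1.0 / len(betas)))

def amao(betas):
    """Eq. (5): arithmetic mean of the unified beta values."""
    return delta(sum(betas) / len(betas))

i, alpha = amao([2.0, 3.0, 5.0])   # mean 10/3 -> index 3, alpha close to 1/3
i, alpha = gmao([2.0, 3.0, 4.5])   # (2*3*4.5)^(1/3) = 3 -> index 3, alpha close to 0
```

The geometric mean penalizes very low assessments more strongly than the arithmetic mean, which is one reason the two operators can produce different rankings.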



A rational assumption about the resolution process is to assign more importance to the experts who have more knowledge or experience. These values can be interpreted as the importance degree, competence, knowledge or ability of the experts. In addition, some experts may have difficulties in giving all their assessments due to lack of knowledge about part of the problem. Therefore, besides the use of different scales, the experts' assessments should be combined in a differentiated way by means of a weighted aggregation operator.

Definition 6. WAO. Let $((l_1, \alpha_1),\dots,(l_m, \alpha_m)) \in \bar{S}^m$ be a vector of linguistic 2-tuples and $w = (w_1,\dots,w_m) \in [0,1]^m$ be a weighting vector such that $\sum_{i=1}^{m} w_i = 1$. The 2-tuple WAO associated with $w$ is the function $G^w : \bar{S}^m \to \bar{S}$ defined by:

$$G^w[(l_1, \alpha_1),\dots,(l_m, \alpha_m)] = \Delta\left(\sum_{i=1}^{m} w_i \beta_i\right) \qquad (6)$$
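The weighted operator replaces the plain mean with a dot product. A sketch, using three expert weights purely as an illustration:

```python
def delta(beta):
    """Eq. (1) on label indices: beta -> (i, alpha)."""
    i = int(round(beta))
    return i, beta - i

def wao(betas, weights):
    """Eq. (6): Delta of the weighted sum of beta values."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return delta(sum(w * b for w, b in zip(weights, betas)))

# three experts weighted 0.5 / 0.3 / 0.2
i, alpha = wao([6.0, 5.0, 4.0], [0.5, 0.3, 0.2])
# weighted beta = 3.0 + 1.5 + 0.8 = 5.3 -> index 5, alpha close to 0.3
```

When all weights equal $1/m$, WAO reduces to AMAO, which is why the two only diverge noticeably when the expert weights are markedly unequal.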

In the same way that experts have importance, criteria may also have it. In this sense, we obtain the importance of the criteria based on the potencies method. This method takes into account the importance of each criterion in the problem solution using a vector of importance with defined values for every criterion involved. When working with linguistic information there is no direct method for comparing criteria in order to obtain this vector of importance. Accordingly, it is necessary to build the comparison matrix between criteria and then calculate the weighting vector based on criteria importance. The matrix $A_{n \times n}$ that represents the pairwise comparison between criteria is obtained from the experts' judgments about the criteria. Then the weighting vector $\omega$, which represents the weight held by each criterion in the decision process, is obtained from $A_{n \times n}$ as explained in [14].

Definition 7. WAOC. Let $((l_1, \alpha_1),\dots,(l_n, \alpha_n)) \in \bar{S}^n$ be a vector of linguistic 2-tuples and $\omega = (\omega_1,\dots,\omega_n) \in [0,1]^n$ be a weighting vector based on the criteria importance such that $\sum_{j=1}^{n} \omega_j = 1$. The 2-tuple aggregation operator associated with $\omega$ is the function $G^{\omega} : \bar{S}^n \to \bar{S}$ defined by:

$$G^{\omega}[(l_1, \alpha_1),\dots,(l_n, \alpha_n)] = \Delta\left(\sum_{j=1}^{n} \omega_j \beta_j\right) \qquad (7)$$

3 Decision analysis process

The linguistic decision analysis process consists of several phases, described below.

Phase 1. Data definition: It defines the evaluation context in which experts will express their preferences. Linguistic descriptors and their semantics are chosen, and each potential solution of the problem (alternative) is identified. It also determines the criteria to evaluate every alternative and the experts involved in the decision process. In order to allow different expression domains for multiple experts, the linguistic term sets used are organized into an ELH. Therefore, let us consider:
- A finite set of alternatives $X = \{x_k, k = 1,\dots,q\}$.
- A finite set of criteria $C = \{c_j, j = 1,\dots,n\}$.
- A finite set of experts $E = \{e_i, i = 1,\dots,m\}$ that express their assessments by using different linguistic scales of information in the ELH.

Phase 2. Information gathering: Experts provide their linguistic assessments in utility vectors for each criterion of the evaluated alternatives. The experts express their assessments on every criterion of every alternative using their linguistic term set in the ELH. Since our framework uses the linguistic 2-tuple computing model, the linguistic preferences provided by the experts are transformed into linguistic 2-tuples according to Remark 1.

Phase 3. Computational process: This phase consists of three steps to obtain a global value for each alternative:
- Unification of MGLI. Since the experts provide their assessments in different linguistic scales, it is necessary to transform each assessment into a unique expression domain, the so-called $t^*$, whose granularity is given by Eq. (2). Thus, the transformation target must be the last level of the ELH, according to Eq. (3). Once the information has been unified, it is expressed by means of linguistic 2-tuples in $S^{n(t^*)}$.
In order to obtain the global value for each alternative, the information must be aggregated. In our framework we use four different aggregation operators and the process is performed in two levels:
- Expert aggregation level: The first aggregation step obtains a collective value from all the experts' assessments. Here it is possible to choose between GMAO and WAO.
- Criteria aggregation level: The second one computes a global value for each alternative from the results obtained in the previous step. Here it is possible to choose between AMAO and WAOC.
Figure 1 shows the possible combinations of operators.
Fig. 1. Framework aggregation operators (four combinations: GMAO or WAO based on experts at the expert aggregation level, each combined with AMAO or WAO based on criteria at the criteria aggregation level)
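The unification step of Phase 3 is Eq. (3) applied assessment by assessment. A Python sketch, assuming two scales of 5 and 7 labels unified into a 13-label level $t^*$ (the function names are ours):

```python
def delta(beta):
    i = int(round(beta))
    return i, beta - i

def unify(index, alpha, n_src, n_dst=13):
    """Eq. (3): rescale a 2-tuple from an n_src-label scale to n_dst labels."""
    beta = (index + alpha) * (n_dst - 1) / (n_src - 1)
    return delta(beta)

# (G, 0): index 3 on a 5-label scale -> beta = 3 * 12/4 = 9
print(unify(3, 0.0, n_src=5))   # (9, 0.0)
# (E, 0): index 6 on a 7-label scale -> beta = 6 * 12/6 = 12
print(unify(6, 0.0, n_src=7))   # (12, 0.0)
```

Because the target granularity satisfies Eq. (2), every source label lands exactly on a target index with zero symbolic translation, which is what makes the unification lossless.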

Phase 4. Results presentation: Final values are presented on an ordered scale as a ranking of preferences from the most suitable to the least convenient alternative.



4 Illustrative example

We consider the decision process to acquire software in an organization. Deciding where to obtain the software from implies choosing a supply channel. Each particular acquisition channel has advantages and drawbacks, and experts often do not reach an agreement. In order to satisfy this need, the CEO has arranged a meeting with the three main software experts in the organization: the CIO, the head of the development department and the head of the data management department. The objective of this meeting is to determine which of the three channels available for software supply is the most suitable for the company. There are three main channels to obtain software: internal development, external development, and buying a standard package. In internal development, the organization's IT department builds the needed software solution. External development means acquiring it through an external software development consultancy. Buying a standard package, i.e., acquiring a general-purpose standard software package, is one of the fastest ways of satisfying software needs. To choose among them, three criteria should be evaluated: how well the software meets the necessary requirements, the ease of implementing changes and growth, and the development time. Therefore, in Phase 1 we have the following:

A set of experts $E = \{E_1 = \text{CIO}, E_2 = \text{Head of development department}, E_3 = \text{Head of data management department}\}$.

A set of alternatives $A = \{A_1 = \text{Internal development}, A_2 = \text{External development}, A_3 = \text{Buy a standard package}\}$.

A set of criteria $C = \{C_1 = \text{Satisfied requirements}, C_2 = \text{Facility of implementing changes}, C_3 = \text{Development time}\}$.

An ELH with two linguistic term sets:

$S_1 = \{VB = \text{Very Bad}, B = \text{Bad}, M = \text{Medium}, G = \text{Good}, VG = \text{Very Good}\}$

$S_2 = \{W = \text{Worst}, VB = \text{Very Bad}, B = \text{Bad}, M = \text{Medium}, G = \text{Good}, VG = \text{Very Good}, E = \text{Excellent}\}$

Besides, a new level $t^*$ is added in accordance with Eq. (2).

Table 1. Phase 2. Information gathering

                A1                        A2                        A3
        C1      C2      C3        C1      C2      C3        C1      C2      C3
E1    (E,0)   (VG,0)  (B,0)     (VG,0)  (M,0)   (VG,0)    (M,0)   (W,0)   (E,0)
E2    (VG,0)  (VG,0)  (VB,0)    (E,0)   (G,0)   (M,0)     (M,0)   (B,0)   (VG,0)
E3    (VG,0)  (G,0)   (VB,0)    (G,0)   (M,0)   (M,0)     (M,0)   (VB,0)  (VG,0)



Table 2. Criteria comparison

            E1                  E2                  E3
        C1   C2   C3        C1   C2   C3        C1   C2   C3
C1       1    5   1/3        1    7    5         1    7    4
C2     1/5    1   1/9      1/7    1   1/2      1/7    1   1/2
C3       3    9    1       1/5    2    1       1/4    2    1

Table 3. Criteria weight vector (ω)

Criterion   Weight
C1          0.2654
C2          0.0629
C3          0.6716

Table 4. Experts weight vector (w)

Expert   Weight
E1       0.5
E2       0.3
E3       0.2
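As a sketch of how a criteria weight vector like Table 3 can be derived from a pairwise comparison matrix like those in Table 2, the following Python code approximates the principal eigenvector of a single matrix by repeated squaring and row-sum normalization (one common reading of the potencies method in [14]); the helper name is ours, and for $E_1$'s matrix the result happens to be close to the ω vector of Table 3:

```python
def criteria_weights(A, tol=1e-6, max_iter=20):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by squaring it and normalizing row sums until stable."""
    n = len(A)
    prev = None
    for _ in range(max_iter):
        # A <- A @ A (matrix squaring)
        A = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        rows = [sum(row) for row in A]
        total = sum(rows)
        w = [r / total for r in rows]
        # rescale A to keep the entries bounded (direction is unchanged)
        A = [[a / total for a in row] for row in A]
        if prev and max(abs(x - y) for x, y in zip(w, prev)) < tol:
            break
        prev = w
    return w

# E1's judgments from Table 2, used as illustrative input only
w = criteria_weights([[1, 5, 1/3], [1/5, 1, 1/9], [3, 9, 1]])
# weights sum to 1; C3 dominates, C2 is least important
```

The paper's actual Table 3 combines the judgments of all three experts, a step we do not reproduce here.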

Table 5. GMAO and AMAO results

Alternative   Percentage
A2            38.54%
A1            33.67%
A3            27.79%

Table 6. WAO with experts weighting and AMAO results

Alternative   Percentage
A2            37.12%
A1            33.73%
A3            29.15%

Table 7. GMAO and WAO based on criteria importance

Alternative   Percentage
A3            44.34%
A2            38.25%
A1            17.42%

Table 8. WAO with experts weighting and WAO based on criteria importance

Alternative   Percentage
A3            42.62%
A2            35.98%
A1            21.41%

Bearing in mind the first step of aggregation, our framework allows using GMAO and WAO in accordance with the weighting vector shown in Table 4. Then, in the second aggregation step, we use AMAO and WAO based on criteria importance (see Table 3). In Tables 5 to 8, results are expressed as percentages for better understanding. When GMAO and AMAO are computed (see Table 5), the results are similar to those of Table 6, which uses WAO with experts weighting and AMAO. A slight difference can be seen between both, but the order of importance is the same. The weighted operator (WAO) introduces a new parameter, the importance weight of the experts, allowing greater differentiation between the final results by eliminating the equal importance of the opinions. Thus, decision makers have more accurate values with better differentiation between them. On the contrary, in Tables 7 and 8 the operator used for the second level of aggregation was the weighted vector based on criteria importance. Here the priority ranking changes significantly, because the importance vector modifies the last criteria aggregation step, allocating the highest values to the most important criterion and reducing the values of the others. Furthermore, the values obtained in Table 8 also take into account the weight of the experts.

5 Conclusion

Aggregation refers to the process of combining several values into a single one, so that the final result takes into account, in a given manner, all the individual values. Such an operation is used in many well-known disciplines such as Multi-Criteria Multi-Expert Decision Making. In order to reach good results in the decision process, classical synthesizing functions have been proposed: the arithmetic mean, the geometric mean, the median and many others. In this paper we present a linguistic framework that allows analyzing different decision results by using several aggregation operators. In this regard, we also propose computing the criteria importance based on the potencies method with the Saaty scale. Currently, the framework's computation capability is being expanded with different aggregation operators, such as the Ordered Weighted Averaging (OWA) family of operators. In addition, we are comparing different methodologies and decision-making approaches such as the Analytic Hierarchy Process (AHP).

References

1. Figueira, J., Greco, S., Ehrgott, M.: Multiple Criteria Decision Analysis: State of the Art Surveys. Kluwer Academic Publishers, Boston/Dordrecht/London (2005)
2. Clemen, R.: Making Hard Decisions. An Introduction to Decision Analysis. Duxbury Press (1995)
3. Zadeh, L.: The concept of a linguistic variable and its applications to approximate reasoning. Information Sciences, Parts I, II, III (8, 9) (1975) 199–249, 301–357, 43–80
4. Dong, Y., Xu, Y., Yu, S.: Linguistic multiperson decision making based on the use of multiple preference relations. Fuzzy Sets and Systems 160(5) (2009) 603–623
5. Delgado, M., Verdegay, J., Vila, M.: Linguistic decision-making models. International Journal of Intelligent Systems 7(5) (1992) 479–492
6. Herrera, F., Herrera-Viedma, E., Martínez, L.: A fusion approach for managing multi-granularity linguistic term sets in decision making. Fuzzy Sets and Systems 114(1) (2000) 43–58
7. Herrera, F., Herrera-Viedma, E., Verdegay, J.: A linguistic decision process in group decision making. Group Decision and Negotiation 5 (1996) 165–176
8. Martínez, L., Herrera, F.: An overview on the 2-tuple linguistic model for computing with words in decision making: Extensions, applications and challenges. Information Sciences 207 (2012) 1–18



9. Degani, R., Bortolan, G.: The problem of linguistic approximation in clinical decision making. International Journal of Approximate Reasoning 2 (1988) 143–162
10. Delgado, M., Verdegay, J., Vila, M.: On aggregation operations of linguistic labels. International Journal of Intelligent Systems 8(3) (1993) 351–370
11. Herrera, F., Martínez, L.: A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Transactions on Fuzzy Systems 8(6) (2000) 746–752
12. Herrera, F., Martínez, L.: The 2-tuple linguistic computational model. Advantages of its linguistic description, accuracy and consistency. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 9 (2001) 33–48
13. Espinilla, M., Liu, J., Martínez, L.: An extended hierarchical linguistic model for decision-making problems. Computational Intelligence 27(3) (2011) 489–512
14. Saaty, R.W.: The analytic hierarchy process: What it is and how it is used. Mathematical Modelling 9(3–5) (1987) 161–176



Declared of Municipal Interest by the Honorable Deliberative Council of the Partido de General Pueyrredon

