VOLUMEN 16 NÚMERO 1 ENERO A JUNIO DE 2012 ISSN: 1870-6525
Morfismos Departamento de Matemáticas Cinvestav
Chief Editors - Editores Generales • Isidoro Gitler • Jesús González
Associate Editors - Editores Asociados • Ruy Fabila • Ismael Hernández • Onésimo Hernández-Lerma • Héctor Jasso Fuentes • Sadok Kallel • Miguel Maldonado • Carlos Pacheco • Enrique Ramírez de Arellano • Enrique Reyes • Dai Tamaki • Enrique Torres Giese
Apoyo Técnico • Juan Carlos Castro Contreras • Irving Josué Flores Romero • Omar Hernández Orozco • Roxana Martínez • Laura Valencia

Morfismos está disponible en la dirección http://www.morfismos.cinvestav.mx. Para mayores informes dirigirse al teléfono +52 (55) 5747-3871. Toda correspondencia debe ir dirigida a la Sra. Laura Valencia, Departamento de Matemáticas del Cinvestav, Apartado Postal 14-740, México, D.F. 07000, o por correo electrónico a la dirección: morfismos@math.cinvestav.mx.
Morfismos, Volumen 16, Número 1, enero a junio de 2012, es una publicación semestral editada por el Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional (Cinvestav), a través del Departamento de Matemáticas. Av. Instituto Politécnico Nacional No. 2508, Col. San Pedro Zacatenco, Delegación Gustavo A. Madero, C.P. 07360, D.F., Tel. 55-57473800, www.cinvestav.mx, morfismos@math.cinvestav.mx, Editores Generales: Drs. Isidoro Gitler Golwain y Jesús González Espino Barros. Reserva de Derechos No. 04-2012-011011542900-102, ISSN: 1870-6525, ambos otorgados por el Instituto Nacional del Derecho de Autor. Certificado de Licitud de Título No. 14729, Certificado de Licitud de Contenido No. 12302, ambos otorgados por la Comisión Calificadora de Publicaciones y Revistas Ilustradas de la Secretaría de Gobernación. Impreso por el Departamento de Matemáticas del Cinvestav, Avenida Instituto Politécnico Nacional 2508, Colonia San Pedro Zacatenco, C.P. 07360, México, D.F. Este número se terminó de imprimir en septiembre de 2012 con un tiraje de 50 ejemplares. Las opiniones expresadas por los autores no necesariamente reflejan la postura de los editores de la publicación. Queda estrictamente prohibida la reproducción total o parcial de los contenidos e imágenes de la publicación, sin previa autorización del Cinvestav.
Information for Authors

The Editorial Board of Morfismos calls for papers on mathematics and related areas to be submitted for publication in this journal under the following guidelines:

• Manuscripts should fit in one of the following three categories: (a) papers covering the graduate work of a student, (b) contributed papers, and (c) invited papers by leading scientists. Each paper published in Morfismos will be posted with an indication of which of these three categories the paper belongs to.

• Papers in category (a) might be written in Spanish; all other papers proposed for publication in Morfismos shall be written in English, except those that the Editorial Board decides to publish in another language.

• All received manuscripts will be refereed by specialists.
• In the case of papers covering the graduate work of a student, the author should provide the supervisor's name and affiliation, the date of completion of the degree, and the institution granting it.

• Authors may retrieve the LaTeX macros used for Morfismos through the web site http://www.math.cinvestav.mx, at "Revista Morfismos". The use of these macros by authors helps expedite the production process of accepted papers.

• All illustrations must be of professional quality.
• Authors will receive the pdf file of their published paper.
• Manuscripts submitted for publication in Morfismos should be sent to the email address morfismos@math.cinvestav.mx.
Información para Autores

El Consejo Editorial de Morfismos convoca a proponer artículos en matemáticas y áreas relacionadas para ser publicados en esta revista bajo los siguientes lineamientos:

• Se considerarán tres tipos de trabajos: (a) artículos derivados de tesis de grado de alta calidad, (b) artículos por contribución y (c) artículos por invitación escritos por líderes en sus respectivas áreas. En todo artículo publicado en Morfismos se indicará el tipo de trabajo del que se trate de acuerdo a esta clasificación.

• Los artículos del tipo (a) podrán estar escritos en español. Los demás trabajos deberán estar redactados en inglés, salvo aquellos que el Comité Editorial decida publicar en otro idioma.

• Cada artículo propuesto para publicación en Morfismos será enviado a especialistas para su arbitraje.

• En el caso de artículos derivados de tesis de grado se debe indicar el nombre del supervisor de tesis, su adscripción, la fecha de obtención del grado y la institución que lo otorga.

• Los autores interesados pueden obtener el formato LaTeX utilizado por Morfismos en el enlace "Revista Morfismos" de la dirección http://www.math.cinvestav.mx. La utilización de dicho formato ayudará en la pronta publicación de los artículos aceptados.

• Si el artículo contiene ilustraciones o figuras, éstas deberán ser presentadas de forma que se ajusten a la calidad de reproducción de Morfismos.

• Los autores recibirán el archivo pdf de su artículo publicado.
• Los artículos propuestos para publicación en Morfismos deben ser dirigidos a la dirección morfismos@math.cinvestav.mx.
Lineamientos Editoriales

Morfismos, revista semestral del Departamento de Matemáticas del Cinvestav, tiene entre sus principales objetivos el ofrecer a los estudiantes más adelantados un foro para publicar sus primeros trabajos matemáticos, a fin de que desarrollen habilidades adecuadas para la comunicación y escritura de resultados matemáticos. La publicación de trabajos no está restringida a estudiantes del Cinvestav; deseamos fomentar la participación de estudiantes en México y en el extranjero, así como de investigadores mediante artículos por contribución y por invitación. Los reportes de investigación matemática o resúmenes de tesis de licenciatura, maestría o doctorado de alta calidad pueden ser publicados en Morfismos.

Los artículos a publicarse serán originales, ya sea en los resultados o en los métodos. Para juzgar esto, el Consejo Editorial designará revisores de reconocido prestigio en el orbe internacional. La aceptación de los artículos propuestos será decidida por el Consejo Editorial con base en los reportes recibidos. Los autores que así lo deseen podrán optar por ceder a Morfismos los derechos de publicación y distribución de sus trabajos. En tal caso, dichos artículos no podrán ser publicados en ninguna otra revista ni medio impreso o electrónico. Morfismos solicitará que tales artículos sean revisados en bases de datos internacionales como lo son el Mathematical Reviews, de la American Mathematical Society, y el Zentralblatt MATH, de la European Mathematical Society.
Editorial Guidelines

Morfismos is the journal of the Mathematics Department of Cinvestav. One of its main objectives is to give advanced students a forum to publish their early mathematical writings and to build skills in communicating mathematics. Publication of papers is not restricted to students of Cinvestav; we want to encourage students in Mexico and abroad to submit papers. Mathematics research reports or high-quality summaries of bachelor's, master's, and Ph.D. theses will be considered for publication, as well as contributed and invited papers by researchers. All submitted papers should be original, either in the results or in the methods.

The Editors will assign as referees well-established mathematicians, and the acceptance/rejection decision will be taken by the Editorial Board on the basis of the referee reports. Authors of Morfismos will be able to choose to transfer the copyright of their works to Morfismos. In that case, the corresponding papers cannot be considered or sent for publication in any other printed or electronic media. Only those papers for which Morfismos is granted copyright will be submitted for review to international databases such as the American Mathematical Society's Mathematical Reviews and the European Mathematical Society's Zentralblatt MATH.
Contents - Contenido

The Mathematical Life of Nikolai Vasilevski
Sergei Grudsky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Sistemas de funciones iteradas por partes
Sergio Mabel Juárez Vázquez y Flor de María Correa Romero . . . . . . . . . . . . 9

Uniqueness of the Solution of the Yule–Walker Equations: A Vector Space Approach
Ana Paula Isais-Torres and Rolando Cavazos-Cadena . . . . . . . . . . . . . . . . . . . 29
Morfismos, Vol. 16, No. 1, 2012, pp. 1–8
The Mathematical Life of Nikolai Vasilevski

Sergei Grudsky

2010 Mathematics Subject Classification: 30C40, 46E22, 47A25, 47B10, 47B35, 47C15, 47L15, 81S10.
Keywords and phrases: C*-algebras, multidimensional singular integral operators, Toeplitz-Bergman operators, Toeplitz-Fock operators, Berezin quantization procedure.

Nikolai Vasilevski's creative attitude — not only to mathematics, but to life in general — had already received a great stimulus in Odessa High School 116. This highly selective high school accepted talented children from all over the city, being famous for its selection of quality teachers. A creative, nonstandard, yet highly personal approach to teaching was combined with a demanding attitude towards students. The school was also famous for its system of self-government by the students, quite unusual by traditional Soviet standards. Many students who graduated from this school later became well-known scientists and productive researchers.

Upon graduation in 1966, Nikolai became a student at the Department of Mathematics and Mechanics of Odessa State University. In his third year of studies he began his serious mathematical work under the supervision of the well-known Soviet mathematician Georgiy Semenovich Litvinchuk. Litvinchuk was a gifted teacher and scientific adviser, with a talent for fascinating his students with problems of long-standing interest that were still up-to-date. The weekly Odessa seminar on boundary value problems, chaired by Prof. Litvinchuk for more than 25 years, deeply influenced Nikolai Vasilevski as well as other students.

Thus Nikolai started to work on the problem of developing a Fredholm theory for a class of integral operators with nonintegrable integral kernels. In essence, the integral kernel was the Cauchy kernel multiplied by a logarithmic factor. Integral operators of this type lie in between singular integral operators and those whose kernels have weak (integrable) singularities. A famous Soviet mathematician, F. D. Gakhov, posed this problem in the
early 1950s, and it remained open for more than 20 years. Nikolai managed to provide a complete solution in a much more general setting than the original. While working on this problem, Nikolai revealed a key trait of his mathematical talent: his ability to penetrate deeply into the core of the problem, and to see rather unexpected connections between different theories. For instance, in order to solve Gakhov's problem, Nikolai utilized the theory of singular integral operators with coefficients having discontinuities of first kind, and the theory of operators whose integral kernels have fixed singularities — both theories having just appeared at that time.

The success of the young mathematician was well recognized by a broad circle of experts working in the area of boundary value problems and operator theory. Nikolai was awarded the prestigious M. Ostrovskii Prize in 1971, given to young Ukrainian scientists with the best research work. Due to his solution of the famous problem, Nikolai quickly entered the mathematical community, and became known to many prominent mathematicians of that time. In particular, he was influenced by regular interaction with outstanding mathematicians such as M. G. Krein and S. G. Mikhlin.

Nikolai graduated from Odessa State University in 1971, obtaining his Master's degree. After two years he defended his Ph.D. thesis, and in the same year he became an Assistant Professor at the Department of Mathematics and Mechanics of Odessa State University, where he was later promoted first to the rank of Associate Professor and then to Full Professor.

Having received the degree, Nikolai continued his active mathematical work. He quickly displayed yet another facet of his talent in approaching mathematical problems: his vision and ability to use general algebraic structures in operator theory, which, on the one hand, simplify the problem, and on the other, can be applied to many different problems. We will briefly describe two examples of this.

The first example is the method of orthogonal projections. In 1979, studying the algebra of operators generated by the Bergman projection together with the operators of multiplication by piecewise continuous functions, N. Vasilevski gave a description of the C*-algebra generated by two self-adjoint elements s and n satisfying the properties s² + n² = e and sn + ns = 0. A simple substitution p = (e + s − n)/2 and q = (e − s − n)/2 shows that this algebra is also generated by two self-adjoint idempotents (orthogonal projections) p and q (and the identity element e). During the last quarter of the past century, the latter algebra was rediscovered by many authors around the world.

Among all algebras generated by orthogonal projections, the algebra generated by two projections is the only tame algebra (excluding the trivial case of the algebra with identity generated by one orthogonal projection). All algebras generated by three or more orthogonal
projections are known to be wild, even when the projections satisfy some additional constraints. Many model algebras arising in operator theory are generated by orthogonal projections, and thus any information about their structure essentially broadens the set of operator algebras admitting a reasonable description. In particular, two and more orthogonal projections naturally appear in the study of various algebras generated by the Bergman projection and by piecewise continuous functions having two or more distinct limiting values at a point. Although these projections, say, P, Q1, . . . , Qn, satisfy the extra condition Q1 + · · · + Qn = I, they still generate, in general, a wild C*-algebra. However, it was shown that the structure of this algebra is determined by the joint properties of certain n positive injective contractions Ck, k = 1, . . . , n, satisfying the identity C1 + · · · + Cn = I; the structure is therefore determined by the structure of the C*-algebra generated by the contractions.

The principal difference between the case of two projections and the general case of a finite set of projections is now completely clear: for n = 2 (with projections P and Q, where Q + (I − Q) = I) we have only one contraction, and the spectral theorem leads directly to the desired description of the algebra. For n > 2 we have to deal with the C*-algebra generated by a finite set of noncommuting positive injective contractions, which is a wild problem. Fortunately, for many important cases related to concrete operator algebras these projections have yet another special property: the operators P Q1 P, . . . , P Qn P commute. This property makes the respective algebra tame, so it has a nice and simple description as the algebra of all n × n matrix valued functions that are continuous on the joint spectrum ∆ of the operators P Q1 P, . . . , P Qn P, with a certain degeneration on the boundary of ∆.

Another notable example of the algebraic structures used and developed by N. Vasilevski is his version of the Local Principle. The notions of locally equivalent operators and localization theory were introduced and developed by I. Simonenko in the mid-sixties. According to the tradition of that time, the theory was focused on the study of individual operators, and on the reduction of the Fredholm properties of an operator to local invertibility. Later, different versions of the local principle were elaborated by many authors, including G. R. Allan, R. Douglas, I. Ts. Gohberg, N. Ia. Krupnik, A. Kozak, and B. Silbermann. In spite of the fact that many of these versions are formulated in terms of Banach or C*-algebras, the main result, as before, reduces invertibility (or the Fredholm property) to local invertibility. On the other hand, at about the same time, several papers on the description of algebras and rings in terms of continuous sections were published by J. Dauns and K. H. Hofmann, M. J. Dupré, J. M. G. Fell, M. Takesaki and J. Tomiyama. These two directions have been developed independently, with no known links between the two series of papers. N. Vasilevski was the one who proposed a local principle which gives the global
description of the algebra under study in terms of continuous sections of a certain canonically defined C*-bundle. This approach is based on general constructions of J. Dauns and K. H. Hofmann, and on results of J. Varela. The main contribution consists of a deep re-examination of the traditional approach to local principles, unifying the ideas coming from both directions and resulting in a canonical procedure that provides the global description of the algebra under consideration in terms of continuous sections of a C*-bundle constructed by means of local algebras.

In the eighties and even later, the main direction of the work of Nikolai Vasilevski was the study of multidimensional singular integral operators with discontinuous coefficients. The main philosophy here is first to study algebras containing these operators, thus providing a solid foundation for the study of various properties (in particular, the Fredholm property) of concrete operators. The main tool has been the version of the local principle described above. This principle was not merely used to reduce the Fredholm property to local invertibility but also for a global description of the algebra as a whole, based on the description of the local algebras. Using this methodology, Nikolai Vasilevski obtained deep results in the theory of operators with Bergman's kernel and piecewise continuous coefficients, the theory of multidimensional Toeplitz operators with pseudodifferential presymbols, the theory of multidimensional Bitsadze operators, the theory of multidimensional operators with shift, etc. Based on these results, N. Vasilevski defended his Doctor of Sciences dissertation, entitled "Multidimensional singular integral operators with discontinuous classical symbols", in 1988.

Besides being a very active mathematician, N. Vasilevski is an excellent lecturer. His talks are always clear, sparkling, and full of humor, so natural for someone who grew up in Odessa, a city with a longstanding tradition of humor and fun. He was the first at Odessa State University to design and teach a course in general topology. Students enjoyed his lectures in calculus, real analysis, complex analysis, and functional analysis. He was one of the most popular professors at the Department of Mathematics and Mechanics of Odessa State University. Nikolai is a master of presentations, and his colleagues always enjoy his talks at conferences and seminars.

In 1992 Nikolai Vasilevski moved to Mexico. He started his career there as an Investigator (Full Professor) at the Mathematics Department of the Cinvestav (Centro de Investigación y de Estudios Avanzados). His appointment significantly strengthened the Department, which is one of the leading mathematical centers in Mexico. His relocation also visibly revitalized mathematical activity in the country in the field of operator theory. While actively pursuing his own research agenda, Nikolai also served as the organizer of several important conferences. For instance, we mention the
(regular since 1998) annual workshop "Análisis: Norte-Sur," and the well-known international conference IWOTA-2009. He facilitated the relocation to Mexico of a number of other experts in operator theory, including Yu. Karlovich and S. Grudsky. During his career in Mexico, Vasilevski produced a sizable group of students and younger colleagues; six young mathematicians have received a Ph.D. degree under his supervision.

During his life in Mexico, Vasilevski's scientific interests concentrated mainly around the theory of Toeplitz operators on Bergman and Fock spaces. At the end of the 1990s, N. Vasilevski discovered a quite surprising phenomenon in the theory of Toeplitz operators on the Bergman space. Unexpectedly, there exists a rich family of commutative C*-algebras generated by Toeplitz operators with non-trivial defining symbols. In 1995 B. Korenblum and K. Zhu proved that the Toeplitz operators with radial defining symbols acting on the Bergman space over the unit disk can be diagonalized with respect to the standard monomial basis in the Bergman space. The C*-algebra generated by such Toeplitz operators is therefore obviously commutative. Four years later Vasilevski also proved the commutativity of the C*-algebra generated by the Toeplitz operators acting on the Bergman space over the upper half-plane with defining symbols depending only on Im z. Furthermore, he discovered the existence of a rich family of commutative C*-algebras of Toeplitz operators. Moreover, it turned out that the smoothness properties of the symbols do not play any role in commutativity: the symbols can be merely measurable. Surprisingly, everything is governed by the geometry of the underlying manifold, the unit disk equipped with the hyperbolic metric.

The precise description of this phenomenon is as follows. Each pencil of hyperbolic geodesics determines the set of symbols which are constant on the corresponding cycles, the orthogonal trajectories to the geodesics forming the pencil. The C*-algebra generated by the Toeplitz operators with such defining symbols is commutative. An important feature of such algebras is that they remain commutative for the Toeplitz operators acting on each of the commonly considered weighted Bergman spaces. Moreover, assuming some natural conditions on the "richness" of the classes of symbols, the following complete characterization has been obtained: a C*-algebra generated by the Toeplitz operators is commutative on each weighted Bergman space if and only if the corresponding defining symbols are constant on the cycles of some pencil of hyperbolic geodesics. It is also worth mentioning that the proof of this result uses the Berezin quantization procedure in an essential way.

Apart from its own beauty, this result reveals an extremely deep influence of the geometry of the underlying manifold on the properties of the Toeplitz operators over the manifold. In each of the above-mentioned cases, when the
algebra is commutative, a certain unitary operator has been constructed. It reduces the corresponding Toeplitz operators to certain multiplication operators, which also allows one to describe their representations of spectral type. This gives a powerful research tool for the subject, in particular yielding direct access to the majority of the important properties, such as boundedness, compactness, spectral properties, and invariant subspaces of the Toeplitz operators under study. This new approach has enabled the solution of a number of important problems in the theory of Toeplitz operators and related areas. The results of the research in this direction became a part of the monograph "Commutative Algebras of Toeplitz Operators on the Bergman Space", published by N. Vasilevski with Birkhäuser Verlag in 2008.

The extension of the above result from the unit disk to the unit ball was recently done by Nikolai together with his Mexican colleague Raul Quiroga. Geometry again played an essential role in this study. The commutativity properties of Toeplitz operators here are governed by the so-called Lagrangian pairs, pairs of orthogonal Lagrangian foliations of the unit ball with certain distinguished geometrical properties. The leaves of one of these foliations always turn out to be the orbits of a maximal abelian subgroup of biholomorphisms of the unit ball. The result says that, given any Lagrangian pair, the C*-algebra generated by Toeplitz operators whose generating symbols are constant on the leaves being orbits is commutative on each commonly considered weighted Bergman space on the unit ball. The program of studying commutative algebras generated by Toeplitz operators, as well as the development of various related problems, initiated by N. Vasilevski, is now being carried out by growing groups of mathematicians in different research centers.

During his twenty years at the Cinvestav, Nikolai Vasilevski has consistently applied the best traditions of the Russian mathematical school in his training of young talented Mexican researchers. The constantly growing group of his coauthors, colleagues, and students is an established part of the "Mexican school of Toeplitz operators" — an expression heard more and more at international conferences.

Selected papers by Nikolai Vasilevski

1. Vasilevski N. L., On a class of singular integral operators with kernels of polar-logarithmic type, Izvestija Akad. Nauk SSSR, ser. matem. 40:1 (1976), 131–151 (In Russian). English translation: Math. USSR Izvestija 10:1 (1976), 127–143.

2. Vasilevski N. L., Two-dimensional Mikhlin-Calderon-Zygmund operators and bisingular operators, Sibirski Matematicheski Zurnal 27:2
(1986), 23–31 (In Russian). English translation: Siberian Math. J. 27:2 (1986), 161–168.

3. Vasilevski N. L., Algebras generated by multidimensional singular integral operators and by coefficients admitting discontinuities of homogeneous type, Matematicheski Sbornik 129:1 (1986), 3–19 (In Russian). English translation: Math. USSR Sbornik 57:1 (1987), 1–19.

4. Vasilevski N. L., On an algebra connected with Toeplitz operators on the tube domains, Izvestija Akad. Nauk SSSR, ser. matem. 51:1 (1987), 79–95 (In Russian). English translation: Math. USSR Izvestija 30:1 (1988), 71–87.

5. Vasilevski N. L.; Trujillo R., Convolution operators on standard CR manifolds. I. Structural Properties, Integral Equations and Operator Theory 19:1 (1994), 65–107.

6. Vasilevski N. L., Convolution operators on standard CR manifolds. II. Algebras of convolution operators on the Heisenberg group, Integral Equations and Operator Theory 19:3 (1994), 327–348.

7. Vasilevski N. L., C*-algebras generated by orthogonal projections and their applications, Integral Equations and Operator Theory 31 (1998), 113–132.

8. Vasilevski N. L., On the structure of Bergman and poly-Bergman spaces, Integral Equations and Operator Theory 33 (1999), 471–488.

9. Vasilevski N. L., Poly-Fock spaces, Operator Theory: Advances and Applications 117 (2000), 371–386.

10. Grudsky S.; Vasilevski N. L., Bergman-Toeplitz operators: radial component influence, Integral Equations and Operator Theory 40:1 (2001), 16–33.

11. Vasilevski N. L., Toeplitz operators on the Bergman spaces: Inside-the-Domain effects, Contemporary Mathematics 289 (2001), 79–146.

12. Vasilevski N. L., Bergman space structure, commutative algebras of Toeplitz operators and hyperbolic geometry, Integral Equations and Operator Theory 46 (2003), 235–251.

13. Grudsky S.; Quiroga-Barranco R.; Vasilevski N. L., Commutative C*-algebras of Toeplitz operators and quantization on the unit disk, J. Functional Analysis 234 (2006), 1–44.
14. Vasilevski N. L., On the Toeplitz operators with piecewise continuous symbols on the Bergman space, in: "Modern Operator Theory and Applications", Operator Theory: Advances and Applications 170 (2007), 229–248.

15. Vasilevski N. L., Poly-Bergman spaces and two-dimensional singular integral operators, Operator Theory: Advances and Applications 171 (2007), 349–359.

16. Quiroga-Barranco R.; Vasilevski N. L., Commutative C*-algebras of Toeplitz operators on the unit ball, I. Bargmann-type transforms and spectral representations of Toeplitz operators, Integral Equations and Operator Theory 59:3 (2007), 379–419.

17. Quiroga-Barranco R.; Vasilevski N. L., Commutative C*-algebras of Toeplitz operators on the unit ball, II. Geometry of the level sets of symbols, Integral Equations and Operator Theory 60:1 (2008), 89–132.

18. Grudsky S.; Vasilevski N. L., On the structure of the C*-algebra generated by Toeplitz operators with piece-wise continuous symbols, Complex Analysis and Operator Theory 2:4 (2008), 525–548.

19. Grudsky S.; Vasilevski N. L., Anatomy of the C*-algebra generated by Toeplitz operators with piece-wise continuous symbols, Operator Theory: Advances and Applications 190 (2009), 243–265.

20. Vasilevski N. L., Quasi-radial quasi-homogeneous symbols and commutative Banach algebras of Toeplitz operators, Integral Equations and Operator Theory 66:1 (2010), 141–152.

Sergei Grudsky
Departamento de Matemáticas, CINVESTAV del I.P.N.,
Apartado Postal 14-740, México D.F., C.P. 07360, México.
grudsky@math.cinvestav.mx
Morfismos, Vol. 16, No. 1, 2012, pp. 9–27

Sistemas de funciones iteradas por partes∗

Sergio Mabel Juárez Vázquez¹
Flor de María Correa Romero²

Resumen

En la actualidad los medios digitales se han convertido en herramientas indispensables ya que la mayoría de la información se maneja mediante ellos. Las imágenes son una forma imprescindible de información pero lamentablemente ocupan bastante espacio en la memoria de una computadora y a su vez los dispositivos de almacenamiento digital suelen ser caros. Reducir el espacio que las imágenes ocupan en la memoria de una computadora baja los costos y permite que la transmisión de éstas sea más eficiente.

El objetivo de este trabajo es presentar una aplicación del concepto de conjunto fractal a la compresión de imágenes digitales. La técnica que se usará para la compresión está basada en la teoría matemática denominada sistemas de funciones iteradas por partes.

2010 Mathematics Subject Classification: 68U10, 65D18.
Keywords and phrases: sistemas de funciones iteradas, sistemas de funciones iteradas por partes, conjunto fractal, compresión fractal de imágenes.
1 Introducción
La información que actualmente se maneja es en su mayoría a través de computadoras; la importancia que ésta tiene para que los sistemas social y económico funcionen trajo consigo que se estén constantemente desarrollando herramientas para tratarla. Problemas como el asegurar la información y almacenarla de manera digital han tomado relevancia.

∗Este trabajo es parte de la tesis de licenciatura desarrollada por el primer autor bajo la dirección de la segunda autora. La tesis fue presentada en el Departamento de Matemáticas de la Escuela Superior de Física y Matemáticas del Instituto Politécnico Nacional en noviembre de 2010.
¹Becario Conacyt.
²Profesora Becaria COFAA, Departamento de Matemáticas ESFM-IPN.
En el caso del almacenamiento digital de información se han creado métodos para reducir su volumen sin que se afecte el contenido de ésta. Las imágenes digitales son una gran fuente de información. ¿Quién no ha escuchado la frase "una imagen dice más que mil palabras"? Como las imágenes suelen ocupar mucho espacio en la memoria de una computadora, se han desarrollado técnicas de compresión para ayudar a resolver tal problema.

La compresión de imágenes puede ser con pérdida o sin pérdida de información. Los algoritmos que realizan la compresión con pérdida reducen la calidad de la imagen reconstruida después del proceso de compresión, así pues esta imagen puede discrepar en relación a la imagen original en detalles como contornos, formas y colores. Con los algoritmos sin pérdida, las imágenes procesadas quedan intactas, pero pagan esta ventaja con una razón de compresión menor en comparación a los algoritmos con pérdida, por lo tanto se ocupará más memoria para almacenarlas. Para algunos casos se requiere comprimir imágenes en donde la pérdida sea mínima o nula. Ejemplos de esto pueden ser imágenes médicas, imágenes de huellas dactilares, imágenes donde haya texto; en general, imágenes cuya información contenida se necesita de forma muy precisa. En otros casos se puede aprovechar de algunas limitaciones que el ojo humano tiene, de la habilidad que hemos desarrollado para poder intuir formas, del simbolismo que asociamos a ciertas imágenes o simplemente que algunos detalles de la imagen no tienen importancia para nosotros. Si un ser querido nos muestra una foto en donde él aparece y en el fondo hay algunas nubes, uno pondrá atención primordialmente en la persona sin darle tanta importancia a la forma que cada nube tiene; además, las nubes tienen formas aleatorias, así que si la imagen es procesada mediante una técnica de compresión y en el proceso se pierde información sobre la forma exacta que la nube tiene en la imagen, antes y después del proceso, no lo notaremos o simplemente no importará, pues nos interesará sólo el hecho de que es una nube. La compresión fractal es una técnica de las denominadas con pérdida: la imagen una vez decodificada no es igual que la imagen original.

Este trabajo ilustra la aplicación que la geometría fractal tiene a la compresión de imágenes digitales. Cada vez que se ocupe la palabra imagen se hará referencia a una imagen digital a menos que se mencione lo contrario. A lo largo de las secciones siguientes se dará un método para codificar y comprimir imágenes en escala de grises con ayuda del concepto de conjunto fractal como el atractor de un sistema de funciones iteradas por partes (SFIP). Los autores observamos que en toda la literatura que revisamos sobre compresión fractal de imágenes, el concepto de SFIP carece de una definición formal y no se comprueba que se puede inducir una contracción a través del sistema. En este artículo se propone una definición para los SFIP
y se demuestra que dado un SFIP lo podemos asociar con una contracción. Cabe mencionar que la generalización de codificación de imágenes a colores no es difícil a partir de la teoría de codificación de imágenes en escala de grises.
2 Teoría básica
Los resultados que se mencionan en esta sección son conocidos, por lo que sólo se enunciarán. El lector puede consultar las demostraciones en [1] y [13]. Sea (X, d) un espacio métrico; denotaremos por H(X) al conjunto de todos los subconjuntos compactos y no vacíos de (X, d).

Definición 2.0.1. Sea (X, d) un espacio métrico, sean x ∈ X y A, B ∈ H(X). Se define la distancia de x al conjunto A como

d(x, A) := min {d(x, y) | y ∈ A},

y la distancia de A a B como

d(A, B) := max {d(x, B) | x ∈ A}.

Proposición 2.0.2. Sea (X, d) un espacio métrico, y sea h : H(X) × H(X) −→ R la función definida por

∀ A, B ∈ H(X),  h(A, B) := d(A, B) ∨ d(B, A);

entonces h es una métrica sobre H(X).

Teorema 2.0.3. Sea (X, d) un espacio métrico completo; entonces el espacio métrico (H(X), h) es completo.

Definición 2.0.4. Sean (X1, d1) y (X2, d2) dos espacios métricos. Una función f : (X1, d1) −→ (X2, d2) es una función de Lipschitz si existe un número real positivo α tal que

∀ x, y ∈ X1,  d2(f(x), f(y)) ≤ α d1(x, y);

al número α se le llama un factor de Lipschitz de la función f. Si se cumple que 0 ≤ α < 1, entonces f es llamada una contracción y α un factor de contracción para f.

Teorema 2.0.5 (Teorema del punto fijo). Si (X, d) es un espacio métrico completo y f : X −→ X es una contracción, con α un factor de contracción de f, entonces existe un único xf ∈ X tal que f(xf) = xf; a xf se le llama el punto fijo de la contracción.
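Como ilustración de la Definición 2.0.1 y de la Proposición 2.0.2, el siguiente esbozo en Python (con numpy) calcula d(x, A), d(A, B) y h(A, B) para conjuntos finitos de puntos del plano. Es sólo un ejemplo numérico bajo el supuesto de que los compactos se aproximan por conjuntos finitos de puntos; los nombres de las funciones son ilustrativos.

import numpy as np

def d_punto_conjunto(x, A):
    # d(x, A) := min { d(x, y) | y en A }, con d la distancia euclidiana
    return np.min(np.linalg.norm(A - x, axis=1))

def d_conjunto_conjunto(A, B):
    # d(A, B) := max { d(x, B) | x en A }
    return max(d_punto_conjunto(x, B) for x in A)

def h_hausdorff(A, B):
    # h(A, B) := d(A, B) v d(B, A), es decir, el maximo de ambas cantidades
    return max(d_conjunto_conjunto(A, B), d_conjunto_conjunto(B, A))

# Ejemplo con dos conjuntos finitos de puntos en I^2
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])
print(h_hausdorff(A, B))  # imprime 1.0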
2.1 Sistemas de funciones iteradas
Proposición 2.1.1. Sea (X, d) un espacio métrico completo y sean, para i ∈ {1, . . . , n}, fi : (X, d) −→ (X, d) contracciones, con αi un factor de contracción para fi. Sea F : (H(X), h) −→ (H(X), h) la función definida por

∀ A ∈ H(X),  F(A) = ∪_{i=1}^{n} fi(A);

entonces F es una contracción y α := máx {αi | i ∈ {1, . . . , n}} es un factor de contracción para F.

Definición 2.1.2. Un sistema de funciones iteradas o SFI consiste de un espacio métrico completo (X, d) y una familia finita de contracciones

{fi : (X, d) −→ (X, d) | i ∈ {1, . . . , k}};

al SFI se le denota por {(X, d); f1, f2, . . . , fk}, y se llama un factor de contracción del SFI al número α := máx {αi | i ∈ {1, . . . , k}}, donde el número αi es un factor de contracción para fi, i ∈ {1, . . . , k}.

De acuerdo con la Proposición 2.1.1, dado un sistema de funciones iteradas {(X, d); f1, f2, . . . , fk} se puede definir una contracción F en el espacio métrico completo (H(X), h), y por el Teorema del punto fijo existe un único AF ∈ H(X) que es el punto fijo de la contracción; el conjunto AF es llamado el conjunto fractal asociado al SFI y F la contracción inducida por el SFI.

Ejemplo. Consideremos el siguiente SFI

{(I², de); f1, f2, f3},

donde de es la distancia euclidiana, I = [0, 1],

f1(x, y) := (x/2 + 1/4, y/2 + 1/2),
f2(x, y) := (x/2, y/2),
f3(x, y) := (x/2 + 1/2, y/2).

Si F es la contracción inducida por el SFI y tomamos el compacto C de R² como la imagen siguiente.
Figura 1: Imagen que representa al compacto C de R².
Las figuras que se muestran a continuación, de izquierda a derecha y de arriba hacia abajo, son respectivamente los resultados obtenidos para F(C), F^{◦(2)}(C) := F(F(C)), . . . , F^{◦(6)}(C).

Figura 2: Como se puede observar, F^{◦(6)}(C) consta de 729 copias reducidas del compacto C.
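Como ilustración numérica del ejemplo anterior, el siguiente esbozo en Python itera la contracción inducida F sobre una nube finita de puntos que aproxima al compacto C; la nube aleatoria y el número de puntos son supuestos del ejemplo, no parte del SFI.

import numpy as np

# Contracciones del SFI del ejemplo sobre I^2
f1 = lambda P: 0.5 * P + np.array([0.25, 0.5])
f2 = lambda P: 0.5 * P
f3 = lambda P: 0.5 * P + np.array([0.5, 0.0])

def F(puntos):
    # Contraccion inducida: F(A) = f1(A) U f2(A) U f3(A)
    return np.vstack([f1(puntos), f2(puntos), f3(puntos)])

# Aproximamos el compacto C por una nube de puntos en I^2
rng = np.random.default_rng(0)
C = rng.random((500, 2))

A = C
for _ in range(6):   # A aproxima a F^(6)(C): 3^6 = 729 copias reducidas de C
    A = F(A)

Graficar los puntos de A (por ejemplo con matplotlib) reproduce el patrón de la Figura 2.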
Notemos que todo elemento de (H(X), h) se puede ver como el atractor de un SFI, pues dado C ∈ (H(X), h), la función φ : (H(X), h) −→ (H(X), h) definida por φ(A) := C, para todo A ∈ H(X), es una contracción y {(H(X), h); φ} es un SFI que tiene por atractor a C.
Este tipo de funciones son llamadas funciones de condensación³ y a un SFI que contenga a una de estas funciones se le llama un sistema de funciones iteradas con condensación.

Teorema 2.1.3 (Teorema del collage para fractales). Sea (X, d) un espacio métrico completo y sean B ∈ H(X) y ε ∈ R⁺ dados. Si {(X, d); f1, . . . , fk} es un SFI con un factor de contracción α y AF el atractor del SFI, tales que

h(B, ∪_{i=1}^{k} fi(B)) ≤ ε,

entonces

h(AF, B) ≤ ε / (1 − α).

Es decir,

h(AF, B) ≤ (1 / (1 − α)) h(B, ∪_{i=1}^{k} fi(B))

para todo B ∈ H(X).
El Teorema del collage dice que para tener un SFI cuyo atractor sea semejante a un conjunto B ∈ H(X), tenemos que fabricar un conjunto de contracciones {f1, . . . , fk} tal que la unión (o collage) de los conjuntos f1(B), . . . , fk(B) esté cercana al conjunto B. Por lo tanto, si h(B, ∪_{i=1}^{k} fi(B)) ≤ ε para un ε lo suficientemente pequeño, podemos sustituir a B por el atractor del SFI. El teorema también nos ayuda a tener una medida de qué tan cerca estará el atractor de un SFI de B sin tener que calcular el atractor: basta sólo con estimar la distancia entre B y ∪_{i=1}^{k} fi(B). Además, la aproximación del atractor al conjunto B será mejor cuanto más pequeño sea el factor de contracción del SFI y no depende del número de contracciones que lo forman.

Si tomamos a (R², de), con de la métrica euclidiana, y trabajamos sobre el conjunto de todos los subconjuntos compactos no vacíos de R² para formar el espacio métrico (H(R²), h), en donde una fotografía o en general cualquier imagen es considerada como un compacto de R², entonces podríamos aproximar una fotografía por un conjunto fractal que sea el atractor de un SFI adecuado; y si además este SFI consta de pocas contracciones, podemos almacenar las contracciones en lugar de la imagen original y así habremos reducido el espacio ocupado por la imagen en la memoria de un sistema digital.

³Una función de condensación es una contracción y cero es un factor de contracción para ella.
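Como ilustración, la cota del Teorema del collage puede estimarse numéricamente sin calcular el atractor, aproximando h(B, ∪ fi(B)) sobre discretizaciones finitas de B. El siguiente esbozo reutiliza h_hausdorff y F de los ejemplos anteriores y usa α = 1/2, que es un factor de contracción del SFI del ejemplo; se trata de una estimación sobre nubes de puntos, no de un cálculo exacto.

# Estimacion de la cota del Teorema del collage para el SFI del ejemplo,
# reutilizando h_hausdorff y F definidas arriba (B se aproxima por puntos)
alfa = 0.5                     # factor de contraccion del SFI del ejemplo
B = rng.random((500, 2))       # discretizacion de un compacto B contenido en I^2
error_collage = h_hausdorff(B, F(B))
cota = error_collage / (1 - alfa)   # h(A_F, B) <= h(B, U fi(B)) / (1 - alfa)
print(error_collage, cota)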
Esta fue la idea que abrió la investigación de la compresión fractal de imágenes. Existen algunos inconvenientes. Por desgracia, el Teorema del collage no proporciona un método para encontrar dicho SFI, y el SFI cuyo atractor aproxima al compacto en cuestión no necesariamente es único. Si tenemos, por ejemplo, que {(X, d); ω1, ω2, . . . , ωk} con k ≥ 2 es uno de los sistemas de funciones iteradas que aproximan al compacto, entonces podemos escoger dos de las contracciones del SFI, digamos ω1 y ω2, para definir una nueva contracción W1,2 : H(X) −→ H(X) como W1,2(A) := ω1(A) ∪ ω2(A) para todo A ∈ H(X). Así tenemos un nuevo SFI {(H(X), h); W1,2, W3, . . . , Wk}, donde

W1,2(A) = ω1(A) ∪ ω2(A)  y  Wi(A) = ωi(A)  ∀ i ∈ {3, . . . , k},

el cual tiene el mismo atractor que el SFI original y por tanto se acerca del mismo modo al compacto que queríamos aproximar, pero éste tiene k − 1 contracciones a diferencia del primero, que poseía k contracciones. De esta forma podemos definir diferentes SFI que tendrían el mismo atractor; sin embargo, hay un aspecto importante que nos hace preferir un SFI sobre otro, y éste es que las contracciones que forman al SFI estén definidas de una manera simple, lo cual nos permite intuir cómo será el atractor del SFI. Cabe señalar que el Teorema del collage tampoco proporciona una manera de elegir el mejor SFI; sin embargo, todo lo anterior no resta importancia a este resultado.
3 Sistemas de funciones iteradas por partes
El Teorema del collage asegura que, dada cualquier imagen, siempre existe un fractal que se le parece tanto como nosotros queramos. Pero supongamos que tenemos la fotografía de uno de nuestros seres queridos y que conocemos un SFI cuyo atractor aproxima a la fotografía; si iteramos la contracción inducida por el SFI evaluada en la imagen de un árbol, el atractor se parecerá a la fotografía que estamos intentando aproximar, pero como está compuesto de copias transformadas de un árbol, la foto a detalle exhibiría pequeños arbolitos distorsionados, en particular en los rasgos físicos de la persona, lo cual no es natural. Así, los sistemas de funciones iteradas tienen este inconveniente. Hubo algunos intentos para resolver este problema, pero no fue sino hasta 1989 cuando un estudiante de Michael F. Barnsley, Arnaud Jacquin, diseñó un nuevo método de codificación de imágenes basado en el
concepto fundamental de los SFI, pero haciendo a un lado el enfoque rígido de los SFI globales. El nuevo método obedecía a una idea que en principio parece muy simple: en vez de ver a una imagen como una serie de copias transformadas de algún compacto arbitrario, esta vez la imagen estaría formada por copias de pedazos de sí misma bajo transformaciones apropiadas. La mejilla de nuestra tía no se parece a un árbol, pero es muy probable que su mejilla derecha sí sea muy parecida a la izquierda; así pues, esta idea contrarresta en buena manera el problema que los SFI globales tienen. Este método se conoce como sistemas de funciones iteradas por partes (SFIP).
Figura 3: Existen varias partes de la imagen que se parecen entre sí, en particular las que están encerradas en las regiones R1 y R2.
La idea general se basa en tomar una partición de la imagen; los elementos de la partición pueden tener formas arbitrarias. En [7], [14], [2] y [9] están descritos diversos algoritmos para realizar una partición; algunos ocupan triángulos o polígonos de tamaño variable. Éstos pueden mejorar la calidad de compresión pues facilitan encontrar la auto-semejanza entre las secciones de la imagen, pero también es posible que aumenten los costos de cómputo. La manera más sencilla de hacer la segmentación para una implementación es utilizando cuadrados de tamaño uniforme, pues es más fácil delimitar los pixeles encerrados por estas regiones para realizar las comparaciones. Un estudio general del problema anterior se encuentra en [5].

Consideraremos a las imágenes como funciones que están definidas sobre I² y que tienen por contradominio a I, donde I := [0, 1]. A cada punto del cuadrado unitario le corresponde un valor que representa su nivel de gris; con esto podríamos establecer que si a un punto le corresponde el valor 0
en la imagen se representaría dicho valor como un punto negro, y el valor 1 se representaría en la imagen como un punto blanco.

Sea A ⊂ I². Sabemos que una función ϕ : A −→ I es un subconjunto de A × I con la propiedad de que para cada (x, y) ∈ A existe un único z ∈ I tal que ((x, y), z) ∈ ϕ ⊂ A × I, y generalmente a z se le denota como ϕ(x, y). Por otra parte, la gráfica de la función ϕ es un subconjunto de I³ que consiste de todas las tripletas (x, y, z) tales que (x, y) ∈ A y z = ϕ(x, y); a la gráfica de una función ϕ la denotaremos por ∗(ϕ). Como puede observarse, una función y su gráfica no son lo mismo, pero dada una función, su gráfica está implícitamente definida con ésta, y viceversa: si uno conoce la gráfica de una función puede de inmediato saber quién es la función. Como estaremos trabajando con las funciones antes mencionadas y sus gráficas, tomaremos la siguiente notación en adelante:

F := {ϕ ⊂ A × I | ϕ es una función y A ⊆ I²}

y definiremos a

G := {∗(ϕ) | ϕ ∈ F}.

A cada elemento de F se le puede asociar de manera única un elemento de G; recíprocamente, a cada elemento de G se le puede asociar de manera única un elemento de F; por tanto existe una biyección entre ambos conjuntos. La función Γ que definiremos a continuación es tal biyección:

Γ : F −→ G dada por ∀ ϕ ∈ F, Γ(ϕ) := ∗(ϕ).

La métrica del supremo la definiremos sobre nuestro conjunto F de manera usual como dsup : F × F −→ R⁺ dada por, ∀ ϕ, ψ ∈ F,

dsup(ϕ, ψ) := sup {|z1 − z2| | x, y ∈ I, (x, y, z1) ∈ ∗(ϕ) y (x, y, z2) ∈ ∗(ψ)}.

Proposición 3.1. El espacio métrico (F, dsup) es un espacio métrico completo.

Definición 3.2. Un sistema de funciones iteradas por partes o SFIP es una terna

({(F, dsup); g1, . . . , gn}, D, R)
formada por un SFI {(F, dsup); g1, . . . , gn}, una cubierta finita de I²,

D := {D1, . . . , Dk}, tal que ∀ i ∈ {1, . . . , k}, Di ≠ ∅,

y una partición de I²,

R := {R1, . . . , Rn},

tales que: dada ϕ ∈ F y dada i ∈ {1, . . . , n}, existen ψ ∈ F y j ∈ {1, . . . , k} tales que gi(ϕ|Dj) = ψ|Ri.

Definición 3.3. Se dice que una familia de funciones {gi | gi : G −→ G, i = 1, . . . , n} enlosan⁴ a I² si, para toda ϕ ∈ F, se tiene que

∪_{i=1}^{n} gi(∗(ϕ)) ∈ G.

⁴La palabra enlosar significa cubrir un suelo con losas unidas y ordenadas. Por ello preferimos la palabra enlosar en vez de cubrir, para enfatizar que pretendemos fabricar una cubierta enlosando la imagen con secciones (losas matemáticas) de sí misma.

Definición 3.4. Sea f : A ⊆ R³ −→ R³ una función y f1, f2, f3 : R −→ R sus funciones coordenadas, es decir, ∀ (x, y, z) ∈ R³, f(x, y, z) = (f1(x), f2(y), f3(z)); entonces diremos que f es una función contraíble respecto a su tercera componente si existe α ∈ [0, 1) tal que ∀ z1, z2 ∈ R,

d(f3(z1), f3(z2)) ≤ α d(z1, z2)

y f1(x), f2(y) son independientes de z1 y de z2, para todo x, y ∈ R. A α se le llama un factor de contracción respecto a la tercera componente.

Proposición 3.5. Sean {Di ⊆ I² | i ∈ {1, . . . , n}} una cubierta para I² y

{fi : G −→ G | i ∈ {1, . . . , n}}

una familia finita de funciones que enlosan I², tales que ∀ i ∈ {1, . . . , n}, ∀ ϕ ∈ F,

fi(∗(ϕ|Di)) = fi(x, y, ϕ(x, y)), con (x, y) ∈ Di,

es una función contractiva respecto a su tercera componente y αi es un factor de contracción. Entonces la función

F : F −→ F
dada por

∀ ϕ ∈ F,  F(ϕ) := Γ⁻¹( ∪_{i=1}^{n} fi(∗(ϕ|Di)) ),
es una contracción en el espacio métrico (F, dsup) y α := máx {α1, . . . , αn} es un factor de contracción para F.

Demostración. Recordemos que Γ es la biyección que existe entre F y G. Queremos probar que existe α ∈ [0, 1) tal que ∀ ϕ, ψ ∈ F se cumple que dsup(F(ϕ), F(ψ)) ≤ α dsup(ϕ, ψ). Sean ϕ, ψ ∈ (F, dsup) y sea α = máx {α1, . . . , αn}; estimemos la distancia entre F(ϕ) y F(ψ):

dsup(F(ϕ), F(ψ))
= sup {|z − w| | x, y ∈ I, (x, y, z) ∈ ∗(F(ϕ)) y (x, y, w) ∈ ∗(F(ψ))}
= sup {|z − w| | x, y ∈ I, (x, y, z) ∈ ∗(Γ⁻¹(∪_{i=1}^{n} fi(∗(ϕ|Di)))) y (x, y, w) ∈ ∗(Γ⁻¹(∪_{i=1}^{n} fi(∗(ψ|Di))))}
= sup {|z − w| | x, y ∈ I, (x, y, z) ∈ ∗(Γ⁻¹(fi(∗(ϕ|Di)))) y (x, y, w) ∈ ∗(Γ⁻¹(fi(∗(ψ|Di)))), i ∈ {1, . . . , n}}
= sup {|z − w| | (x, y) ∈ Di, (x, y, z) ∈ fi(∗(ϕ|Di)) y (x, y, w) ∈ fi(∗(ψ|Di)), i ∈ {1, . . . , n}}
= sup {|fi3(ϕ(x, y)) − fi3(ψ(x, y))| | (x, y) ∈ Di, ϕ(x, y) = z y ψ(x, y) = w, i ∈ {1, . . . , n}}.

Por la propiedad de contracción respecto a la tercera componente de cada fi, i ∈ {1, . . . , n}, tenemos

sup {|fi3(ϕ(x, y)) − fi3(ψ(x, y))| | (x, y) ∈ Di, ϕ(x, y) = z y ψ(x, y) = w, i ∈ {1, . . . , n}}
≤ sup {αi |ϕ(x, y) − ψ(x, y)| | (x, y) ∈ Di, ϕ(x, y) = z y ψ(x, y) = w, i ∈ {1, . . . , n}}
≤ sup {α |ϕ(x, y) − ψ(x, y)| | (x, y) ∈ Di, ϕ(x, y) = z y ψ(x, y) = w, i ∈ {1, . . . , n}}
= α sup {|ϕ(x, y) − ψ(x, y)| | (x, y) ∈ I², ϕ(x, y) = z y ψ(x, y) = w}
= α sup {|z − w| | x, y ∈ I, (x, y, z) ∈ ∗(ϕ) y (x, y, w) ∈ ∗(ψ)}
= α dsup(ϕ, ψ);

por lo tanto, dsup(F(ϕ), F(ψ)) ≤ α dsup(ϕ, ψ). Además α = máx {α1, . . . , αn} ∈ [0, 1), así que F es una contracción en (F, dsup), tal y como se quería demostrar.
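Para imágenes discretas, el análogo de la métrica dsup es simplemente la máxima diferencia absoluta entre niveles de gris; un esbozo mínimo, asumiendo que ambas imágenes son matrices del mismo tamaño con valores en [0, 1]:

import numpy as np

def d_sup(phi, psi):
    # Analogo discreto de d_sup: maxima diferencia absoluta
    # entre los niveles de gris de dos imagenes del mismo tamano
    return float(np.max(np.abs(phi - psi)))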
4 Implementación
El modelo matemático de una imagen como una función ϕ : I² −→ I nos permite que la teoría funcione; sin embargo, a la hora de tratar la información de una imagen con una computadora no necesariamente funcionará, pues hay que hacer la consideración de que en la computadora la función que modela a una imagen debe tener un dominio y un rango discretos; de lo contrario, una imagen representada por una función ϕ : I² −→ I, a pesar de tener un tamaño finito, describiría una imagen de resolución infinita.

A lo que nos referimos en el párrafo anterior es que en una computadora se representa una imagen como una colección discreta de elementos de pigmento o pixeles; cada pixel toma un valor discreto en una escala de grises, o bien tres en una escala con tres canales de color, y el número de bits usados para almacenar estos valores es lo que se llama la resolución de la escala. Si usamos 8 bits para almacenar un solo valor por pixel, la imagen podría almacenarse en una computadora como una matriz de tamaño n × n donde cada entrada de la matriz sería un número entero entre 0 y 255; este valor representará un nivel en una escala de grises.

La imagen siguiente es un ejemplo de lo anterior: ésta tiene un total de 256 × 256 elementos de pigmento, cada uno con un valor entre 0 y 255, y para cada pixel se reservan 8 bits que almacenan su valor en binario. En total, para poder almacenar la imagen en una computadora se ocuparía 1 byte por pixel, lo que serían 256 × 256 = 65 536 bytes, o bien 64 KB.
Figura 4: Esta imagen es de 256 por 256 pixeles; la escala de grises es un valor entero entre 0 y 255, donde 0 es el valor del negro y 255 el del blanco.

Ahora supongamos que dividimos esa misma imagen en una cubierta D y una partición R, ambas con elementos cuadrados de tamaño uniforme: los Di ∈ D de tamaño 16 × 16, mientras que los Ri ∈ R de tamaño 8 × 8.
Figure 5: Segmentation of the image; the two panels show the cover $D$ and the partition $R$.

In this way, for each of the $16 \times 16 = 256$ elements $R_i \in R$, which enclose 16 pixels, one tries to find an element $D_j \in D$, among the $8 \times 8 = 64$ available, and a transformation such that, evaluated on $D_j$, it minimizes the gray-scale difference with respect to the $R_i$ in question. Note that from the outset both the elements of $D$ and those of $R$ are squares, with the particularity that the former are twice the size of the latter, so the geometric transformation must necessarily be a scaling of an element of $D$ to half its size together with a translation to the position of the current element of $R$; but there are eight possible ways of mapping a
square onto another: the four rotations and four reflections (vertical, horizontal, and with respect to its two diagonals). For each $R_i$, with $i = 1, \ldots, 256$, we must take
$$\min_{k=1,\ldots,64}\ \min_{j=1,\ldots,8}\ \big\{ d\big(f_k^{\,j}(\varphi(D_k)),\ \varphi(R_i)\big) \big\}.$$
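This search can be sketched as follows (an illustrative Python/NumPy sketch, not code from the paper): the eight ways of mapping a square onto a square are generated explicitly, every 16 × 16 domain block is averaged down to 8 × 8 before comparison, and, as is usual in fractal coders, an affine gray-level map s·x + o with |s| < 1 is fitted by least squares — an addition of ours, so that the iteration sketched after the algorithms below actually contracts. The distance d is taken to be the mean squared error.

import numpy as np

def eight_isometries(b):
    # The eight ways of mapping a square onto a square:
    # the four rotations and, for each one, its left-right reflection.
    out = []
    for k in range(4):
        r = np.rot90(b, k)
        out.append(r)
        out.append(np.fliplr(r))
    return out

def shrink(b):
    # Average 2x2 pixel groups, so a 16x16 domain block becomes 8x8.
    n = b.shape[0] // 2
    return b.reshape(n, 2, n, 2).mean(axis=(1, 3))

def fit_and_distance(a, b):
    # Least-squares gray-level map s*a + o approximating b, and the resulting MSE.
    a0, b0 = a - a.mean(), b - b.mean()
    s = (a0 * b0).sum() / ((a0 * a0).sum() + 1e-12)
    s = float(np.clip(s, -0.9, 0.9))      # keep the gray-level map contractive
    o = b.mean() - s * a.mean()
    return s, o, np.mean((s * a + o - b) ** 2)

def best_match(R, domain_blocks):
    # Search every domain block and every isometry for the best approximation of R.
    best = None
    for k, D in enumerate(domain_blocks):
        small = shrink(D.astype(float))
        for j, T in enumerate(eight_isometries(small)):
            s, o, d = fit_and_distance(T, R.astype(float))
            if best is None or d < best[4]:
                best = (k, j, s, o, d)
    return best    # (domain index, isometry index, s, o, distance)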
This is costly in computational terms, since for this case it requires a total of $8 \times 64 = 512$ comparisons for each element of $R$, so in summary there are $512 \times 256 = 131\,072$ comparisons. The encoding process is not very complicated and can be implemented with the following algorithm (a Python sketch of both the encoding and decoding loops is given after the two lists).

1. Read the image to be encoded (translate it into the matrix discussed above).
2. Segment the image into a cover $D$ and a partition $R$.
3. For a given $R_0 \in R$, compare it with each of the eight possible ways in which each of the elements of $D$ can be mapped onto $R_0$. Obtain and store the best $D_0 \in D$ and the best transformation that approximate $R_0$.
4. Repeat the previous step for each of the elements of our partition $R$.

The decoding process is not as slow as the encoding one. As we know from the theory developed above, we have to iterate the function evaluated on an arbitrary image and obtain the fixed point, which is the limit of this sequence of iterations. We will see in the examples that the sequence approaches the fixed point quite fast. The decoding algorithm is the following:

1. Read the coefficients of the functions as well as the $D_i$.
2. Create an arbitrary image of the same size as the original image.
3. Take the partition $R$ as in the encoding step, and apply to each $R_i$ its corresponding transformation.
4. Make the image resulting from the previous step the new input of the algorithm, up to the desired iteration.
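A minimal sketch of both routines, using the helpers from the previous block (again illustrative: the block sizes, the non-overlapping domain blocks, and the stored code format are assumptions of ours, not prescriptions from the text):

def encode(img, bs=8):
    # Steps 2-4 of the encoding algorithm: one code entry per range block.
    H, W = img.shape
    keys, blocks = [], []
    for dr in range(0, H, 2 * bs):
        for dc in range(0, W, 2 * bs):
            keys.append((dr, dc))
            blocks.append(img[dr:dr + 2 * bs, dc:dc + 2 * bs])
    code = {}
    for r in range(0, H, bs):
        for c in range(0, W, bs):
            k, j, s, o, _ = best_match(img[r:r + bs, c:c + bs], blocks)
            code[(r, c)] = (keys[k], j, s, o)
    return code

def decode(code, size=256, bs=8, n_iters=8):
    # Iterate the coded map starting from an arbitrary image; the iterates
    # approach the fixed point, which is the reconstructed image.
    img = np.full((size, size), 128.0)
    for _ in range(n_iters):
        new = np.empty_like(img)
        for (r, c), ((dr, dc), j, s, o) in code.items():
            D = img[dr:dr + 2 * bs, dc:dc + 2 * bs]
            new[r:r + bs, c:c + bs] = s * eight_isometries(shrink(D))[j] + o
        img = np.clip(new, 0.0, 255.0)
    return img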
As the number of iterations increases, the quality of the reconstructed image improves progressively, as can be observed in Figure 6.
Figure 6: Figure 4 was compressed; the reconstruction process yields image (a) after one iteration, image (b) after four iterations, and image (c) after eight iterations.
We will show more examples, such as images 6 and 7, which were taken from [11], and images 8 and 9, taken from [12]. In order to understand the results obtained with the previous algorithms when processing these images, we need to know what the PSNR (Peak Signal-to-Noise Ratio) is: it is one of the best-known ways of measuring, in decibels, the quality of a reconstructed image with respect to an original image after a compression process. We also need the mean squared error, or MSE. Suppose we have two different images $I$ and $F$ of size $n \times m$; then these quantities are defined as follows:
$$MSE := \frac{1}{nm} \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} \| I(i,j) - F(i,j) \|^2,$$
$$PSNR := 10 \cdot \log_{10}\!\left(\frac{MAX_F^2}{MSE}\right) = 20 \cdot \log_{10}\!\left(\frac{MAX_F}{\sqrt{MSE}}\right),$$
where $MAX_F$ is the maximum value that a pixel can take in the image $F$.
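Both quantities translate directly into code; the following minimal sketch (NumPy assumed, with MAX_F taken as 255 for 8-bit images) is ours, not part of the original text:

import numpy as np

def mse(I, F):
    # Mean squared error between two images of the same size.
    return np.mean((I.astype(float) - F.astype(float)) ** 2)

def psnr(I, F, max_f=255.0):
    # Peak signal-to-noise ratio, in decibels.
    e = mse(I, F)
    return float("inf") if e == 0 else 10.0 * np.log10(max_f ** 2 / e)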
Figure 7 (original image and reconstructed image): The original image is 512 × 512 pixels; the reconstructed image has PSNR = 30.00 dB and a compression ratio of 70.29. Encoding time: 1.7 seconds.
Figure 8 (original image and reconstructed image): The original image is 512 × 512 pixels; the reconstructed image has PSNR = 29.91 dB and a compression ratio of 70.30. Encoding time: 1.1 seconds.
Figure 9 (original image and reconstructed image): The original image is 512 × 512 pixels; the reconstructed image has PSNR = 33.19 dB and a compression ratio of 7.93. Encoding time: 2.12 seconds.
Figure 10 (original image and reconstructed image): The original image is 256 × 256 pixels; the reconstructed image has PSNR = 32.75 dB and a compression ratio of 5.86. Encoding time: 0.7 seconds.
5  Conclusions

The technique of compressing digital images by means of fractal sets generated by partitioned iterated function systems is an ingenious application of mathematics to computer science. This work only introduces
the reader to the computational implementation of this technique. It is worth mentioning that there are studies, in [5], that address the problem of improving the different steps of the algorithm in order to achieve more efficient performance. Besides image compression, there are super-resolution techniques at different scales of an image; these techniques also exploit the self-similarity, or redundancy, of the image [6].
Sergio Mabel Juárez Vázquez
CIDETEC, IPN.
serge.galois@gmail.com

Flor de María Correa Romero
Departamento de Matemáticas,
Escuela Superior de Física y Matemáticas, IPN.
flor@esfm.ipn.mx
References

[1] Barnsley M., Fractals Everywhere, Academic Press, New York, 1988.
[2] Davoine F.; Svensson J.; Chassery J., A mixed triangular and quadrilateral partition for fractal image compression, Proceedings of the International Conference on Image Processing, Washington, DC (1995), No. 3, 284–287.
[3] Ebrahimi M.; Vrscay E., Self-similarity in imaging, 20 years after "Fractals Everywhere", Proceedings of the International Workshop on Local and Non-Local Approximation in Image Processing, Lausanne (2008), 165–172.
[4] Falconer K., Fractal Geometry: Mathematical Foundations and Applications, John Wiley and Sons, 2003.
[5] Fisher Y., Fractal Image Compression: Theory and Application, Springer-Verlag, 1995.
[6] Glasner D.; Bagon S.; Irani M., Super-resolution from a single image, Proceedings of the 12th International Conference on Computer Vision, 2009, 349–356.
[7] Hartenstein H.; Saupe D., Cost-based region growing for fractal image compression, Proceedings of the IX European Signal Processing Conference, Rhodes, 1998, 2313–2316.
[8] Hutchinson J. E., Fractals and self-similarity, Indiana Univ. Math. J. 30 (1981), 713–747.
[9] Kramm M., Image cluster compression using partitioned iterated function systems and efficient inter-image similarity features, Third International IEEE Conference on Signal-Image Technologies and Internet-Based Systems, Shanghai, 2007, No. 1, 989–996.
[10] Mandelbrot B., The Fractal Geometry of Nature, W. H. Freeman, 1977.
[11] Ochotta T.; Saupe D., Edge-based partition coding for fractal image compression, Arab. J. Sci. Eng., Special Issue on Fractal and Wavelet Methods (2004).
[12] Pérez J., Codificación fractal de imágenes, Universidad de Alicante, España, Tech. Rep., 1997. A version of the document is available at http://www.dlsi.ua.es/~japerez/pub/pdf/mastertesi1998.pdf
[13] Rudin W., Real and Complex Analysis, McGraw-Hill, 1986.
[14] Saupe D.; Ruhl M., Evolutionary fractal image compression, IEEE International Conference on Image Processing, Lausanne, 1996, 129–132.
Morfismos, Vol. 16, No. 1, 2012, pp. 29–43
Uniqueness of the Solution of the Yule-Walker Equations: A Vector Space Approach ∗

Ana Paula Isais-Torres ^1        Rolando Cavazos-Cadena ^2

∗ This work is part of the M. Sc. Thesis in Statistics of the first author under the supervision of the second author. The degree was granted in February 2012 by the Universidad Autónoma Agraria Antonio Narro.
^1 Author partially supported by a scholarship of CONACyT.
^2 Author supported in part by the PSF organization under grant 105657.

Abstract

This work concerns the Yule-Walker system of linear equations arising in the study of autoregressive processes. Given a complex polynomial ϕ(z) satisfying ϕ(0) = 1, elementary vector space ideas are used to derive an explicit formula for the determinant of the matrix M(ϕ) of the Yule-Walker system of equations corresponding to ϕ. The main conclusion renders the following non-singularity criterion: the matrix M(ϕ) is invertible if and only if the product of two roots of ϕ is always different from 1, a property that yields that the Yule-Walker system associated with a causal polynomial has a unique solution. The way in which this result is implicitly used in the time series literature is briefly discussed.

2000 Mathematics Subject Classification: 37M10.
Keywords and phrases: autoregressive processes, Yule-Walker equations, causal polynomials, unique solution, determination of an autocovariance function.
1  Introduction

This note concerns a basic question involving the Yule-Walker equations in time series analysis. To describe the problem we are interested in,
let $\{X_t\}$ be a zero-mean (second order) stationary process which is supposed to be real-valued and autoregressive of order $p$ (AR($p$)), that is, $\{X_t\}$ satisfies a difference equation of the form

(1)   $X_t + \varphi_1 X_{t-1} + \cdots + \varphi_p X_{t-p} = Z_t$,

where the $Z_t$'s are zero-mean uncorrelated random variables with common variance $\sigma^2 > 0$; see, for instance, Anderson (1971), pp. 166–176, or Box and Jenkins (1976), pp. 53–65. It is known that a process $\{X_t\}$ satisfying (1) exists if and only if the autoregressive polynomial $\varphi(z) := 1 + \varphi_1 z + \cdots + \varphi_p z^p$ is such that $\varphi(z) \neq 0$ for all complex $z$ with $|z| = 1$, and in this case there is no loss of generality in assuming that the polynomial $\varphi(z)$ is causal, i.e., $\varphi(z) \neq 0$ for all $z$ with $|z| \le 1$, a condition that is supposed to hold in the following discussion; for details, see Remarks 3 and 5 in Brockwell and Davis (1987), pp. 86–88.

Now, consider the problem of determining the autocovariance function $\gamma(\cdot)$ of $\{X_t\}$, which is given by $\gamma(h) := \mathrm{Cov}(X_{t+h}, X_t)$, $h = 0, \pm 1, \ldots$. Multiplying both sides of (1) by $X_{t-i}$ and taking the expectation on both sides of the resulting equality, it follows that $\sum_{k=0}^{p} \gamma(|i-k|)\varphi_k = 0$ for $i > 0$, and $\sum_{k=0}^{p} \gamma(k)\varphi_k = \sigma^2$ if $i = 0$, where $\varphi_0 = \varphi(0) = 1$. The following two-step method, which is described in Brockwell and Davis (1987), p. 97, is a computationally convenient tool to determine $\gamma(\cdot)$.

Step 1. Find $\gamma(0), \gamma(1), \ldots, \gamma(p)$ by solving

(2)   $\sum_{k=0}^{p} \gamma(k)\varphi_k = \sigma^2$,   $\sum_{k=0}^{p} \gamma(|i-k|)\varphi_k = 0$, $i = 1, \ldots, p$,

which is the Yule-Walker (Y-W) system of equations associated to $\varphi(\cdot)$.

Step 2. Using that $\gamma(i) = -\sum_{k=1}^{p} \gamma(i-k)\varphi_k$ for $i > p$, determine $\gamma(p+1), \gamma(p+2), \ldots$ in a recursive way.

In order to ensure that the above method relies on firm grounds, it is necessary to show that (2) has a unique solution, a fact that can be easily verified when the degree of $\varphi(z)$ is small, say $p = 1$ or $p = 2$; see Section 3 below. For a polynomial of arbitrary degree, Achilles (1987) used an argument based on matrix theory to obtain a formula for the determinant of the matrix $M(\varphi)$ of the above Y-W linear system associated to a polynomial $\varphi$, and a more compact and advanced approach
was used in Lütkepohl and Maschke (1988). On the other hand, it is interesting to observe that the Yule-Walker equations are used (i) to find moment estimators of the polynomial $\varphi$ (see, for instance, Section 8.2 in Brockwell and Davis, 1991) and (ii) to compute recursively the covariance function of a general ARMA($p,q$) process (Brockwell and Davis, 1991; Cavazos-Cadena, 1994). Also, recently, an efficient way to implement the innovations algorithm to construct best linear predictors via the Durbin-Levinson algorithm (using the non-singularity of $M(\varphi)$ for a causal polynomial $\varphi$) was obtained in Martínez-Martínez (2010).

The main objective of this work is to present an elementary derivation of the determinant of the matrix $M(\varphi)$ using simple vector space ideas. The result in this direction, presented in Theorem 3.1 below, yields the following criterion: the Y-W system associated with the polynomial $\varphi(\cdot)$ has a unique solution if and only if

(3)   $r_i r_j \neq 1$,   $i, j = 1, 2, \ldots, p$,
where r1 , . . . , rp are the roots of ϕ(z). Notice that when ϕ(z) is a causal polynomial all of its roots ri lie outside the unit disk, and in this case (3) is clearly satisfied. The proof of Theorem 3.1 is based on an induction argument using standard ideas on vector spaces, and the exposition has been organized as follows: In Section 2 the basic (infinite–dimensional) vector space is introduced, and the main result on the determinant of the matrix M (ϕ) is stated in Section 3. Next, in Section 4 some necessary preliminaries to prove Theorem 3.1 are presented, and the proof of the main result is given in Section 5. Finally, the presentation concludes with some brief comments in Section 6.
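As an illustration of the two-step method of Section 1 and of criterion (3), the following sketch (NumPy assumed; the coefficients are an arbitrary causal example of ours) builds the matrix of system (2) by collecting the coefficients of γ(0), ..., γ(p), solves it, and then extends γ recursively:

import numpy as np

def autocovariances(phi, sigma2, n_lags):
    # phi = [phi_1, ..., phi_p] from X_t + phi_1 X_{t-1} + ... + phi_p X_{t-p} = Z_t.
    p = len(phi)
    c = np.r_[1.0, phi]                       # c[k] = phi_k, with phi_0 = 1
    # Step 1: in row i of (2), the term gamma(|i-k|)*phi_k contributes phi_k to column |i-k|.
    M = np.zeros((p + 1, p + 1))
    for i in range(p + 1):
        for k in range(p + 1):
            M[i, abs(i - k)] += c[k]
    rhs = np.zeros(p + 1)
    rhs[0] = sigma2
    gamma = list(np.linalg.solve(M, rhs))     # unique solution for a causal polynomial
    # Step 2: gamma(i) = -sum_{k=1}^{p} gamma(i-k) * phi_k for i > p.
    for i in range(p + 1, n_lags + 1):
        gamma.append(-sum(phi[k - 1] * gamma[i - k] for k in range(1, p + 1)))
    return np.array(gamma)

# Causal AR(1) example, phi(z) = 1 - 0.5 z: gamma(h) = (4/3) * 0.5**h.
print(autocovariances([-0.5], sigma2=1.0, n_lags=4))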
2  Auxiliary Vector Space and Basic Notation

Throughout the remainder, $\mathbb{Z}$ and $\mathbb{N}$ stand for the sets of all integers and all nonnegative integers, respectively, and $\mathbb{C}$ denotes the set of all complex numbers. The complex vector space $L$ consists of all vectors $v : \mathbb{N} \to \mathbb{C}$ with the property that $v(k) = 0$ for all $k$ large enough, and $L$ is endowed with the usual addition and scalar multiplication. The shift operator $s : L \to L$ is defined as follows: for $v \in L$,

(4)   $s(v)(0) := 0$, and $s(v)(k) := v(k-1)$, $k = 1, 2, \ldots$;
in addition, the n-fold composition of the shift operator with itself is denoted by $s^n$, so that

(5)   $s^0(v) = v$, and $s^n(v) = s^{n-1}(s(v))$.

On the other hand, rows and columns of a square matrix $M$ are numbered starting from zero, and $\mathrm{Det}\, M$ denotes the determinant of $M$. For vectors $v_0, v_1, \ldots, v_n \in L$, the corresponding square matrix $M_{n+1}(v_0, v_1, \ldots, v_n)$ of order $(n+1)$ is defined by

(6)   $M_{n+1}(v_0, v_1, \ldots, v_n) := [v_i(j)]$, $i, j = 0, 1, \ldots, n$,

whereas $\mathrm{span}\{v_0, \ldots, v_n\}$ stands for the vector space generated by the vectors $v_0, \ldots, v_n$; the corresponding dimension is denoted by $\dim \mathrm{span}\{v_0, \ldots, v_n\}$. To conclude, given a polynomial $\varphi(z) = \varphi_0 + \varphi_1 z + \cdots + \varphi_p z^p$ of degree $p$, the vectors $\overrightarrow{\varphi}, \overleftarrow{\varphi} \in L$ are defined as follows: for $k = 0, 1, \ldots, p$,

(7)   $\overrightarrow{\varphi}(k) := \varphi_k$ and $\overleftarrow{\varphi}(k) := \varphi_{p-k}$,

and, for $k > p$, $\overrightarrow{\varphi}(k) = \overleftarrow{\varphi}(k) = 0$. Finally, the following notational convention concerning the coefficients of the polynomial $\varphi(z)$ will be used:

(8)   $\varphi_k := 0$ for $k < 0$ or $k > p$.

3  Main Result
Let $\varphi(z)$ be a polynomial of degree $p$. The objective of this section is to state a formula for the determinant of the matrix corresponding to the Y-W system (2). To begin with, notice that for $i > 0$,
$$\sum_{k=0}^{p} \gamma(|i-k|)\varphi_k = \sum_{k=0}^{i} \gamma(i-k)\varphi_k + \sum_{k=i+1}^{p} \gamma(k-i)\varphi_k = \sum_{j=0}^{i} \gamma(j)\varphi_{i-j} + \sum_{j=1}^{p-i} \gamma(j)\varphi_{i+j},$$
and using convention (8) it follows that
$$\sum_{k=0}^{p} \gamma(|i-k|)\varphi_k = \gamma(0)\varphi_i + \sum_{j=1}^{p} \gamma(j)[\varphi_{i-j} + \varphi_{i+j}].$$
Thus, the Y-W system (2) can be equivalently written as

(9)   $\sum_{k=0}^{p} \gamma(k)\varphi_k = \sigma^2$,   $\gamma(0)\varphi_i + \sum_{j=1}^{p} \gamma(j)[\varphi_{i-j} + \varphi_{i+j}] = 0$, $i = 1, 2, \ldots, p$.

The (square) matrix of this system will be denoted by $M(\varphi)$. Clearly, $M(\varphi)$ is of order $p + 1$ and is given as follows: for $i = 0, 1, 2, \ldots, p$,

(10)   $M(\varphi)_{i\,0} := \varphi_i$, and $M(\varphi)_{i\,j} := \varphi_{i-j} + \varphi_{i+j}$, $j = 1, 2, \ldots, p$;
for instance, $M(\varphi)_{0\,0} = \varphi_0$ and, for $j > 0$, $M(\varphi)_{0\,j} = \varphi_{0-j} + \varphi_{0+j} = \varphi_j$, in accordance with the first equation in (9). The next theorem contains a formula for the determinant of $M(\varphi)$ and, as a by-product, a criterion for the non-singularity of $M(\varphi)$ is obtained.

Theorem 3.1. Let $\varphi(z) = 1 + \varphi_1 z + \cdots + \varphi_p z^p$ be a complex polynomial of degree $p$. If the roots of $\varphi(\cdot)$ are $r_1, \ldots, r_p$ and $M(\varphi)$ is as in (10), then assertions (i) and (ii) below occur.

(i) The determinant of $M(\varphi)$ is given by

(11)   $\mathrm{Det}\, M(\varphi) = \prod_{1 \le i < j \le p} [1 - (r_i r_j)^{-1}] \prod_{i=1}^{p} (1 - r_i^{-2}),$

where, by (the usual) convention, for $p = 1$ the first product in the above display is 1. Consequently,

(ii) $M(\varphi)$ is non-singular if and only if $r_i r_j \neq 1$ for all $i, j = 1, \ldots, p$.

This result will be established in Section 5; for the moment, it is convenient to note that (ii) follows immediately from part (i). On the other hand, (11) is easily verified for small values of $p$. For instance, for $p = 1$, the polynomial $\varphi(z)$ is given by $\varphi(z) = 1 + \varphi_1 z$ and (10) yields that
$$M(\varphi) = \begin{pmatrix} 1 & \varphi_1 \\ \varphi_1 & 1 \end{pmatrix},$$
so that $\mathrm{Det}\, M(\varphi) = 1 - \varphi_1^2$, and this yields (11) with $p = 1$, since $\varphi(\cdot)$ has the unique root $r_1 = -1/\varphi_1$. For $p = 2$ factorize $\varphi(z)$ as $\varphi(z) = (1 + a_1 z)(1 + a_2 z)$,
where the roots of $\varphi(z)$ are $r_i = -1/a_i$, $i = 1, 2$. It follows that $\varphi(z) = 1 + (a_1 + a_2)z + a_1 a_2 z^2$ and $M(\varphi)$ is given by
$$M(\varphi) = \begin{pmatrix} 1 & a_1 + a_2 & a_1 a_2 \\ a_1 + a_2 & 1 + a_1 a_2 & 0 \\ a_1 a_2 & a_1 + a_2 & 1 \end{pmatrix}.$$
Then, expanding $\mathrm{Det}\, M(\varphi)$ along the third column, the following expression is obtained:
$$
\begin{aligned}
\mathrm{Det}\, M(\varphi) &= a_1 a_2 [(a_1 + a_2)^2 - a_1 a_2 (1 + a_1 a_2)] + (1 + a_1 a_2) - (a_1 + a_2)^2\\
&= -(a_1 + a_2)^2 (1 - a_1 a_2) + (1 + a_1 a_2)[1 - (a_1 a_2)^2]\\
&= (1 - a_1 a_2)[-(a_1 + a_2)^2 + (1 + a_1 a_2)^2]\\
&= (1 - a_1 a_2)(1 - a_1^2)(1 - a_2^2),
\end{aligned}
$$
and replacing $a_i$ by $-1/r_i$ the formula in (11) is obtained for the case $p = 2$. The proof of (11) in the general case is by induction and is presented in Section 5.
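Formula (11) and the non-singularity criterion of part (ii) are easy to check numerically. The sketch below (NumPy assumed; the coefficients are an arbitrary illustrative choice of ours) builds M(ϕ) directly from (10) and compares its determinant with the product over the roots:

import numpy as np

def M_of_phi(phi):
    # M(phi) as in (10), with phi = [phi_1, ..., phi_p] and phi_0 = 1.
    p = len(phi)
    c = np.r_[1.0, phi]
    coef = lambda k: c[k] if 0 <= k <= p else 0.0    # convention (8)
    M = np.empty((p + 1, p + 1))
    for i in range(p + 1):
        M[i, 0] = coef(i)
        for j in range(1, p + 1):
            M[i, j] = coef(i - j) + coef(i + j)
    return M

phi = [0.9, -0.3, 0.1]                        # illustrative coefficients, phi(0) = 1
r = np.roots(np.r_[1.0, phi][::-1])           # roots of 1 + phi_1 z + ... + phi_p z^p
lhs = np.linalg.det(M_of_phi(phi))
rhs = np.prod([1 - 1 / (r[i] * r[j]) for i in range(len(r)) for j in range(i + 1, len(r))])
rhs = rhs * np.prod(1 - r ** (-2.0))
print(lhs, np.real(rhs))                      # the two values agree up to rounding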
4  Preliminaries
This section contains the technical tools that will be used to establish Theorem 3.1. The starting point is the idea in the following definition.

Definition 4.1. Let $\varphi(z) = 1 + \varphi_1 z + \cdots + \varphi_p z^p$ be a polynomial of degree $p$. The sequence $V^{\varphi} = \{V_t^{\varphi} \mid t \in \mathbb{Z}\} \subset L$ is defined as follows:

(i) For $0 \le n < p$, $V_n^{\varphi}(0) := \varphi_n$, and $V_n^{\varphi}(k) := \varphi_{n-k} + \varphi_{n+k}$, $k = 1, 2, \ldots$.

(ii) For $n \in \mathbb{N}$, $V_{-n}^{\varphi} := s^n(\overrightarrow{\varphi})$ and $V_{n+p}^{\varphi} := s^n(\overleftarrow{\varphi})$;

see (4)–(7) for notation.
A glance at the above definition and (4)–(8) shows that the sequence $V^{\varphi}$ is related to the matrix $M(\varphi)$ through the following equality:

(12)   $M(\varphi) = M_{p+1}(V_0^{\varphi}, V_1^{\varphi}, \ldots, V_p^{\varphi})$.
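Relation (12) can also be verified mechanically. The sketch below (NumPy assumed; it reuses the helper M_of_phi from the sketch at the end of Section 3) constructs V_t^ϕ, truncated to p + 1 coordinates, directly from Definition 4.1 and the shift operator of (4)–(5), and checks that rows 0, ..., p reproduce M(ϕ):

import numpy as np

def shift(v, n):
    # s^n(v): prepend n zeros, keeping a fixed truncation length (see (4)-(5)).
    return np.r_[np.zeros(n), v][:len(v)]

def V_t(phi, t, length):
    # V_t^phi truncated to `length` coordinates, following Definition 4.1.
    p = len(phi)
    c = np.r_[1.0, phi]
    coef = lambda k: c[k] if 0 <= k <= p else 0.0            # convention (8)
    fwd = np.array([coef(k) for k in range(length)])          # "phi with right arrow"
    bwd = np.array([coef(p - k) for k in range(length)])      # "phi with left arrow"
    if t <= 0:
        return shift(fwd, -t)                                 # V_{-n} = s^n(fwd)
    if t >= p:
        return shift(bwd, t - p)                              # V_{n+p} = s^n(bwd)
    return np.array([coef(t)] + [coef(t - k) + coef(t + k) for k in range(1, length)])

phi = [0.9, -0.3, 0.1]
p = len(phi)
V = np.array([V_t(phi, t, p + 1) for t in range(p + 1)])
print(np.allclose(V, M_of_phi(phi)))                          # True: relation (12)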
The following is the key technical result of this section.

Theorem 4.2. Suppose that $\varphi(z) = 1 + \varphi_1 z + \cdots + \varphi_p z^p$ has degree $p$ and satisfies $\varphi(b) = \varphi(1/b) = 0$ for some $b \in \mathbb{C} \setminus \{0\}$. In this case the following assertions (i)–(iii) occur.

(i) $\dim \mathrm{span}\{V_{-1}^{\varphi}, V_0^{\varphi}, \ldots, V_{p+1}^{\varphi}\} \le p + 1$.

(ii) For each $a \in \mathbb{C}$,
$$\mathrm{Det}\, M_{p+2}(V_0^{\varphi} + aV_{-1}^{\varphi}, V_1^{\varphi} + aV_0^{\varphi}, \ldots, V_{p+1}^{\varphi} + aV_p^{\varphi}) = 0.$$

(iii) For all $a \in \mathbb{C}$, $\mathrm{Det}\, M[(1 + az)\varphi(z)] = 0$.

The proof of Theorem 4.2 has been divided into several pieces presented in Lemmas 4.4–4.7 below, which involve the notion introduced next.

Definition 4.3. Let $V = \{V_t \mid t \in \mathbb{Z}\}$ be a sequence in $L$.

(i) $V$ has property $D(p)$ if for all $n \in \mathbb{N}$, $\dim \mathrm{span}\{V_t \mid -n \le t \le p + n\} \le p + n$.

(ii) Given $a \in \mathbb{C}$, the sequence $T_a V = \{T_a V_t \mid t \in \mathbb{Z}\}$ is defined by $[T_a V]_t \equiv T_a V_t := V_t + aV_{t-1}$, $t \in \mathbb{Z}$.

The starting point of the journey to the proof of Theorem 4.2 is the following.

Lemma 4.4. Let $\varphi(z) = 1 + \varphi_1 z + \cdots + \varphi_p z^p$ be a polynomial of degree $p$ and $a \in \mathbb{C} \setminus \{0\}$. If $\theta(z) = (1 + az)\varphi(z)$, then $V^{\theta} = T_a V^{\varphi}$.

Proof. Let $n \in \{1, 2, \ldots, p\}$ be arbitrary. From Definition 4.1 it follows that
$$V_n^{\theta}(0) = \theta_n = \varphi_n + a\varphi_{n-1} = V_n^{\varphi}(0) + aV_{n-1}^{\varphi}(0) = T_a V_n^{\varphi}(0),$$
and, for $k = 1, 2, \ldots$,
$$V_n^{\theta}(k) = \theta_{n+k} + \theta_{n-k} = (\varphi_{n+k} + a\varphi_{n+k-1}) + (\varphi_{n-k} + a\varphi_{n-k-1}) = (\varphi_{n+k} + \varphi_{n-k}) + a[\varphi_{n-1-k} + \varphi_{n-1+k}] = V_n^{\varphi}(k) + aV_{n-1}^{\varphi}(k) = T_a V_n^{\varphi}(k).$$
These two last displays show that, for $1 \le n \le p$, $V_n^{\theta} = T_a V_n^{\varphi}$, and to complete the proof this equality should be verified for $n < 0$ and $n > p$. To achieve this goal, first notice that $\overrightarrow{\theta} = \overrightarrow{\varphi} + a s(\overrightarrow{\varphi})$ and $\overleftarrow{\theta} = s(\overleftarrow{\varphi}) + a\overleftarrow{\varphi}$, a fact that can be obtained from (4) and (7). Then, for $n \ge 0$,
$$V_{-n}^{\theta} = s^n(\overrightarrow{\theta}) = s^n[\overrightarrow{\varphi} + a s(\overrightarrow{\varphi})] = s^n(\overrightarrow{\varphi}) + a s^{n+1}(\overrightarrow{\varphi}) = V_{-n}^{\varphi} + aV_{-n-1}^{\varphi} = T_a V_{-n}^{\varphi}.$$
Similarly,
$$V_{n+p+1}^{\theta} = s^n(\overleftarrow{\theta}) = s^n[s(\overleftarrow{\varphi}) + a\overleftarrow{\varphi}] = s^{n+1}(\overleftarrow{\varphi}) + a s^n(\overleftarrow{\varphi}) = V_{n+1+p}^{\varphi} + aV_{n+p}^{\varphi} = T_a V_{n+1+p}^{\varphi}.$$
Thus, it has been established that $V_{-n}^{\theta} = T_a V_{-n}^{\varphi}$ and $V_{n+1+p}^{\theta} = T_a V_{n+1+p}^{\varphi}$ for all $n \in \mathbb{N}$; as already noted, this completes the proof.
The following lemma studies the relation of property $D(k)$ with the transformation $T_a$; see Definition 4.3.

Lemma 4.5. Let $a \in \mathbb{C}$ be arbitrary and suppose that $V = \{V_t \mid t \in \mathbb{Z}\} \subset L$ has property $D(k)$. Then $T_a V$ has property $D(k+1)$.

Proof. Notice that $T_a V_t \in \mathrm{span}\{V_t, V_{t-1}\}$ and then, for arbitrary $r \in \mathbb{N}$,
$$\mathrm{span}\{T_a V_t \mid -r \le t \le k + 1 + r\} \subset \mathrm{span}\{V_t \mid -(r+1) \le t \le k + (r+1)\}.$$
Now observe that, since the sequence $V$ has property $D(k)$, the space on the right-hand side of the previous inclusion has dimension less than or equal to $k + r + 1$ and, consequently, $\dim \mathrm{span}\{T_a V_t \mid -r \le t \le (k+1) + r\} \le (k+1) + r$, a relation that shows that $T_a V$ has property $D(k+1)$, since $r$ was arbitrary.

The next two lemmas relate property $D(k)$ with sequences of the form $V^{\varphi}$; see Definition 4.1.

Lemma 4.6. Let $\varphi(z)$ be a polynomial of degree $p \ge 1$ and suppose that $\varphi(1) = 0$ or $\varphi(-1) = 0$. In this case $V^{\varphi}$ has property $D(p)$.

Proof. First suppose that $p = 1$. When $\varphi(1) = 0$ the polynomial $\varphi(z)$ is given by $\varphi(z) = \varphi(0)(1 - z)$ and, using (8), it follows that $\overrightarrow{\varphi} = -\overleftarrow{\varphi}$, which yields that, for $n \ge 0$, $V_{-n}^{\varphi} = s^n(\overrightarrow{\varphi}) = -s^n(\overleftarrow{\varphi}) = -V_{n+1}^{\varphi}$. If $\varphi(-1) = 0$ it follows that $\varphi(z) = \varphi(0)(1 + z)$ and then $\overrightarrow{\varphi} = \overleftarrow{\varphi}$, and then $V_{-n}^{\varphi} = V_{n+1}^{\varphi}$, $n \in \mathbb{N}$. Therefore, in either case, for every $r \in \mathbb{N}$,
$$\mathrm{span}\{V_t^{\varphi} \mid -r \le t \le 1 + r\} = \mathrm{span}\{V_t^{\varphi} \mid 1 \le t \le 1 + r\},$$
and since the space on the right-hand side has $r + 1$ generators, it follows that $\dim \mathrm{span}\{V_t^{\varphi} \mid -r \le t \le 1 + r\} \le r + 1$, i.e., $V^{\varphi}$ has property $D(1)$; see Definition 4.3. The proof will now be completed by induction on $p$. Suppose that the result holds for $p = k$, and let $\varphi$ be a polynomial of degree $k + 1$ vanishing at $1$ or $-1$. In this case it is clearly possible to factorize $\varphi(z)$ as $\varphi(z) = (1 + az)\theta(z)$, where $a \in \mathbb{C}$ and $\theta(z)$ has degree $k$ and satisfies $\theta(1) = 0$ or $\theta(-1) = 0$. By the induction hypothesis, $V^{\theta}$ has property $D(k)$ and, using Lemmas 4.4 and 4.5, it follows that $V^{\varphi} = T_a V^{\theta}$ has property $D(k+1)$.

The following is the final step before the proof of Theorem 4.2.

Lemma 4.7. Let $\varphi(z)$ be a polynomial of degree $p \ge 2$ such that $\varphi(b) = \varphi(1/b) = 0$ for some $b \in \mathbb{C} \setminus \{0, 1, -1\}$. Then $V^{\varphi}$ has property $D(p)$.

Proof. The argument is along the same lines as in the proof of Lemma 4.6. First suppose that $p = 2$. In this case
$$\varphi(z) = \varphi(0)(1 - bz)(1 - z/b) = \varphi(0)[1 - (b + b^{-1})z + z^2],$$
and using (7) it follows that $\overrightarrow{\varphi} = \overleftarrow{\varphi}$; by Definition 4.1(ii) this yields that $V_{-n}^{\varphi} = V_{2+n}^{\varphi}$ for every $n \ge 0$, and then, for each $r \in \mathbb{N}$,
$$\mathrm{span}\{V_t^{\varphi} \mid -r \le t \le 2 + r\} = \mathrm{span}\{V_t^{\varphi} \mid 1 \le t \le 2 + r\},$$
and since the vector space on the right-hand side has $r + 2$ generators this yields that $\dim \mathrm{span}\{V_t^{\varphi} \mid -r \le t \le 2 + r\} \le r + 2$, that is, $V^{\varphi}$ has property $D(2)$. The result for arbitrary $p$ is obtained by an induction argument similar to that used in the proof of Lemma 4.6.

The previous lemmas are used below to establish the main result of this section.

Proof of Theorem 4.2. Let $\varphi(z) = 1 + \varphi_1 z + \cdots + \varphi_p z^p$ be a polynomial of degree $p$ with $\varphi(b) = \varphi(1/b) = 0$ for some $b \in \mathbb{C} \setminus \{0\}$.

(i) When $b = 1$ or $b = -1$, Lemma 4.6 yields that $V^{\varphi}$ has property $D(p)$, and Lemma 4.7 implies that the same conclusion holds when $b \neq 1, -1$. Then, by Definition 4.3(i), it follows that
$$\dim \mathrm{span}\{V_{-1}^{\varphi}, V_0^{\varphi}, \ldots, V_p^{\varphi}, V_{p+1}^{\varphi}\} \le p + 1.$$

(ii) Notice that
$$\mathrm{span}\{V_r^{\varphi} + aV_{r-1}^{\varphi} \mid r = 0, 1, \ldots, p+1\} \subset \mathrm{span}\{V_t^{\varphi} \mid -1 \le t \le p+1\},$$
and then $\dim \mathrm{span}\{V_r^{\varphi} + aV_{r-1}^{\varphi} \mid r = 0, 1, \ldots, p+1\} \le p + 1$, by part (i). It follows that the $p + 2$ vectors $V_r^{\varphi} + aV_{r-1}^{\varphi}$, $r = 0, 1, 2, \ldots, (p+1)$, are linearly dependent in $L$, a fact that implies the linear dependence of the rows of $M_{p+2}(V_0^{\varphi} + aV_{-1}^{\varphi}, \ldots, V_{p+1}^{\varphi} + aV_p^{\varphi})$; as a consequence,
$$\mathrm{Det}\, M_{p+2}(V_0^{\varphi} + aV_{-1}^{\varphi}, \ldots, V_{p+1}^{\varphi} + aV_p^{\varphi}) = 0;$$
see, for instance, Chapter 5 of Hoffman and Kunze (1971).

(iii) Set $\psi(z) := (1 + az)\varphi(z)$. In this case $\psi(z)$ has degree $p + 1$, and using (12) with $p + 1$ and $\psi$ instead of $p$ and $\varphi$, respectively, it follows that
$$M(\psi) = M_{p+2}(V_0^{\psi}, V_1^{\psi}, \ldots, V_{p+1}^{\psi}) = M_{p+2}(V_0^{\varphi} + aV_{-1}^{\varphi}, \ldots, V_{p+1}^{\varphi} + aV_p^{\varphi}),$$
where Lemma 4.4 was used to set the second equality; from this point, the previous part yields that $\mathrm{Det}\, M(\psi) = 0$.

This section concludes with a simple fact that will be useful in the proof of Theorem 3.1.

Lemma 4.8. Let $\varphi(z)$ be a polynomial of degree $p$ with $\varphi(0) = 1$. In this case
$$\mathrm{Det}\, M(\varphi) = \mathrm{Det}\, M_{p+2}(V_0^{\varphi}, V_1^{\varphi}, \ldots, V_{p+1}^{\varphi}).$$

Proof. For convenience set $L := M_{p+2}(V_0^{\varphi}, V_1^{\varphi}, \ldots, V_{p+1}^{\varphi})$ and observe the following facts (a) and (b).

(a) A glance at (6) and Definition 4.1(i) shows that the submatrix obtained by deleting the last row and the last column of $L$ is $M_{p+1}(V_0^{\varphi}, V_1^{\varphi}, \ldots, V_p^{\varphi})$.

Next, the components in the last column of $L$ will be evaluated. First recall Definition 4.1(ii) and notice that $L_{0\,p+1} = V_0^{\varphi}(p+1) = s^0(\overrightarrow{\varphi})(p+1) = \overrightarrow{\varphi}(p+1) = 0$; see (5) and (7). On the other hand, for $1 \le n < p$, $L_{n\,p+1} = \varphi_{n+p+1} + \varphi_{n-p-1} = 0$, where convention (8) was used to set the last equality. Finally, $L_{p\,p+1} = V_p^{\varphi}(p+1) = s^0(\overleftarrow{\varphi})(p+1) = \overleftarrow{\varphi}(p+1) = 0$, and $L_{p+1\,p+1} = s(\overleftarrow{\varphi})(p+1) = \overleftarrow{\varphi}(p) = \varphi_0 = \varphi(0) = 1$. Summarizing:

(b) The last column of $L$ consists entirely of zeros except for the element in the last row, which is one.

To conclude, expand $\mathrm{Det}\, L$ along the last column. In this case the facts (a) and (b) above together imply that $\mathrm{Det}\, L = \mathrm{Det}\, M_{p+1}(V_0^{\varphi}, V_1^{\varphi}, \ldots, V_p^{\varphi}) = \mathrm{Det}\, M(\varphi)$, where the last equality follows from (12).
5  Proof of Theorem 3.1

The preliminary results in the previous section will be used to establish the main result of this note.

Proof of Theorem 3.1. As already mentioned, it is sufficient to prove part (i). Let $\varphi(z) = 1 + \varphi_1 z + \cdots + \varphi_p z^p$ be a polynomial of degree $p$ and factorize $\varphi$ as
$$\varphi(z) = \prod_{i=1}^{p} (1 + a_i z),$$
where the roots of $\varphi(z)$ are $-1/a_i$, $i = 1, 2, \ldots, p$. With this notation, (11) is equivalent to
$$\mathrm{Det}\, M\Big(\prod_{i=1}^{p} (1 + a_i z)\Big) = \prod_{1 \le i < j \le p} [1 - a_i a_j] \prod_{i=1}^{p} (1 - a_i^2),$$
an equality that was verified in Section 3 for $p = 1$ and $p = 2$. The proof of this equality will be completed by induction. Suppose that it holds for $p = n \ge 2$ and let $a_1, a_2, \ldots, a_{n+1}$ be nonzero complex numbers. Define
$$\psi(z) := \prod_{i=1}^{n} (1 + a_i z)$$
and, for each $c \in \mathbb{C}$, set
$$F(c) := \mathrm{Det}\, M_{n+2}(V_0^{\psi} + cV_{-1}^{\psi}, \ldots, V_{n+1}^{\psi} + cV_n^{\psi}).$$
Combining Lemma 4.4 and (12) it follows that

(a) $F(c) = \mathrm{Det}\, M[(1 + cz)\psi(z)]$; in particular, $F(a_{n+1}) = \mathrm{Det}\, M\big(\prod_{i=1}^{n+1} (1 + a_i z)\big)$.

Using the multilinearity of the determinant function, the definition of $F(c)$ yields that

(b) $F(c)$ is a polynomial in $c$ of degree $\le n + 2$; see, for instance, Hoffman and Kunze (1971), Chapter 5.

Next, the roots of the polynomial $F(c)$ will be determined. First observe that

(c) $F(1) = F(-1) = 0$.

To verify this last assertion set $\psi^*(z) := (1 + z)\prod_{i=2}^{n} (1 + a_i z)$ and notice that $\psi^*(-1) = 0$, and $(1 + z)\psi(z) = (1 + a_1 z)\psi^*(z)$; see the definition of $\psi$. Then the above property (a) yields that $F(1) = \mathrm{Det}\, M[(1 + z)\psi(z)] = \mathrm{Det}\, M[(1 + a_1 z)\psi^*(z)] = 0$, where the last equality is due to Theorem 4.2(iii) with $\psi^*(z)$ and $-1$ instead of $\varphi(z)$ and $b$, respectively. Similarly, it can be shown that $F(-1) = 0$.

(d) $F(1/a_i) = 0$ for $i = 1, 2, \ldots, n$. To show this, let $k, i \in \{1, 2, \ldots, n\}$
be fixed integers with $k \neq i$ and define
$$\tilde{\psi}(z) := (1 + z/a_i) \prod^{(k)} (1 + a_j z),$$
where $\prod^{(k)}$ indicates the product over all the integers $j$ between 1 and $n$ satisfying $j \neq k$; notice that

(13)   $(1 + z/a_i)\psi(z) = (1 + a_k z)\tilde{\psi}(z)$.

From the definition of $\tilde{\psi}$ it is clear that $\tilde{\psi}(-a_i) = 0$ and, since $k \neq i$, the polynomial $\tilde{\psi}(z)$ contains the factor $(1 + a_i z)$, and then $\tilde{\psi}(-1/a_i) = 0$. Therefore, from (a) and (13) it follows that $F(1/a_i) = \mathrm{Det}\, M[(1 + z/a_i)\psi(z)] = \mathrm{Det}\, M[(1 + a_k z)\tilde{\psi}(z)]$, and an application of Theorem 4.2(iii) with $\tilde{\psi}$ and $-a_i$ instead of $\varphi$ and $b$, respectively, yields that $F(1/a_i) = 0$.

To continue, suppose for the moment that $a_1, a_2, \ldots, a_n$ are different numbers in $\mathbb{C} \setminus \{0, 1, -1\}$. In this case, the above facts (c) and (d) together show that the polynomial $F(c)$ has $n + 2$ different roots, namely, $1$, $-1$, and $1/a_i$, $i = 1, 2, \ldots, n$. Combining this fact with (b) it follows that $F(\cdot)$ has degree $n + 2$ and it can be factorized as
$$F(c) = F(0)(1 - c)(1 + c)\prod_{i=1}^{n} (1 - a_i c).$$
Setting $c = a_{n+1}$ and using (a) it follows that
$$\mathrm{Det}\, M\Big(\prod_{i=1}^{n+1} (1 + a_i z)\Big) = F(0)(1 - a_{n+1}^2)\prod_{i=1}^{n} (1 - a_i a_{n+1}).$$
Next, using the definition of $F(c)$ it follows that $F(0) = \mathrm{Det}\, M_{n+2}(V_0^{\psi}, V_1^{\psi}, \ldots, V_{n+1}^{\psi})$, and then $F(0) = \mathrm{Det}\, M(\psi)$, by Lemma 4.8 applied to $\psi$, which has degree $n$, and then the induction hypothesis yields that
$$F(0) = \prod_{i=1}^{n} (1 - a_i^2) \prod_{1 \le i < j \le n} (1 - a_i a_j),$$
and combining this equality with the previous display it follows that

(14)   $\mathrm{Det}\, M\Big(\prod_{i=1}^{n+1} (1 + a_i z)\Big) = \prod_{i=1}^{n+1} (1 - a_i^2) \prod_{1 \le i < j \le n+1} (1 - a_i a_j),$
which is the desired equality with $p = n + 1$. Although (14) has been established under the assumption that $a_1, \ldots, a_{n+1}$ are different numbers in $\mathbb{C} \setminus \{0, 1, -1\}$, the equality holds for arbitrary $a_1, \ldots, a_{n+1} \in \mathbb{C} \setminus \{0\}$, since both sides of (14) are continuous functions of the $a_i$'s. In short, assuming that the equality holds for $p = n$ it has been shown that it is also valid for $p = n + 1$, completing the induction argument.
6  Concluding Remarks
Given a polynomial $\varphi(z)$ with $\varphi(0) = 1$, a necessary and sufficient condition was established for the corresponding Yule-Walker system to have a unique solution, and it has been shown that such a condition is satisfied by a causal polynomial. Besides providing a rigorous basis for the two-step method described in Section 1, there are other parts of the theory of time series where it is important to know that a Yule-Walker system has a unique solution. For instance, consider the following result: Given $p > 0$ and an autocovariance function $\gamma(\cdot)$ with $\gamma(h) \to 0$ as $h \to \infty$, there exists an AR($p$) process $\{Y_t\}$ whose autocovariance function $\gamma_Y(\cdot)$ coincides with $\gamma(\cdot)$ at lags $h = 0, 1, \ldots, p$. A proof of this fact can be found in Brockwell and Davis (1987), pp. 232–233, and here we just mention that there is a passage where, for a certain causal polynomial $\varphi(\cdot)$, it is shown that both $\{\gamma(h) \mid 0 \le h \le p\}$ and $\{\gamma_Y(h) \mid 0 \le h \le p\}$ satisfy (2), and then it is immediately concluded that $\gamma(h) = \gamma_Y(h)$ for $0 \le h \le p$. Thus, it is implicitly assumed that the Yule-Walker system of a causal polynomial has a unique solution, a fact that, by Theorem 3.1, is indeed true.

Acknowledgement. The authors are deeply grateful to the reviewers for their suggestions to improve the paper.

Ana Paula Isais-Torres
Subdirección de Postgrado,
Universidad Autónoma Agraria Antonio Narro,
Calzada Antonio Narro 1923, Buenavista,
Saltillo, Coah. 25315, México

Rolando Cavazos-Cadena
Departamento de Estadística y Cálculo,
Universidad Autónoma Agraria Antonio Narro,
Calzada Antonio Narro 1923, Buenavista,
Saltillo, Coah. 25315, México
References

[1] Achilles M., Zur Lösung der Yule-Walker-Gleichungen, Metrika 34 (1987), 237–251.
[2] Anderson T. W., The Statistical Analysis of Time Series, Wiley, New York, 1971.
[3] Box G. E. P.; Jenkins G. M., Time Series Analysis, Forecasting and Control, Holden-Day, San Francisco, CA, 1976.
[4] Brockwell P. J.; Davis R. A., Time Series: Theory and Methods, Springer-Verlag, New York, 1987.
[5] Cavazos-Cadena R., Computing the asymptotic covariance matrix of a vector of sample autocorrelations for ARMA processes, Applied Mathematics and Computation 64 (1994), 121–137.
[6] Hoffman K.; Kunze R., Linear Algebra, Prentice-Hall, Englewood Cliffs, New Jersey, 1971.
[7] Lütkepohl H.; Maschke E. O., Bemerkung zur Lösung der Yule-Walker-Gleichungen, Metrika 35 (1988), 287–289.
[8] Martínez-Martínez N. Y., Implementación cuadrática del algoritmo de innovaciones aplicado a una serie estacionaria, Tesis de Maestría en Estadística, Dirección de Postgrado, Universidad Autónoma Agraria Antonio Narro, Saltillo, Coah., 2010.
Contents - Contenido

The Mathematical Life of Nikolai Vasilevski
Sergei Grudsky .................................................. 1

Sistemas de funciones iteradas por partes
Sergio Mabel Juárez Vázquez y Flor de María Correa Romero ........ 9

Uniqueness of the Solution of the Yule-Walker Equations: A Vector Space Approach
Ana Paula Isais-Torres and Rolando Cavazos-Cadena ............... 29