

Therefore the German national research programme »Information Systems in Earth Management« (Informationssysteme im Erdmanagement) was launched in late 2002 as part of the R&D programme GEOTECHNOLOGIEN. The programme aims to improve basic knowledge, to develop general tools and methods for interoperability, and to foster the application of spatial information systems at different levels. The funded projects focus on the following key themes: semantic interoperability and schematic mapping; semantic and geometric integration of topographical, soil, and geological data; rule-based derivation of geoinformation; typology of marine and geoscientific information; investigation and development of mobile geo-services; and the coupling of information systems and simulation systems for the evaluation of transport processes. This abstract volume contains a collection of first research results and experiences presented at a science meeting held in Aachen, Germany, in March 2004. The presentations reflect the multidisciplinary approach of the programme and offer a comprehensive insight into the wide range of research opportunities and applications, including solid Earth sciences, hydrology, and computer sciences. In conjunction with Science Report No. 2 from the »Kick-off meeting«, it gives a good overview of the progress achieved so far.

Science Report GEOTECHNOLOGIEN

At all levels of public life, geoinformation and geoinformation systems (GIS) – as tools to deal with this type of information – play an important role. Huge amounts of geoinformation are created and used in land registration offices, utility companies, environmental and planning offices, and so on. For a national economy, geoinformation is a type of infrastructure similar to the traffic network. Although GIS are used in daily practice, there is still an ongoing demand for research, especially in the geoscientific context.

Information Systems in Earth Management


GEOTECHNOLOGIEN Science Report

Information Systems in Earth Management Status Seminar RWTH Aachen University 23-24 March 2004

Programme & Abstracts

The GEOTECHNOLOGIEN programme is funded by the Federal Ministry for Education and Research (BMBF) and the German Research Council (DFG)

No. 4

ISSN: 1619-7399





Impressum

Editors (Schriftleitung): Dr. Alexander Rudloff, Dr. Ludwig Stroink
© Koordinierungsbüro GEOTECHNOLOGIEN, Potsdam 2004
ISSN 1619-7399

The Editors and the Publisher cannot be held responsible for the opinions expressed and the statements made in the articles published; such responsibility rests with the authors.

Die Deutsche Bibliothek – CIP-Einheitsaufnahme: GEOTECHNOLOGIEN: Information Systems in Earth Management, Status Seminar, RWTH Aachen University, 23-24 March 2004, Programme & Abstracts. – Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2004 (GEOTECHNOLOGIEN Science Report No. 4) ISSN 1619-7399

Distribution (Bezug):
Koordinierungsbüro GEOTECHNOLOGIEN
Telegrafenberg A6
14471 Potsdam, Germany
Phone +49 (0)331-288 10 71
Fax +49 (0)331-288 10 77
www.geotechnologien.de
geotech@gfz-potsdam.de

Cover photo (Bildnachweis): RWTH Aachen, Lehrstuhl für Ingenieurgeologie und Hydrogeologie (LIH)


Preface

At all levels of public life, geoinformation and geoinformation systems (GIS) – as tools to deal with this type of information – play an important role. Huge amounts of geoinformation are created and used in land registration offices, utility companies, environmental and planning offices, and so on. For a national economy, geoinformation is a type of infrastructure similar to the traffic network. This report focuses on research issues related to geoscientific aspects. In Germany, the national programme »Information Systems in Earth Management« (Informationssysteme im Erdmanagement) was launched in late 2002 as part of the R&D programme GEOTECHNOLOGIEN. The programme aims to improve basic knowledge, to develop general tools and methods for interoperability, and to foster the application of spatial information systems at different levels. For an initial funding phase (2002-2005), a total sum of nearly € 4 million will be spent by the Federal Ministry of Education and Research (BMBF).

The objectives of the projects cover the following key themes:
- Semantic interoperability and schematic mapping
- Semantic and geometric integration of topographical, soil, and geological data
- Rule-based derivation of geoinformation
- Typology of marine and geoscientific information
- Investigation and development of mobile geo-services
- Coupling of information systems and simulation systems for the evaluation of transport processes

The main objective of the first status seminar »Information Systems in Earth Management« is to bring together participants from the different research projects to present their proposed and ongoing work and to exchange ideas; several projects are interlinked and can therefore benefit from synergies. For all participants from Germany, Europe, and overseas, this meeting provides a forum for discussing the status, challenges, and future goals of the presented projects and for sharing interests and results.

Ralf Bill Alexander Rudloff



Table of Contents

Scientific Programme ........................... 1-4

Abstracts of Oral Presentations and Posters (in alphabetical order) ....... 6-97

Authors' Index ................................... 98-99




Programme of the Status Seminar »Information Systems in Earth Management«, RWTH Aachen University, 23-24 March 2004

Tuesday, 23 March 2004

10:00-11:00 Registration and Poster Mounting (RWTH, Main Building, Templergraben)
11:00-11:30 Welcome by RWTH Aachen University, RWTH/LIH & GEOTECHNOLOGIEN
11:30-12:00 Egenhofer M.J. (Keynote): Information Systems Fueled with Data from Geo-Sensor Networks

12:00-13:00 – SESSION »DISTRIBUTIVE GEODATA« (CHAIR: N.N.)
- Bogena H., Kunkel R., Leppig B., Müller F., Wendland F.: Assessment of Groundwater Vulnerability at Different Scales
- Kappler W., Kiehle Ch., Kunkel R., Meiners H.-G., Müller W., Betteraey F. van: Promoting Use Cases to the Geoservice Groundwater Vulnerability
- Azzam R., Kappler W., Kiehle Ch., Kunkel R.: Development of a Spatial Data Infrastructure for the Derivation of Geoinformation from Distributive Geodata – Prototyping a Geoservice for Groundwater Vulnerability

13:00-14:00 Lunch break

14:00-15:30 – SESSION »ADVANCEMENT OF GEOSERVICES« (CHAIR: N.N.)
- Plan O., Reinhardt W., Kandawasvika A.: Advancement of Geoservices – Design and Prototype Implementation of Mobile Components and Interfaces for Geoservices
- Häußler J., Merdes M., Zipf A.: Advancement of Geoservices – A Graphical Editor for Geodata in Mobile Environments
- Wiesel J., Staub G., Brand S., Coelho A.H.: Advancement of Geoservices – Augmented Reality GIS Client
- Breunig M., Thomsen A., Bär W.: Advancement of Geoservices – Services for Geoscientific Applications Based on a 3D-Geodatabase Kernel

15:30-16:30 – SESSION »SEMANTIC AND GEOMETRIC INTEGRATION« (CHAIR: N.N.)
- Gösseln G. von, Sester M.: Change Detection and Integration of Topographic Updates from ATKIS to Geoscientific Data Sets
- Butenuth M., Heipke Ch.: Integrating Imagery and ATKIS-data to Extract Field Boundaries and Wind Erosion Obstacles
- Tiedge M., Lipeck U., Mantel D.: Design of a Database System for Linking Geoscientific Data

16:30-17:00 Coffee break
17:00-19:00 Short Poster Presentations (1-2 transparencies each), followed by Poster Session
approx. 19:00 Dinner/Buffet in the »Gewölbekeller« of the Main Building

Wednesday, 24 March 2004

09:00-10:30 – SESSION »MAR_GIS« (CHAIR: N.N.)
- Schlüter M., Vetter L., Schröder W.: Marine Geo-Information-System for Visualisation and Typology of Marine Geodata (Mar_GIS)
- Schröder W., Pesch R.: Using Geostatistical Methods to Estimate Surface Maps for the Sea Floor of the North Sea
- Vetter L., Köberle A.: Web-Based Marine Geoinformation Services in MarGIS

10:30-11:00 Coffee break

11:00-12:00 – SESSION »MEANINGS« (CHAIR: N.N.)
- Bernard L., Einspanier U., Haubrock S., Hübner S., Klien E., Kuhn W., Lessing R., Lutz M., Visser U.: Ontology-Based Discovery and Retrieval of Geographic Information in Spatial Data Infrastructures

12:00-13:00 – SESSION »ISSNEW« (CHAIR: N.N.)



- Hecker J.M., Kersebaum K.C., Mirschel W., Wegehenkel M., Wieland R.: newSocrates – The Crop Growth, Soil Water and Nitrogen Leaching Model Within the ISSNEW Project
- Waldow H. von: The Implementation of an Analytical Model for Deep Percolation
- Arndt O.: ISSNEW – Semiautomatic Linking of a Geographic Information System and a Ground Water Model to Evaluate Non-point Nutrient Entries into Surface and Ground Water Bodies

13:00-14:00 Lunch break
14:00 Final Discussion & Further Planning
approx. 15:30 End of the Meeting



POSTER SESSION (CHAIR: N.N.) – TUESDAY AFTERNOON
- Azzam R., Kappler W., Kiehle Ch., Kunkel R.: Development of a Spatial Data Infrastructure for the Derivation of Geoinformation from Distributive Geodata – Prototyping a Geoservice for Groundwater Vulnerability
- Bernard L., Einspanier U., Haubrock S., Hübner S., Klien E., Kuhn W., Lessing R., Lutz M., Visser U.: Ontology-Based Discovery and Retrieval of Geographic Information in Spatial Data Infrastructures
- Bogena H., Kunkel R., Leppig B., Müller F., Wendland F.: Assessment of Groundwater Vulnerability at Different Scales
- Butenuth M., Heipke C.: Integrating Imagery and ATKIS-data to Extract Field Boundaries and Wind Erosion Obstacles
- Dannowski R., Michels I., Steidl J., Wieland R.: Simulation and Information Software for Implementing the European Water Framework Directive: ISSNEW
- Jerosch K., Schlüter M., Schäfer A.: Data Management Approach for Marine Data Within a Geodatabase – Project MAR_GIS
- Heier Ch.: Implementation of a Spatial Data Infrastructure for the Derivation of Geoinformation out of Distributive Geodata Based on OGC-compliant Webservices
- Kappler W., Kiehle Ch., Kunkel R., Meiners H.-G., Müller W., Betteraey F. van: Promoting Use Cases to the Geoservice Groundwater Vulnerability
- Pesch R., Schröder W.: Typology of the North Sea by Means of Geostatistical and Multivariate Statistical Methods
- Staub G., Wiesel J., Brand S., Coelho A.H.: Advancement of Geoservices – Augmented Reality GIS Client





ISSNEW – Semiautomatic Linking of a Geographic Information System and a Ground Water Model to Evaluate Non-point Nutrient Entries into Surface and Ground Water Bodies
Arndt O.
WASY Gesellschaft fuer wasserwirtschaftliche Planung und Systemforschung mbH, Waltersdorfer Strasse 105, 12526 Berlin, Germany, E-Mail: o.arndt@wasy.de

Introduction
Groundwater resources (bodies) are an important source of water supply in many regions of the world. They have to be protected and managed sustainably. In order to enable appropriate action planning and to fulfil the obligation to report on the state of water bodies, it is essential to provide a database information system for gathering, structuring and analyzing spatial and non-spatial data, as well as for setting up relationships between data. Furthermore, it is important to provide simulation systems, coupled with the information system, for the necessary spatially and temporally differentiated evaluation of the effects of measures for improving the quantitative and qualitative state. In Europe the »Directive of the European Parliament and of the Council 2000/60/EC Establishing a Framework for Community Action in the Field of Water Policy« – the European Water Framework Directive (WFD) – came into effect as a basic framework for European procedure in the area of water management. Implementing the legal and obligatory guidelines of the WFD has led to a number of new requirements to be met by water management administrations, also with regard to groundwater bodies. Some of these new requirements include collecting additional information about the state of water management and systematically preparing these data; in addition, all relevant water bodies need to be documented and assessed. The WFD stipulates that measures be set up by 2009, with the fulfilment of WFD requirements to take place no later than 2015. Considering the processes in the water cycle over time, it appears questionable whether it is possible to meet the requirements in every case within six years and whether the currently available technology enables appropriate and efficient measure selection. Apart from the earliest possible undertaking of appropriate initial measures, this leads to two consequences (Dannowski et al. 2003): (1) measures should be introduced based on efficiency and differentiated according to specific place and time, so that within the given time frame at least initial effects (e.g., a reverse trend in water pollution levels) can be demonstrated, and (2) in determining the appropriate measures, tools must be available that explicitly support the specific location and time requirements in allocating the courses of action. One approach to enable such measure planning is the ISSNEW information system.

System design
The basis of ISSNEW is the ESRI ArcGIS 8 product family, including the integrated geodatabase format; on this basis the ISSNEW information system consists of a WFD-specific data model and tools to meet WFD requirements. Besides the regular capabilities of a Database Management System (DBMS) – e.g., client/server architecture, recovery functionality, access protection on the user level and so on – the UML-based data modelling of ArcGIS is important for developing and maintaining the database structures and implementing custom requirements within the common (WFD-specific) structure. Furthermore, the published Component Object Model (COM) of ArcGIS enables the development of specific tools as provided by the ISSNEW project. Also part of ISSNEW are several monolithic simulation systems for the simulation of vertical and horizontal flow processes: FEFLOW (Diersch 2002) for the deterministic-numeric simulation of horizontal groundwater flow and transport processes, SOCRATES (Wieland et al. 2002) to describe vertical flows of nutrient entries in the root zone, and the UZ Module to model vertical water and nitrate flows between root zone and groundwater.

Such simulation systems – especially FEFLOW – have been in use for a long period of time and are hence well proven and efficient in their application. Unfortunately, these systems rely upon file-based data for modelling, so they cannot take advantage of the structures of a DBMS without extensive modification. Because FEFLOW is a product of WASY GmbH and SOCRATES as well as the UZ Module are developed by ZALF e.V. – both partners are members of the ISSNEW project – the simulation systems could be adapted to meet ISSNEW requirements; in general, however, simulation systems are not modifiable due to the lack of available source code.
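The adaptation described above can be pictured as a thin adapter layer between a DBMS and a file-based simulator. The sketch below is purely illustrative: all class and method names are hypothetical (not the actual ISSNEW, FEFLOW or SOCRATES APIs), a dictionary stands in for the geodatabase, and the "model" is a trivial placeholder computation.

```python
# Sketch: letting a legacy, file-based simulator participate in a
# DBMS-backed composite via an adapter. Names are illustrative only.
import os
import tempfile

class FileBasedSimulator:
    """Stands in for a legacy model that only reads/writes files."""
    def run(self, input_path: str, output_path: str) -> None:
        with open(input_path) as f:
            recharge = float(f.read())
        with open(output_path, "w") as f:
            f.write(str(recharge * 0.5))  # trivial stand-in computation

class DbmsAdapter:
    """Bridges a 'DBMS' (here: a dict) and the file-based simulator by
    writing inputs to temporary files and reading results back."""
    def __init__(self, simulator, db):
        self.simulator = simulator
        self.db = db

    def run_from_db(self, in_key: str, out_key: str) -> None:
        in_file = tempfile.NamedTemporaryFile("w", delete=False, suffix=".in")
        in_file.write(str(self.db[in_key]))
        in_file.close()
        out_path = in_file.name + ".out"
        self.simulator.run(in_file.name, out_path)
        with open(out_path) as f:
            self.db[out_key] = float(f.read())
        os.unlink(in_file.name)
        os.unlink(out_path)

db = {"recharge_mm": 300.0}
DbmsAdapter(FileBasedSimulator(), db).run_from_db("recharge_mm", "leaching_mm")
print(db["leaching_mm"])  # 150.0
```

Such an adapter leaves the simulator's file formats untouched, which is why it works even where (as noted above) the simulator's source code is not available.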

Figure 1: Overview of the concept of connecting different simulation systems and a GIS using a bi-directional user interface.



Bi-directional user interface
ISSNEW is conceived as a bi-directional user interface providing a convenient way to use up-to-date data from an ArcGIS DBMS to initialize and update different simulation models, to chain and coordinate different simulation systems, and to return simulation results to the information system for further analysis. The concept of communication is displayed in Fig. 1. Each member of the composite is directly connected to the user interface, so that the user interface may process data and coordinate the particular simulation systems. According to the WFD requirements, ISSNEW is implemented to evaluate non-point nutrient entries into water bodies and so to enable efficient measure planning. However, since it is based on COM using well-defined interfaces, it may be used to link almost arbitrary applications – under the assumption of a programming interface – to an ArcGIS information system.

To enable different applications to participate in the simulation composite, the ISSNEW interface defines a set of COM interfaces. These interfaces can currently be classified as data provider and data consumer interfaces. In general, each application will implement both (e.g., it uses data for simulation and returns the results for further analysis). The different applications have to be registered with the ISSNEW user interface; the ISSNEW user interface is therefore able to access different applications in a uniform manner without any application-specific implementation. The information regarding the structure of the data provided or used by a particular application is displayed by the user interface. Therefore the user is able to perform interactive linking (e.g., connecting information of the DBMS to model levels of the simulation systems). Fig. 2 illustrates a prototype of the graphical user interface for mapping information. The user has to select the data source and target. Afterwards the data can be mapped by clicking the connect button. Furthermore, for each connection additional parameters can be defined to control the exchange process (e.g., type conversion, regionalization of spatial data to meet model requirements, or spatial and temporal selection of data). Following this step, the so-called parameter mapping has been completely defined by the user. It can be used to initialize the simulation model with up-to-date data stored in the GIS any time it is required, without any additional effort. The ISSNEW project will also provide a scenario-based model repository to store different mappings for different models and scenarios within the database. It will therefore be possible to simulate a large-scale area as a rough estimate and interesting sub-areas for a more detailed analysis, based on the same data and consequently managed within the geographic information system.

Figure 2: Mapping information by interactively linking data provider and data consumer.
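The provider/consumer idea and the parameter mapping sketched above can be summarised in a few lines of code. This is a minimal sketch under stated assumptions: the real system defines COM interfaces on ArcGIS, whereas here plain Python classes, the table contents, and all names (`GisSource`, `ModelSink`, `Coordinator`, the field and parameter names) are hypothetical.

```python
# Sketch of data provider / data consumer interfaces with a user-defined
# parameter mapping, in the spirit of the ISSNEW coordinator. Illustrative only.

class GisSource:
    """Data provider: exposes fields of a 'GIS table' (here: a dict)."""
    def __init__(self, table): self.table = table
    def fields(self): return list(self.table)
    def read(self, field): return self.table[field]

class ModelSink:
    """Data consumer: accepts initial values for model parameters."""
    def __init__(self): self.state = {}
    def parameters(self): return ["n_input_kg_ha"]
    def write(self, parameter, values): self.state[parameter] = values

class Coordinator:
    """Holds parameter mappings (source field -> target parameter) and
    optional conversion callables, as defined interactively by the user."""
    def __init__(self): self.mappings = []
    def connect(self, provider, field, consumer, parameter, convert=lambda v: v):
        self.mappings.append((provider, field, consumer, parameter, convert))
    def initialize(self):
        # Re-usable any time: pulls current data and pushes it to the models.
        for provider, field, consumer, parameter, convert in self.mappings:
            consumer.write(parameter, [convert(v) for v in provider.read(field)])

gis = GisSource({"n_surplus_g_m2": [3.0, 4.5]})
model = ModelSink()
c = Coordinator()
# Example conversion: 1 g/m^2 equals 10 kg/ha.
c.connect(gis, "n_surplus_g_m2", model, "n_input_kg_ha", convert=lambda v: v * 10)
c.initialize()
print(model.state["n_input_kg_ha"])  # [30.0, 45.0]
```

Because the mapping is stored as data, re-running `initialize()` later repeats the transfer with whatever values the GIS then holds, which mirrors the "any time it is required without any additional effort" property described above.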

References
Dannowski, R., et al. (2003): Information Systems in Earth Management »Implementing the European Water Framework Directive: ISSNEW – Developing an Information and Simulation System to Evaluate Non-Point Nutrient Entries into Water Bodies«. Kick-Off Meeting, GEOTECHNOLOGIEN Science Report, pp. 23-29, Hannover, 2003.
Wieland, R., W. Mirschel, H. Jochheim, K.C. Kersebaum, M. Wegehenkel & K.-O. Wenkel (2002): Objektorientierte Modellentwicklung am Beispiel des Modellsystems SOCRATES. In: Gnauk, A. (ed.): Theorie und Modellierung von Ökosystemen – Workshop Kölpinsee 2000. Shaker Verlag, Aachen.
Diersch, H.J.G. (2002): FEFLOW Finite Element Subsurface Flow and Transport Simulation System – User's Manual/Reference Manual/White Papers. Release 5.0. WASY Ltd, Berlin.

Conclusion
The goal, finally, is to implement a solution that uses up-to-date data managed through a GIS to perform simulations in order to evaluate non-point nutrient entries into water bodies, spatially and temporally differentiated. These GIS-based tools for scenario computations, in conjunction with mainstream simulation systems related to water and nutrient entries on a landscape scale, will be provided as a basis for a decision-support system for long-term land use and management. The combination of GIS technology with simulation systems is a promising approach to meeting WFD requirements, especially considering the short amount of time available to implement the new measures. To demonstrate the feasibility of the concept, the project includes an experimental system analysis of a pilot area.



Development of a Spatial Data Infrastructure for the Derivation of Geoinformation from Distributive Geodata – Prototyping a Geoservice for Groundwater Vulnerability
Azzam R. (1), Kappler W. (2), Kiehle Ch. (1), Kunkel R. (3)
(1) RWTH Aachen, Lehrstuhl fuer Ingenieurgeologie und Hydrogeologie (LIH), Lochnerstr. 4-20, 52064 Aachen, Germany, E-Mail: {azzam | kiehle}@lih.rwth-aachen.de
(2) ahu AG Wasser Boden Geomatik, Kirberichshof 6, 52066 Aachen, Germany, E-Mail: w.kappler@ahu.de
(3) Forschungszentrum Jülich, Systemforschung und Technologische Entwicklung (STE), 52425 Jülich, Germany, E-Mail: r.kunkel@fz-juelich.de

Introduction
The overall goal of the research project »Development of an information infrastructure for the rule-based derivation of geoinformation from distributive, heterogeneous geodata inventories on different scales, with an example regarding groundwater vulnerability assessment«, funded by the BMBF/DFG programme »Geotechnologies – Information Systems in Earth Management«, is to provide a spatial data infrastructure (SDI) for the derivation of geoinformation from distributive, heterogeneous geodata. This paper outlines the development of an information infrastructure for web-based geoprocessing of distributive geodata inventories using OGC-compliant software components and describes a prototype applied to groundwater vulnerability to prove the concept.

Objectives
The development of new software components using cutting-edge technologies is fraught with risk. Therefore, the development is first specified in a technical and abstract manner. To avoid a long specification process without any usable software prototype, the current project develops the SDI guided by the Spiral Model of software development (BOEHM 1986). In order to complete the task of software development outlined by KIEHLE et al. (2003), a first prototype has been developed. This approach aims to involve potential users at an early stage of software development and to determine potential risks during the development and implementation process. One main topic of the current research project is the usability of the developed software by users from public administration, applied research, private enterprise, and the public. An early prototype guards against missteps in software development and reveals potential risks not yet mentioned in the initial technical specifications.

Software Specification and Service Chaining
The prototype requirements are mainly a proof of concept: it has to provide geodata inventories from different locations via OGC-compliant web services. The prototype implements data access from three servers located at Aachen and Bochum. One server gathers the required data and processes the needed information. In order to provide the requested information, the SDI consists of the following OGC-compliant web services:
- Web Map Service (composition of Web Feature Services, Aachen)
- Web Feature Service (Oracle & ArcSDE, Bochum)


- Web Feature Service (Oracle Spatial, Bochum)
- Web Feature Service (PostgreSQL & PostGIS, Aachen)
- Web Coverage Service (raster files, Aachen)

Figure 1: Service chain.

All of the above-mentioned web services are made accessible through deegree (FITZKE et al. 2003), an Open Source framework (implemented in Java) for building spatial data infrastructures based on current OGC and ISO/TC 211 specifications. The communication between the different services is XML-based. These data have to be processed, e.g. according to the Hölting formula (HÖLTING et al. 1995). The processing of information is established by service chaining of Web Feature Services according to the following tasks:
- identification of the spatial extent via user interaction (bounding box) using an OGC-compliant WMS, presenting raster files accessed through a Web Coverage Service
- the WMS queries the specified spatial extent against distributed WFSs
- the responses to these requests are processed by the Java Mathematical Expression Parser (JEP) according to the Hölting formula (see HÖLTING et al. 1995, p. 15)
- processing of a result set (geometries) via the Java Topology Suite (JTS) by performing geoprocessing algorithms
- the result of the geoprocessing is a GML-coded web page, published as an OGC-compliant WMS.

One subordinate target of the prototype is the implementation of a service chain. In order to integrate the described SDI into large-scale SDIs like GDI-NRW, it is essential to provide several Feature and Coverage Services as well as composed web services like the depicted »Geoservice Groundwater Vulnerability«. The »Geoservice Groundwater Vulnerability« is presented to the user as a map (OGC-compliant WebMapService) which, once initialised by a request, activates a service chain (see Fig. 1) consisting of independent, distributive, OGC-compliant web services. These web services are accessed automatically in the following way:
1. WebMapService: an initial map enables the user to define a spatial extent (bounding box) for a request as well as the kind of requested information (information layers)
2. The WebMapService queries distributive WebFeatureServices
3. The results are geoprocessed and checked for consistency by JTS-based algorithms
4. Attribute data are processed and classified by JEP according to the Hölting formula
5. The result is presented through GML features
6. WebMapService: the final WebMapService presenting the results is also accessible for further requests, which would lead to another iteration of the service chain

Further development
The current prototype will be enhanced according to the spiral model (see above) in several phases; Fig. 2 outlines the most important enhancements. The next step is the development of an OGC-compliant Catalogue Service for the administration and discovery of geodata and service metadata. Geodata as well as service metadata will of course be published according to current ISO/TC 211 specifications. Another challenge is multiple representation (see SESTER 2000) – the appearance of a specific object in multiple temporal, spatial or semantic contexts – which has to be integrated into the SDI. Furthermore, a user-friendly web front-end will be developed to easily access the required data and information via multiple clients (e.g. web browser, GIS viewer/client, mobile devices, etc.) presenting multiple formats (e.g. (X)HTML, PDF, geodata sets, etc.).

Conclusion
The development of a prototype as an implementation of the new information infrastructure has proven that even freely available software enables the developer to integrate different OGC-compliant web services (here: Feature and Coverage Services) from distributive data inventories. By implementing these services, data becomes available through multiple
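The numbered chain above can be sketched as plain functions standing in for the OGC services and libraries (WMS, WFS, JEP, JTS). All names and data are illustrative, and the expression evaluated in step 4 is a placeholder weighted score, not the actual Hölting parameterisation from HÖLTING et al. (1995).

```python
# Sketch of the service chain: bounding-box query (steps 1-2) followed by
# formula evaluation on feature attributes (step 4). Illustrative only.

def wfs_get_feature(features, bbox):
    """Step 2: a 'WFS' returning point features inside a bounding box."""
    xmin, ymin, xmax, ymax = bbox
    return [f for f in features
            if xmin <= f["x"] <= xmax and ymin <= f["y"] <= ymax]

def evaluate(expression, attributes):
    """Step 4: stand-in for the Java Mathematical Expression Parser (JEP),
    evaluating an expression against a feature's attribute dictionary."""
    return eval(expression, {"__builtins__": {}}, dict(attributes))

# Hypothetical soil features served by a distributed WFS.
soil_wfs = [{"x": 1, "y": 1, "rating": 2.0, "thickness_m": 3.0},
            {"x": 9, "y": 9, "rating": 5.0, "thickness_m": 1.0}]

bbox = (0, 0, 5, 5)                      # step 1: user-drawn spatial extent
hits = wfs_get_feature(soil_wfs, bbox)   # step 2: query the WFS
scores = [evaluate("rating * thickness_m", f) for f in hits]  # step 4
print(scores)  # [6.0]
```

In the real chain the results would then be merged geometrically (step 3, JTS) and re-published as GML behind a WMS (steps 5-6); the point here is only that each step is an independent, composable service call.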


clients (whether users or services), independent of vendor-specific software. Once provided as a web service, the (raw) data is easily turned into information by service chaining. The component-based software allows new functionalities, analytical tools and geoprocessing features to be added. The integration into large-scale SDIs is made possible through strict compliance with approved specifications. This is a small step forward on the way from geodata to geoservices.

Figure 2: Geoservice Groundwater Vulnerability.

Acknowledgment
This work is funded by the BMBF/DFG research programme GEOTECHNOLOGIEN – »Information Systems in Earth Management. From Geodata to Geoservices.« Special thanks go to Christian Heier for parts of the development of the prototype and to Dr. Stefan Waluga of the Department of Geography, Ruhr-University Bochum, for providing parts of the hardware infrastructure.

References
Azzam, R.; Bauer, C.; Bogena, H.; Kappler, W.; Kiehle, C.; Kunkel, R.; Leppig, B.; Meiners, H.G.; Müller, F.; Wendland, F. & Wimmer, G. (2002): Geoservice Groundwater Vulnerability – Development of an Information Infrastructure for the Rule-based Derivation of Geoinformation from Distributive, Heterogeneous Geodata Inventories on Different Scales with an Example Regarding the Groundwater Vulnerability Assessment. GEOTECHNOLOGIEN Science Report 2: 31-35.
Boehm, B.W. (1986): A Spiral Model of Software Development and Enhancement. Software Engineering Notes 11: 22-42.
Fitzke, J.; Greve, K.; Müller, M. & Poth, A. (2003): deegree – ein Open-Source-Projekt zum Aufbau von Geodateninfrastrukturen auf der Basis aktueller OGC- und ISO-Standards. GIS 16 (9): 10-16.
Hölting, B., Haertlé, T., Hohberger, K.-H., Nachtigall, K.-H., Villinger, E., Weinzierl, W. & Wrobel, J.P. (1995): Konzept zur Ermittlung der Schutzfunktion der Grundwasserüberdeckung. Geologisches Jahrbuch, Serie C, No. 63. Schweizerbart, Stuttgart.
Kiehle, C., Heier, C., Kappler, W. & Kunkel, R. (2003): Entwicklung einer Informationsinfrastruktur zur regelbasierten Ableitung von Geoinformationen aus distributiven, heterogenen Geodatenbeständen. In: Bernard, L., Sliwinski, A. & Senkler, K. (eds.) (2003): Geodaten- und Geodienste-Infrastrukturen – von der Forschung zur praktischen Anwendung. Beiträge zu den Münsteraner GI-Tagen, 26./27. Juni 2003. Schriftenreihe des Instituts für Geoinformatik, Westfälische Wilhelms-Universität Münster, 18: 215-228.
Sester, M. (2000): Maßstabsabhängige Darstellungen in digitalen räumlichen Datenbeständen. Habilitationsschrift, Fakultät für Bauingenieur- und Vermessungswesen, Universität Stuttgart, Reihe C, 544, Deutsche Geodätische Kommission, München.


Ontology-Based Discovery and Retrieval of Geographic Information in Spatial Data Infrastructures
Bernard L. (1), Einspanier U. (1), Haubrock S. (2), Hübner S. (3), Klien E. (1), Kuhn W. (1), Lessing R. (2), Lutz M. (1), Visser U. (3)
(1) Institute for Geoinformatics (IfGI), University of Münster, Robert-Koch-Str. 26-28, 48149 Münster, Germany, E-Mail: {bernard | spanier | klien | m.lutz}@uni-muenster.de
(2) Delphi InformationsMusterManagement (DELPHI IMM), Dennis-Gabor-Str. 2, 14469 Potsdam, Germany, E-Mail: {soeren.haubrock | rolf.lessing}@delphi-imm.de
(3) Center for Computing Technologies (TZI), University of Bremen, Universitaetsallee 21-23, 28359 Bremen, Germany, E-Mail: {huebner | visser}@tzi.de

1 Introduction
Geographic information (GI) is the key to effective planning and decision-making in a variety of application domains, not least in the Earth Sciences. So-called intelligent web services permit easy access to and effective exploitation of distributed geographic information for all citizens, professionals, and decision-makers (Bishr & Radwan 2000; Brox et al. 2002). This paper focuses on the discovery and retrieval of geographic information. The specifications provided by the OpenGIS Consortium (OGC) enable syntactic interoperability and the cataloguing of geographic information. However, while OGC-compliant catalogues support the discovery, organisation, and access of geographic information, they do not yet provide methods for overcoming problems of semantic heterogeneity. These problems still present challenges for GI discovery and retrieval in the open and distributed environments of Spatial Data Infrastructures (SDIs). One possible approach to overcoming the problem of semantic heterogeneity is the explication of knowledge by means of ontologies, which can be used for the identification and association of semantically corresponding concepts (Wache et al. 2001). This paper describes an architecture for ontology-based discovery and retrieval of geographic information developed in the meanInGs project. To this end, we extend the query capabilities currently offered by OGC-compliant catalogues with terminological reasoning on metadata, provided by an ontology-based reasoning component. We show how this approach can contribute to solving semantic heterogeneity problems during free-text search in catalogues and how it can support intuitive information access once an appropriate resource has been found.

The work described in this paper was done in close cooperation between the three partners of the meanInGs project. Each of the partners, however, took a leading responsibility for certain aspects of the research and development:
- The main responsibilities of the Institute for Geoinformatics (ifgi) were the creation of a project-wide SDI, the analysis of semantic heterogeneity in SDIs, the specification of metadata entities for semantic information, the specification of OGC-compliant service interfaces for the newly defined services, and the design of the overall architecture.
- The company Delphi IMM focused on the implementation of several software components in the context of the SDI and the first use case. Instances of the CatalogServer have been installed at the Delphi IMM and TZI web sites; the company's own FeatureServer has been extended by the functionality of automatically updating its data (in this case for water level measurements). Further OGC-compliant web services and geoprocessing services are currently under development and being integrated into the SDI.
- The Center for Computing Technologies (TZI) was responsible for the construction of multi-layered shared vocabularies and application ontologies, and for the design and development of the ontology-based Reasoner. The integration of the Spatial Reasoner, which has already been developed, is planned for a succeeding step of the project.

The remainder of the paper is structured as follows. We first introduce a motivating example for our work that serves the purposes of analysis and illustration throughout the paper. In section 3 we describe the motivation behind current SDIs and their components. Section 4 illustrates the problems that remain even if SDIs are employed for data discovery and retrieval. The ontology-based approach to their solution is presented in section 5; the architecture implementing this approach is described in section 6. We conclude the paper by pointing out open questions to be dealt with in future research.

2 Motivating Example: Discovering Information on Water Levels in the Elbe River
Throughout this paper we use a motivating example to point out semantic heterogeneity problems that can occur when using state-of-the-art GI query possibilities and to illustrate our proposed solution. While the example is drawn from the domain of hydrology, our work is designed to be independent of a particular GI domain and is not restricted to this example. The results obtained for the described example will be verified using other examples from the Earth Sciences at a later stage of the meanInGs project. These include applications facilitating the availability of soil data. John, the main actor in our example, is a hydrologist who is interested in water levels of the Elbe River.
As an expert in the field he knows the existing control points in the river.


He wants to know the measurement of the water level at a specific control point at a specified time. Since John does not know of an existing Web Feature Service (WFS) offering this kind of information, he uses an OGC-compliant catalogue to find appropriate information for answering his question: »What is the water level at control point X at time Y in the Elbe River?«

There are several data providers that offer information about water levels in the Elbe River. Currently three providers are integrated into this use case: a) the Federal Agency for Hydrology (Bundesanstalt für Gewässerkunde, BfG), b) the Electronic Information System for Waterways (Elektronisches Wasserstraßen-Informationssystem, ELWIS), and c) the Czech Hydrometeorological Institute (CHMI). The data of all three providers are updated regularly (usually once a day). The providers BfG and ELWIS keep track of the measurements distributed along the German part of the river, whereas the CHMI covers the five Czech measurement sites (Figure 1). These agencies do not yet provide their data on water levels through WFS, but through ordinary HTML pages. For implementing the example, the information is automatically parsed from these pages and provided through WFS interfaces. All three institutions use different vocabularies to describe their data. One of the major targets of this use case is to overcome these heterogeneities, which are therefore deliberately retained in the WFS. Table 1 lists the names of the GML features returned by these WFS and their property names.

3 Principles and Components of SDIs

This section gives an overview of the motivations for building SDIs, their main components, and the role of standards within SDI development.


Figure 1: Control points for water level measurements at the Elbe River provided by BfG, ELWIS and CHMI.
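The automatic parsing of the providers' HTML pages mentioned above can be sketched in a few lines. The page layout, column order, and values below are invented for illustration; the actual pages of BfG, ELWIS and CHMI are each formatted differently.

```python
import re

# Hypothetical extract of a provider's HTML page listing water levels.
SAMPLE_HTML = """
<tr><td>Dresden</td><td>512</td><td>2004-03-15 07:00</td></tr>
<tr><td>Magdeburg</td><td>248</td><td>2004-03-15 07:00</td></tr>
"""

ROW = re.compile(r"<tr><td>(.*?)</td><td>(\d+)</td><td>(.*?)</td></tr>")

def parse_water_levels(html):
    """Turn HTML table rows into feature-like dicts for a WFS front end."""
    return [
        {"control_point": name, "water_level_cm": int(level), "timestamp": ts}
        for name, level, ts in ROW.findall(html)
    ]

features = parse_water_levels(SAMPLE_HTML)
print(features[0])
# {'control_point': 'Dresden', 'water_level_cm': 512, 'timestamp': '2004-03-15 07:00'}
```

The dictionaries produced this way can then be served as GML features; keeping the providers' original property names at this stage preserves exactly the heterogeneity the use case sets out to overcome.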

3.1 Motivation and Aims

A main motivation for setting up spatial data infrastructures (SDIs) is to make the work with geodata more efficient (McKee 2000; Nebert 2001). This is motivated by problems that occur with conventional GIS technology and geographic data sets. Two major problems are that data sets exist in a plethora of different data formats and that they are often not (sufficiently) documented:
- Missing or insufficient documentation makes it difficult or even impossible for outside users to discover data sets and to assess whether a given data set is useful for their tasks.
- Data sets in different formats often have to be converted in order to be used in a different system. This problem is usually tackled by providing data in commonly used vendor formats or vendor-neutral data exchange formats like GML (OGC 2003a). In the case of frequently changing data such as water level measurements, however, the conversion of the data has to be automated, as manual conversion is not feasible.

The development of SDIs addresses these problems. SDIs are based on the assumption that it is usually not the data itself a user is interested in, but a piece of information that can be generated from the data. Therefore SDIs are based on geographic information (GI) services that implement standardised service interfaces. Through these services, distributed geographic data can be accessed and processed across administrative and organisational boundaries. Also, data and services can be accessed in an ad-hoc manner. As a result, geographic data sets and the GI services using them can be created and maintained locally, which leads to increased quality (e.g. timeliness) and efficiency. Also, SDIs can be easily extended to include new services and/or data sets.

Table 1: Names of the GML features returned by the three WFS and their properties (»-« marks a property not provided by that WFS).

Property                              | BfG               | ELWIS               | CHMI
Feature name                          | Pegelmessung      | Wasserstand Messung | StavVody
Name of the control point             | name              | pegel               | stanice
Unique ID of the control point        | id                | -                   | -
Internet address of the control point | url               | quelle              | url
Water level measured in cm            | wasserstand_cm    | hoehe               | stav
Date and time of the measurement      | zeitpunkt         | -                   | datum
Date of the measurement               | -                 | datum               | -
Time of the measurement               | -                 | uhrzeit             | -
Geometry as point                     | gml:pointProperty | standort            | gml:position
Name of the river                     | -                 | -                   | tok
Discharge in cubic metres per second  | -                 | -                   | prutok

3.2 Components of SDIs

The main (physical) components of an SDI are GI services, geographic data, and catalogues providing metadata on the data and services. Following the definition of Groot & McLaughlin (2000), SDIs also comprise the institutional, organisational, technological and economic resources that support the development and maintenance of the SDI and its geographic information. However, these issues are outside the scope of the meanInGs project and will therefore not be addressed here.
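The feature types of Table 1 are obtained from the providers' WFS through standardised, URL-encodable request operations. A sketch of how a client might assemble an OGC WFS 1.0.0 GetFeature request follows; the endpoint URL is hypothetical, while the key-value parameter names follow the WFS specification.

```python
from urllib.parse import urlencode

def get_feature_url(endpoint, type_name, filter_xml=None):
    """Build a KVP-encoded WFS 1.0.0 GetFeature request URL."""
    params = {
        "SERVICE": "WFS",
        "VERSION": "1.0.0",
        "REQUEST": "GetFeature",
        "TYPENAME": type_name,
    }
    if filter_xml:
        # An OGC Filter expression restricting the returned features.
        params["FILTER"] = filter_xml
    return endpoint + "?" + urlencode(params)

# Hypothetical BfG endpoint serving the Pegelmessung feature type of Table 1.
url = get_feature_url("http://example.org/wfs", "Pegelmessung")
print(url)
```

Because the interface is standardised, the same client code can query all three providers; only the endpoint and the feature type name change.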


3.2.1 GI Services

Interoperability is a key requirement for SDIs. It is defined as »the capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units« (ISO/TC-211 & OGC 2002). In order for two services (e.g. a web map client and a web map service) to interoperate, they have to be interface and service interoperable. This means they have to agree on the set of services offered by the entities of the two systems and on the interfaces to them (ISO/IEC 1996). The standardisation of these interfaces is an important feature of SDIs because it allows the classification of services into well-known service types that provide the behaviour specified by the interface. Thus, standardisation makes it possible to connect arbitrary service instances as long as they are of a well-known service type. In the geospatial domain, there are mainly two standardisation efforts to enable interoperability in distributed systems: the ISO Technical Committee (TC) 211, which develops the 19100 series of standards, and the OpenGIS Consortium (OGC).

The OGC has developed an architectural framework for geospatial services on the Web platform (OGC 2003b). It specifies the scope, objectives and behaviour of a system and its functional components. It identifies behaviour and properties that are common to all such services, but also allows extensibility for specific services and service types. In so-called testbeds, implementation specifications for several service types have been developed, e.g.:
- Web Map Service (WMS) (OGC 2002b) for producing digital maps; and
- Web Feature Service (WFS) (OGC 2002a) and Geography Markup Language (GML) (OGC 2003a) for accessing XML-encoded geographic data.

A complete list of approved, candidate and planned OGC implementation specifications can be found at http://www.opengis.org/specs/. Knowing these specifications, a service consumer that is aware of the location of a service provider can connect to the service over the Web and invoke its operations.

3.2.2 Catalogues and Metadata

Catalogues are a fundamental part of SDIs and become relevant in scenarios where clients and services are arbitrarily distributed (in large networks) and unaware of each other. A catalogue's task is to allow a client (service consumer) to find and access resources (data and services) that are available on servers (service providers) unknown to the client and that fit the client's needs. Service providers offer particular data access and geoprocessing (data manipulation) services. Both types of spatial resources are described by metadata. The catalogue itself consists of the metadata and the operations working on these metadata. In general, each service provider has to register (publish) its offerings by means of metadata with a catalogue to make them accessible. A catalogue may also collect metadata from known service providers (pull). In addition to these registration functions, a catalogue provides »librarian functions« (discovery, browsing, querying) for service consumers.

The structure, entities and element sets of the metadata entries in a catalogue are determined by a metadata schema. For geographic data there are several standards for metadata schemas, the most prominent being FGDC (FGDC 1998) and ISO 19115 (ISO/TC-211 2003). Both standards also contain guidelines for developing metadata profiles that allow customisation for specific user groups. The resulting heterogeneity leads to problems when querying and interpreting search results of different distributed catalogues, and is further aggravated by the fact that different user groups may use different vocabularies to describe their datasets and services.

3.2.3 Geographic Data

An SDI also contains a number of vector or raster data sets, e.g. a river network, water level measurement points and satellite images for the motivating example described in section 2. These data sets are provided through geographic model / information management services, e.g. OGC Web Feature Services (WFS) (OGC 2002a). It is currently being discussed, e.g. in the context of developing the Infrastructure for Spatial Information in Europe (INSPIRE), whether so-called core, framework or reference data sets are to be part of the underlying infrastructure and what they are to contain (Barr 2003; Luzet 2003). These basic data sets are meant to provide a frame of reference for all datasets of more specialised applications.

4 Semantic Heterogeneity Problems During State-of-the-Art Discovery and Retrieval of Geographic Information

This section describes possible problems caused by semantic heterogeneity between user requests and application schemata or metadata descriptions when users and providers of geographic information belong to different information communities (OGC 1999).

4.1 Discovery

In current standards-based catalogues (e.g. GDI-NRW 2002), users can formulate queries using keywords and/or spatial filters. The metadata fields that can be included in the query depend on the metadata schema used (e.g. ISO 19115) and on the query functionality of the service that is used for accessing the metadata. Two types of semantic heterogeneity can lead to problems if John performs a simple keyword-based search, e.g. using the terms »water level« and/or »Elbe«. These types are classified by Bishr (1998) as:
1. Naming heterogeneity (synonyms): John may fail to find the existing WFS offering the information, because their metadata descriptions contain slightly different terminology, e.g. »depth« or »Labe« (the Czech name for the Elbe).
2. Cognitive heterogeneity (homonyms): John's request could also result in finding services that are not appropriate for answering his question. This would be the case, e.g., if John's free-text search for »water level« discovered a service depicting the network of water level control points without the actual information about the current water level, or a service providing groundwater rather than surface water levels.
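Both failure modes of keyword search can be reproduced with a toy sketch. The catalogue descriptions and the synonym table below are invented for illustration only; they do not reflect the real metadata of BfG, ELWIS or CHMI.

```python
# Invented metadata descriptions for three hypothetical services.
CATALOGUE = {
    "BfG gauge service": "Pegelstand measurements along the Elbe",
    "CHMI gauge service": "Water depth at control points on the Labe",
    "Groundwater survey": "Groundwater level map",
}

def keyword_search(term, catalogue):
    """Naive free-text search: plain substring match on the description."""
    return [name for name, desc in catalogue.items() if term.lower() in desc.lower()]

# Naming heterogeneity: »Elbe« misses the Czech entry that says »Labe«.
print(keyword_search("Elbe", CATALOGUE))         # ['BfG gauge service']
# Cognitive heterogeneity: »water level« matches only the groundwater homonym.
print(keyword_search("water level", CATALOGUE))  # ['Groundwater survey']

# An invented synonym table widens recall, but cannot remove the homonym hit.
SYNONYMS = {"water level": ["pegelstand", "water depth"], "elbe": ["labe"]}

def expanded_search(term, catalogue):
    terms = [term] + SYNONYMS.get(term.lower(), [])
    return [name for name, desc in catalogue.items()
            if any(t.lower() in desc.lower() for t in terms)]

print(expanded_search("Elbe", CATALOGUE))  # ['BfG gauge service', 'CHMI gauge service']
```

Synonym expansion alone still returns the groundwater homonym for »water level«, which is why the paper argues for concept-based rather than string-based matching.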


These examples show that keywords used in free-text entries are a poor way to capture the semantics of a query or item (Bernstein & Klein 2002).

4.2 Retrieval

Another major difficulty arises if John wants to access geographic information via one of the existing WFS. The DescribeFeatureType request (OGC 2002a) returns the application schema for the feature type, which is essential for formulating a filter for the query. John now runs into trouble if the property names are not intuitively interpretable (cf. Table 1). For example, he can only guess that the property names »hoehe« (German) or »stav« (Czech) refer to the measurement of the water level. The goal of the architecture presented in this paper is to provide user support for interpreting property names adequately, since this is a precondition for formulating an appropriate query.

5 Ontology-Based Approach

To solve the semantic heterogeneity and interpretation problems presented in the previous section, an approach is needed that exceeds the capabilities of current free-text search facilities in catalogues and supports an intuitive interpretation of property names. Accepting the diversity of geographic application domains, such an approach needs to enable navigating differences in meaning (Harvey et al. 1999). Stuckenschmidt (2002) suggests using explicit context models that allow information to be re-interpreted in the context of a new application. Ontologies have become popular in information science as they can be used to explicate such contextual information. We adopt a modified version of Gruber's (Gruber 1993) often-quoted definition of the term »ontology« by Studer et al. (1998), who define it as »an explicit formal specification of a shared conceptualization« (a conceptualization being a way of thinking about some domain (Uschold 1998)). This makes the ontology a perfect candidate for communicating a shared and common understanding of some domain across people and computers (Studer et al. 1998).

5.1 Hybrid Ontology Approach

The hybrid ontology approach used in our architecture for enhancing information discovery and retrieval has been adopted from Visser & Stuckenschmidt (2002). It is based on the idea of having a source-independent shared vocabulary for each domain (Figure 2).

Figure 2: The hybrid ontology approach (Visser & Stuckenschmidt 2002), modified.

It is assumed that the members of a domain share a common understanding of certain concepts. These concepts require no further explication and therefore form the basic terms contained in a shared vocabulary. Once a shared vocabulary exists, its terms can be used to make the contextual information of an information source explicit, i.e. to build an application ontology for it (Visser & Stuckenschmidt 2002). Thus, the vocabulary has to be general enough to be used across all information sources that are to be annotated within the domain, but specific enough to make meaningful definitions possible (Schuster & Stuckenschmidt 2001). The task of constructing an application ontology lies with the provider of the information source.

For the ontology-based annotation of information sources we have made two modifications to this approach. First, the information sources are not annotated directly. Instead, we describe the feature type provided by a service, which is defined through its application schema. To be more precise, the shared vocabulary is used to describe in detail the properties included in the schema. There is therefore an additional level of semantic annotation. Together with the syntax, which can be requested via the normal DescribeFeatureType operation, the annotation of an information source is complete. Second, we use not only domain-specific ontologies (e.g. measurements, hydrology), but also domain-independent ontologies (e.g. SI units). In our example, John searches for the water level of control point X at time Y. In our approach, this means that he uses these properties (i.e. »location«, »water level«, and »date and time of measurement«) to describe a feature type. The precise formulation of the query and its execution and result are presented in the next section.

5.2 Ontology-Based Search

We distinguish two closely related types of query. In a simple query, the user can choose a concept from an existing application ontology for his query. The defined concept query allows the user to define a concept, based on a given shared vocabulary, which fits his understanding of a concrete concept (Visser & Stuckenschmidt 2002). In the following steps, existing application ontology concepts and user-defined concepts are treated the same and will be referred to as query concepts. The actual search is performed by automatically mapping between the query concept and concepts of different application ontologies within the same domain. This is possible by applying a terminological reasoner, e.g. RACER (Reasoner for A-Boxes and Concept Expressions Renamed) (Haarslev & Möller 2001), which can work with concepts described in the Description Logic SHIQ (Horrocks et al. 2000). A reasoner like RACER allows the classification of data into another context by equality and subsumption. Subsumption means that if concept B satisfies the requirements for being a case of concept A, then B can automatically be classified below A (Beck & Pinto 2002). This procedure enables query processing and searching in a way that is not possible with keyword-based search.

Figure 3: Extract of some concept definitions in the measurements (left) and hydrology (right) domains.

Figure 4: Definition of the query concept for a feature type representing a water level measurement.

In our example, John can use existing domain concepts like Measurement and WaterLevel (Figure 3) to formulate his query concept Query_for_Measurement (Figure 4). By re-classifying this concept in RACER, it can be deduced that all three web services match the user query, because they all provide measurements (having a location, a date and a time stamp) whose result is restricted to water levels. The subsumption hierarchy computed by RACER is shown in Figure 5.
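For this flat example, the subsumption test performed by RACER can be approximated by comparing sets of required properties: a query concept subsumes a feature-type concept if every property the query requires is provided by the feature type. The sketch below is a toy approximation only; the property sets paraphrase Table 1, and none of the description logic machinery (role restrictions, concept constructors) of SHIQ is modelled.

```python
# Shared-vocabulary properties offered by each feature type (simplified).
FEATURE_TYPES = {
    "Pegelmessung":        {"location", "date_time", "water_level"},
    "Wasserstand Messung": {"location", "date_time", "water_level"},
    "StavVody":            {"location", "date_time", "water_level",
                            "river_name", "discharge"},
}

def subsumes(query_props, concept_props):
    """Concept B is classified below query A if B provides all that A requires."""
    return query_props <= concept_props

def matching_concepts(query_props):
    """All feature types classified below the query concept, sorted by name."""
    return sorted(name for name, props in FEATURE_TYPES.items()
                  if subsumes(query_props, props))

query_for_measurement = {"location", "date_time", "water_level"}
print(matching_concepts(query_for_measurement))
# ['Pegelmessung', 'StavVody', 'Wasserstand Messung']
```

As in Figure 5, the query concept ends up above all three feature types; a more restrictive query (e.g. additionally requiring discharge) would match only the CHMI feature type.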

6 Architecture for Ontology-Based Discovery and Retrieval

The architecture we propose in this paper offers two functionalities that significantly enhance the usability of existing geographic information: using defined concept queries to overcome semantic heterogeneity problems during information discovery, and providing interpretation support for feature types and properties during information retrieval. In order to support the advanced query capabilities described above, some new service interfaces and information items are needed in addition to the well-known components of current SDIs, such as catalogues and Web Feature Services. We will first describe these information items and interfaces and then sketch the information flow by means of our motivating scenario.

Figure 5: Subsumption hierarchy including the query concept Query_for_Measurement. The query concept is classified as a super-concept of all feature types provided by the three WFS.

6.1 Components to Enable Ontology-Based Discovery and Retrieval

First, we have to provide the ontologies. For each application schema there is one application ontology, described with the shared vocabulary of the corresponding domain. These ontologies provide the formal description of the application schema of a data source. Therefore, they are referenced from the feature catalogue description metadata section of the corresponding ISO 19115 documents for that data source (Figure 6). This metadata section describes content information of that data source, e.g. a list of the available feature type names.

Figure 6: Services and information items required for ontology-based discovery and retrieval of geographic information.

To provide access to the ontologies, two new interfaces are defined (Figure 7): The Concept Definition Service interface allows access to the concepts of the shared vocabulary and the application ontologies. The Concept Query Service interface allows reasoning about possible matches with simple and defined concept searches. In our prototype, both interfaces are implemented by a reasoning component that makes use of ontologies expressed in SHIQ.

Figure 7: Components and interfaces required for ontology-based discovery and retrieval.

The second component is a cascading catalogue service that is »aware« of the application ontologies. It provides access through the standard OGC Stateless Catalogue Service interface, thus implementing the decorator design pattern (Gamma et al. 1995). It extends the functionality of the conventional catalogue service by analysing and manipulating the filters of metadata queries. If a filter constrains a query to only return metadata results with a specific feature type in the feature catalogue description section, the advanced matchmaking capabilities of the Concept Query Service are used. The returned list of concepts is then added to the filter. This allows the decoration of any conventional standard catalogue service, because the expanded filter requires only the usual exact word match. The decorating catalogue service would also enable enhanced matchmaking on other metadata elements by plugging in additional services, e.g. a gazetteer of hierarchically ordered place names.

The last component that deserves special attention is a user interface (UI) that utilises the ontologies. It makes use of the Concept Definition Service to allow a user to formulate enhanced queries for metadata and geodata. Metadata queries for data sources with specific application schema information are supported by allowing the construction of a query concept. The concepts from the application ontologies support the formulation of WFS queries for unknown application schemas and the interpretation of the results.
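The decorator behaviour described above — replacing the query concept by the disjunction of all matching concept names before delegating to a conventional exact-match catalogue — can be sketched as follows. The class and method names are simplifications invented for this sketch, not the OGC Stateless Catalogue Service interface, and the toy reasoner merely stands in for the Ontology-based Reasoner.

```python
class ConventionalCatalogue:
    """Performs only exact word matches on the feature-type metadata field."""
    def __init__(self, records):
        self.records = records  # record id -> feature type name in its metadata

    def get_records(self, feature_type_names):
        return [rid for rid, ft in self.records.items() if ft in feature_type_names]

class EnhancedCascadingCatalogue:
    """Decorator: expands the concept constraint via a reasoner, then delegates."""
    def __init__(self, inner, reasoner):
        self.inner = inner
        self.reasoner = reasoner  # callable: query concept -> matching concepts

    def get_records(self, query_concept):
        expanded = self.reasoner(query_concept)  # disjunction of matches
        return self.inner.get_records(expanded)

# Toy stand-in for the reasoner's matchmaking step.
def toy_reasoner(concept):
    return {"Query_for_Measurement": ["Pegelmessung", "StavVody"]}.get(concept, [concept])

inner = ConventionalCatalogue({"md1": "Pegelmessung", "md2": "StavVody", "md3": "Roads"})
catalogue = EnhancedCascadingCatalogue(inner, toy_reasoner)
print(catalogue.get_records("Query_for_Measurement"))  # ['md1', 'md2']
```

Because the expansion happens entirely inside the decorator, the inner catalogue never needs to know about ontologies, which is exactly what makes any conventional catalogue decorable.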

Figure 8: Information flow within the architecture for the motivating scenario.



6.2 Interaction and Information Flow in the Motivating Example

We come back to our motivating example to illustrate the interaction and information flow within the architecture (Figure 8). John wants to construct a defined concept query in the UI using the domain's shared vocabulary. For this, the UI component first retrieves the concepts of the shared vocabulary from the Ontology-based Reasoner. The user defines his query concept and a spatial query constraint that covers the Elbe catchment. The UI component then constructs a filter with a conjunction of the spatial constraint and a featureType constraint. For building the latter, the metadata element in the application schema information section is constrained by the query concept. The filter is the input to the GetRecord request sent to the Enhanced Cascading Catalogue. The catalogue detects that the filter contains a constraint on the content of the data source. It uses the query concept to obtain a list of all matching concepts from the Ontology-based Reasoner and replaces the original query concept in the filter by a disjunction of all matching concepts. This filter is forwarded to the conventional catalogue, which performs an exact word-based match. The results of the GetRecord request are finally returned to the UI component.

In the second step, the user wants to analyse the geodata he found with his query. The returned metadata documents contain a reference to a WFS for accessing the data, which is encoded in GML. To get a description of the schema of the feature type, the UI invokes a DescribeFeatureType request and presents the GML schema to the user. To help the user interpret the schema, the UI also invokes the Ontology-based Reasoner to get the description of the concept defining this feature type. Because the concept is described with terms from the domain's shared vocabulary, the user can select the properties he is interested in, i.e. the water level and the date/time of the measurement.
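The retrieval-support step at the end of this flow — presenting opaque schema property names together with their shared-vocabulary annotations so the user can pick the right properties — amounts to a simple lookup. In the sketch below, the annotation table is a hand-written paraphrase of Table 1 for the CHMI schema; in the real system these descriptions come from the application ontologies.

```python
# Hypothetical annotation of the CHMI application schema with shared terms.
ANNOTATIONS = {
    "stanice": "name of the control point",
    "stav":    "water level measured in cm",
    "datum":   "date and time of the measurement",
    "tok":     "name of the river",
    "prutok":  "discharge in cubic metres per second",
}

def explain_schema(property_names, annotations):
    """Pair each opaque schema property with its shared-vocabulary description."""
    return {p: annotations.get(p, "<no annotation available>")
            for p in property_names}

# Properties as returned by a DescribeFeatureType request on StavVody.
for prop, meaning in explain_schema(["stanice", "stav", "datum"], ANNOTATIONS).items():
    print(f"{prop}: {meaning}")
```

With this mapping displayed next to the GML schema, a user who cannot read Czech can still identify »stav« as the water level and »datum« as the measurement time, which is exactly the interpretation support the architecture aims to provide.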


7 Conclusion and Future Work

We have presented an approach and architecture for ontology-based discovery and retrieval of geographic information that can contribute to solving existing problems of semantic heterogeneity. The tested scenario comprises information items with simple structures. Future tests of the architecture will include more complex application schemas and further examples from other domains within the Earth Sciences.

The presented architecture is component-based, i.e. it is extendable in various directions. So far, the Enhanced Cascading Catalogue Service and the Reasoner component are tightly coupled in the architecture. However, the standardised interfaces allow the architecture to be extended with multiple and exchangeable components. It is also planned to extend the architecture with modules for spatial and temporal reasoning (Vögele et al. 2003) as well as gazetteer services. Also, in a future version of the architecture, the tasks of discovery and retrieval will be combined in one query. The user will then be able to formulate his actual question straight away (i.e. without having to perform a query on the metadata first) using terms from the familiar shared vocabularies. The discovery and the filter formulation for retrieval will then be automated within the system. This »intelligent« query capability will enhance the usability of existing geographic information even further.

8 References

Barr, R. (2003): Inspiring the Infrastructure. Presentation at the 9th EC-GI&GIS Workshop: ESDI - Serving the User. La Coruña, Spain, 25-27 July 2003. URL: http://wwwlmu.jrc.it/Workshops/9ecgis/presentations/isop_1_barr.pdf. Last accessed: 10.07.2003.

Beck, H. & H. S. Pinto (2002): Overview of Approach, Methodologies, Standards and Tools for Ontologies.


Bernstein, A. & M. Klein (2002): Towards High-Precision Service Retrieval, in: Horrocks, I. & J. Hendler (ed.): The Semantic Web - First International Semantic Web Conference (ISWC 2002): 84-101.

Bishr, Y. (1998): Overcoming the semantic and other barriers to GIS interoperability, International Journal of Geographical Information Science 12 (4): 299-314.

Bishr, Y. & M. Radwan (2000): GDI Architectures, in: Groot, R. & J. McLaughlin (ed.): Geospatial Data Infrastructures. Concepts, Cases, and Good Practice, Oxford, Oxford University Press: 135-150.

Brox, C., Y. Bishr, K. Senkler, K. Zens & W. Kuhn (2002): Toward a Geospatial Data Infrastructure for Northrhine-Westphalia, Computers, Environment and Urban Systems 26 (1): 19-37.

FGDC (1998): Content Standard for Digital Geospatial Metadata. Version 2.0 (FGDC-STD-001-1998), Federal Geographic Data Committee.

Gamma, E., R. Helm, R. Johnson & J. Vlissides (1995): Design Patterns: Elements of Reusable Object-Oriented Software, Boston, MA, USA, Addison-Wesley.

GDI-NRW (2002): Catalog Services für GeoDaten und GeoServices, Version 1.0, International Organization for Standardization & OpenGIS Consortium.

Groot, R. & J. McLaughlin [ed.] (2000): Geospatial Data Infrastructure - Concepts, Cases, and Good Practice, Oxford, Oxford University Press.

Gruber, T. (1993): A Translation Approach to Portable Ontology Specifications, Knowledge Acquisition 5 (2): 199-220.

Haarslev, V. & R. Möller (2001): Description of the RACER System and its Applications, International Workshop on Description Logics (DL2001).

Harvey, F., W. Kuhn, H. Pundt & Y. Bishr (1999): Semantic Interoperability: A Central Issue for Sharing Geographic Information, The Annals of Regional Science 33: 213-232.

Horrocks, I., U. Sattler & S. Tobies (2000): Reasoning with Individuals for the Description Logic SHIQ, in: McAllester, D. (ed.): 17th International Conference on Automated Deduction (CADE-17) (Lecture Notes in Computer Science): 482-496.

ISO/IEC (1996): ISO/IEC 10746 Information Technology - Open Distributed Processing Reference Model.

ISO/TC-211 (2003): Text for FDIS 19115 Geographic Information - Metadata. Final Draft Version, International Organization for Standardization.

ISO/TC-211 & OGC (2002): Geographic Information Services, Draft ISO/DIS 19119. OpenGIS Service Architecture Vs. 4.3. Draft Version, International Organization for Standardization & OpenGIS Consortium.

Luzet, C. (2003): EuroSpec - Providing the Foundations to Maximize the Use of GI. Presentation at the 9th EC-GI&GIS Workshop: ESDI - Serving the User. La Coruña, Spain, 25-27 July 2003. URL: http://wwwlmu.jrc.it/Workshops/9ecgis/presentations/ied_luzet.pdf. Last accessed: 10.07.2003.

McKee, L. (2000): Who wants a GDI?, in: Groot, R. & J. McLaughlin (ed.): Geospatial Data Infrastructure - Concepts, Cases, and Good Practice, New York, Oxford University Press: 13-24.



Nebert, D. (2001): Developing Spatial Data Infrastructures: The SDI Cookbook, Version 1.1, Global Spatial Data Infrastructure, Technical Committee.

OGC (1999): Topic 13: Catalog Services (Version 4), Open GIS Consortium.

OGC (2002a): Web Feature Server Interface Implementation Specification, Version 1.0.0, OpenGIS Project. URL: http://www.opengis.org. Last accessed:

OGC (2002b): Web Map Server Interface Implementation Specification, Version 1.1.1, OpenGIS Project. URL: http://www.opengis.org. Last accessed:

OGC (2003a): Geography Markup Language (GML) Implementation Specification, Version 3.0, Open GIS Consortium.

OGC (2003b): OpenGIS Web Services Architecture (OpenGIS Discussion Paper OGC 03-025), OpenGIS Consortium.

Schuster, G. & H. Stuckenschmidt (2001): Building Shared Ontologies for Terminology Integration, in: Stumme, G., A. Maedche & S. Staab (ed.): KI-01 Workshop on Ontologies.

Stuckenschmidt, H. (2002): Ontology-Based Information Sharing in Weakly Structured Environments. PhD Thesis, Vrije Universiteit Amsterdam: Amsterdam.

Studer, R., V. R. Benjamins & D. Fensel (1998): Knowledge Engineering: Principles and Methods, Data and Knowledge Engineering 25 (1-2): 161-197.

Uschold, M. (1998): Knowledge Level Modelling: Concepts and Terminology, The Knowledge Engineering Review 13 (1): 5-29.


Visser, U. & H. Stuckenschmidt (2002): Interoperability in GIS - Enabling Technologies, in: Ruiz, M., M. Gould & J. Ramon (ed.): 5th AGILE Conference on Geographic Information Science: 291-297.

Vögele, T., S. Hübner & G. Schuster (2003): BUSTER - An Information Broker for the Semantic Web, KI - Künstliche Intelligenz 03 (3): 31-34.

Wache, H., T. Vögele, U. Visser, H. Stuckenschmidt, G. Schuster, H. Neumann & S. Hübner (2001): Ontology-Based Integration of Information - A Survey of Existing Approaches, IJCAI-01 Workshop: Ontologies and Information Sharing: 108-117.


Figure: Poster



Assessment of Groundwater Vulnerability at Different Scales

Bogena H. (1), Kunkel R. (1), Leppig B. (2), Müller F. (3), Wendland F. (1)
(1) Forschungszentrum Jülich, Systemforschung und Technologische Entwicklung (STE), 52425 Jülich, Germany, E-Mail: {h.bogena | r.kunkel | f.wendland}@fz-juelich.de
(2) RWTH Aachen, Lehrstuhl für Ingenieurgeologie und Hydrogeologie (LIH), Lochnerstr. 4-20, 52064 Aachen, Germany, E-Mail: bjoern@lih.rwth-aachen.de
(3) ahu AG Wasser Boden Geomatik, Kirberichshof 6, 52066 Aachen, Germany, E-Mail: f.mueller@ahu.de

Introduction

The project »Development of an information infrastructure for the rule-based derivation of geoinformation from distributed, heterogeneous geodata inventories on different scales, with an example regarding groundwater vulnerability assessment« is funded by the BMBF/DFG programme »Geotechnologies – Information Systems in Earth Management«. The overall goal of this project is to develop an information infrastructure that processes distributed, heterogeneous geodata inventories into geoinformation in a rule-based manner, independent of scale. A geoservice will be made available that goes beyond the mere provision of geodata by developing approaches for interlinking distributed geodata irrespective of scale, which can then be transferred to various issues. The newly developed infrastructure will be demonstrated using the »concept for determining groundwater vulnerability« (Hölting et al., 1995) in two catchment areas of North Rhine-Westphalia (Rur and Erft catchments) as a geoscientific case study (see Fig. 1). The study area contains heterogeneous geological conditions (areas of solid and unconsolidated rock) as well as regions characterized by intensive anthropogenic activities (lignite and


hard coal mining, agriculture and forestry, urban areas). Furthermore, the study area crosses federal and national boundaries. Rules will be compiled and implemented that permit consistent spatial information to be derived and displayed from geodata recorded at different scales and in different formats. In order to derive these rules, it is necessary to analyse the effects of spatial information at different scales. Therefore, the groundwater vulnerability calculation is performed at different scale levels, in which the results are conditioned by different input data, geometrical accuracies and algorithms. The present paper outlines the assessment of groundwater vulnerability at the microscale (~1:5,000), the mesoscale (~1:25,000) and the macroscale (<1:50,000). In order to cover these scales, three study areas were selected (see Fig. 1):
- Saubach catchment area, 16 km² (microscale),
- Inde catchment area, 353 km² (mesoscale),
- Rur and Erft catchment area, 4,125 km² (macroscale).

Database

The input data are incorporated at a staggered resolution corresponding to the individual scale ranges (see Table 1).
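The staggered, scale-dependent choice of input data described above can be sketched as follows; the scale bounds and dataset names are invented for illustration, not taken from the project.

```python
# Hypothetical sketch of scale-staggered data selection: each scale level
# draws on a different (here invented) source dataset, so the same
# vulnerability rule can run unchanged at every level.
SCALE_LEVELS = [
    # (upper bound of scale denominator, level name, example soil map)
    (10_000, "microscale", "soil map 1:5,000"),
    (50_000, "mesoscale", "soil map 1:25,000"),
    (float("inf"), "macroscale", "soil map 1:50,000"),
]

def select_level(scale_denominator: int):
    """Return (level, dataset) for a target map scale 1:scale_denominator."""
    for bound, level, dataset in SCALE_LEVELS:
        if scale_denominator <= bound:
            return level, dataset
    raise ValueError("no level configured")

print(select_level(5_000))   # ('microscale', 'soil map 1:5,000')
print(select_level(25_000))  # ('mesoscale', 'soil map 1:25,000')
```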


Figure 1: Location of the study areas.

Table 1: Names of the GML features returned by three WFS and their properties



Method

The Hölting method considers certain soil-physical, hydrological and geological parameters with respect to their influence on the residence time of the leachate and the degradation potential in the soil. The assessment is performed with a standardized, parametric assessment system (see Fig. 2). The sub-areas »soil«, »leachate rate«, »rock type of the individual strata« and their thickness, as well as »suspended groundwater layers« and »artesian pressure conditions«, are considered separately and each given a dimensionless point value. The factor leachate rate was determined separately for each scale using the GROWA model (Kunkel & Wendland, 2002). The sub-point values are combined by an assessment algorithm and then reclassified into five

Figure 2: Derivation scheme of the Groundwater Vulnerability assessment according to Hölting et al. (1995).


intervals (vulnerability classes) from very low to very high.

Results

The preliminary results of the groundwater vulnerability calculation for the study areas are presented in Fig. 3 (microscale), Fig. 4 (mesoscale) and Fig. 5 (macroscale).
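The parametric point-count assessment can be illustrated with a small sketch; the point values, the combination rule and the class boundaries below are invented for illustration, since the published concept (Hölting et al., 1995) defines its own tables.

```python
# Illustrative sketch of a parametric point-count assessment in the spirit of
# the Hölting et al. (1995) concept. All numeric values here are invented.
def vulnerability_class(points: float) -> str:
    """Reclassify a total point value into one of five classes."""
    bounds = [(500, "very low"), (2000, "low"), (3000, "medium"), (4000, "high")]
    for upper, label in bounds:
        if points < upper:
            return label
    return "very high"

def total_points(soil, leachate_rate_factor, strata):
    """Combine sub-point values: soil score times leachate factor, plus
    rock-type score times thickness for each stratum."""
    return soil * leachate_rate_factor + sum(rock * thickness for rock, thickness in strata)

# Hypothetical site: soil score 250, leachate factor 1.5, two strata.
pts = total_points(250, 1.5, [(20, 30.0), (15, 40.0)])
print(pts, vulnerability_class(pts))  # 1575.0 low
```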

First comparisons of the results showed that the magnitude and spatial distribution of groundwater vulnerability calculated at the different scales are quite similar. However, as expected, the assessment of groundwater vulnerability at different scale levels yielded different results for the same area. These differences are determined by the parameter variation, which in spatial terms essentially depends on two factors. The content-related differentiation of the input data concerning geological properties and land use diverges between the scale levels (accuracy). Furthermore, the data differ with respect to spatial precision. Both factors jointly determine the significance and validity of the information derived.

Figure 3: Groundwater Vulnerability for the Saubach catchment (microscale).

Figure 4: Groundwater Vulnerability for the Inde catchment (mesoscale).

Figure 5: Groundwater Vulnerability for the Rur and Erft catchments (macroscale).

Detailed studies of anthropogenic influences (e.g. due to urbanization and abandoned industrial sites), of the influence of the geological structure, of the regionalization of climate data and of the determination of the leachate rate will be performed next.

Conclusions

Determination of groundwater vulnerability provides a suitable framework for developing a prototype system architecture. Basic data from different scale levels also enable the input data, parameters and algorithm to be varied in a meaningful manner. Thus, it is possible to adapt the precision of the groundwater vulnerability calculation to the different demands of the users. In order to investigate the influence of the parameters on groundwater vulnerability at different scale levels, the influence of one area parameter regarded as particularly important will be studied further at each scale level.

References

Hölting, B., Haertlé, T., Hohberger, K.-H., Nachtigall, K.-H., Villinger, E., Weinzierl, W., Wrobel, J.P. (1995): Konzept zur Ermittlung der Schutzfunktion der Grundwasserüberdeckung. Geol. Jb. Series C, No. 63. Schweizerbartsche Verlagsbuchhandlung, Stuttgart.


Kunkel, R., Wendland, F. (2002): The GROWA98 model for water-balance analysis in large river basins - the river Elbe case study. J. Hydrol., 259, 152-162.


Advancement of Geoservices – Services for Geoscientific Applications Based on a 3D Geodatabase Kernel

Breunig M., Bär W., Thomsen A.

Research Centre for Geoinformatics and Remote Sensing, University of Vechta, P.O. Box 1553, 49364 Vechta, Germany, E-Mail: {mbreunig | wbaer | athomsen}@fzg.uni-vechta.de

OBJECTIVES

Future geoservices will provide ubiquitous access to geodata. The efficient exchange of geodata will therefore be a central and critical task (Giguère, 2001; Breunig & Bär, 2003). The requirements for versatility, interoperability, portability and performance of mobile and distributed geoservices are considerable, as are the requirements for 3D and 4D geodatabases in a distributed and mobile environment. Such geodatabases must efficiently manage a huge number of large and complex application-specific objects such as 2D, 3D and spatio-temporal models. The necessity of a clear, reliable and not overly complex implementation has to be weighed against the requirements for flexibility and performance in the management of complex geometric and topological objects on the database server, as well as against the requirements for interoperability and speed of data transmission in an environment of low bandwidth. In this extended abstract we discuss object-oriented 3D geodatabase services to be used in a mobile environment. In the first section, we discuss the requirements of geological applications for the management of 3D geodata, with reference to the common application scenario presented by the Karlsruhe partner project. We briefly outline the main design and implementation choices, and then present a complex 3D-to-2D service. Tests with large 3D geometry objects show the different performance of page-server and

object-server based architectures. Finally we give a conclusion and an outlook on our future research work.

REQUIREMENTS OF GEOSCIENTIFIC APPLICATIONS

Complex models of the subsurface

Whereas classical GIS applications generally represent the surface of the earth by 2D/2.5D geometry models, geological applications concern the subsurface and therefore use genuine 2.5D/3D geometry models. There is, however, another feature that distinguishes geological modelling from classical GIS: the subsurface in its entirety is not accessible to observation. Therefore the position, form and structure of geological bodies must be inferred from spatially limited observations, e.g. by statistical estimation or by the interpretation of seismic data. A great deal of geological background knowledge is necessary to achieve a useful model of the subsurface, which will always be subject to considerable uncertainty. The maps and sections used by geologists are only projections of, and cuts through, a background model that has undergone a number of interpretation and estimation steps. The introduction of informatics makes it possible to communicate such knowledge not only by 2D maps and sections derived implicitly from geometry models, but to make the process of geometrical modelling of the subsurface explicit and communicable. Present-day 3D modelling tools allow several geologists to cooperate during the establishment of a subsurface model, making the process of modelling reproducible.

3D geometry and topology combined with thematic attributes

Database services for subsurface geology applications must provide access to entire 3D models, as well as to 2D representations (projections and sections) derived therefrom. Whereas in the case of mostly undisturbed sedimentary bodies triangulated 2.5D surfaces representing strata boundaries may be sufficient, the general case, comprising also important faults and folds or non-stratiform bodies (e.g. salt domes), requires true 3D models. A geological subsurface model consists of a structural model of the geometry and topology, and a property model of the thematic attributes. Different approaches to 3D modelling can be distinguished: regular and hierarchical grids, 2D and 3D simplicial complexes, freeform surfaces and 3D volume bodies in boundary representation, as well as hybrid approaches. In our project, we restrict ourselves to 2D and 3D simplicial complexes and the boundary representation of complex 3D volume objects. Thematic attributes attached to geometry objects provide a flexible way of managing a simple property model.

Database services, modelling tools and on-site clients

The system of distributed geoservices developed in this joint project consists of on-site clients for data acquisition, viewing and augmented reality, which are developed by our project partners in Munich, Heidelberg and Karlsruhe. These clients communicate over the network with the geodatabase services presented here. Besides an efficient 3D geometry database providing shared access, storage and retrieval, and a comprehensive set of services providing problem-specific operations and transformations, a distributed environment for geological applications requires a powerful interactive 3D modelling system. We use GOCAD® (Mallet, 1992) as modelling tool and concentrate our research on the


3D database and basic geometric/topological operations.

Transactions and versions

In a distributed and mobile environment, a 3D geometry database server for geological applications should enable geologists in the field, as well as in the laboratory, to refer to a shared common model of the subsurface during the process of data capture, processing, interpretation and assessment. The cycle of steps involved in updating a geological model can be long, however, and the result may never be free of subjective appreciation. Therefore, rather than supporting direct editing by transaction management, it is advisable to use strategies of version management to control the evolution of the shared model.

Restrictions on the client side

A comprehensive subsurface model may consist of hundreds of geological bodies, each represented by complex objects, e.g. triangulated surfaces composed of more than a hundred thousand elements (e.g. triangles). Considering a portable client instrument, e.g. a robust PDA combined with a GPS client, both the transmission and the graphical representation of such a complex model are not yet realistic, because of the insufficient available bandwidth and the performance of the graphical display. On the other hand, the geoscientist in the field often needs only a selected part of the information, specified e.g. by a 3D region, a stratigraphic interval, a set of thematic attributes and other geometric and thematic criteria. Even such reduced information may be too large for use in the field, motivating the use of techniques of data reduction and progressive transmission (Shumilov et al., 2002). The graphical representation of a 3D model can be reduced to a sequence of 2D sections and projections that are displayed using the limited graphical capabilities of a mobile client. By sliding through successive sections, even a 2D display can provide insight into the form and structure of a complex 3D body.
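The version-management strategy, as opposed to in-place editing, can be sketched minimally as follows; class and attribute names are illustrative, not those of the actual geodatabase.

```python
# Minimal sketch of the version-management idea: instead of editing the shared
# model in place, each update creates a new immutable version pointing to its
# parent, so long-running field interpretations never block one another.
from dataclasses import dataclass, field
from typing import Optional, Dict

@dataclass(frozen=True)
class ModelVersion:
    version_id: int
    parent: Optional[int]            # parent version, None for the root
    changes: Dict[str, str] = field(default_factory=dict)

history = {0: ModelVersion(0, None, {"stratum_A": "initial pick"})}

def commit(parent_id: int, changes: Dict[str, str]) -> ModelVersion:
    """Derive a new version from a parent without mutating it."""
    vid = max(history) + 1
    v = ModelVersion(vid, parent_id, changes)
    history[vid] = v
    return v

# Field and laboratory interpretations branch independently from version 0.
field_edit = commit(0, {"stratum_A": "refined after outcrop visit"})
lab_edit = commit(0, {"stratum_B": "reinterpreted seismic horizon"})
print(field_edit.parent, lab_edit.parent)  # 0 0
```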


DESIGN CHOICES

Object-oriented DBMS

For a number of years, the object-oriented approach to geometry modelling has been well established. In the domain of database management, however, object-oriented DBMS have not succeeded in supplanting the so-called object-relational approach, i.e. the extension of relational databases by some object-oriented features. We decided to use a genuine OODBMS as the basis for the 3D geometry database, because of the more straightforward mapping of geometry models onto persistent storage. Performance considerations led to the choice of a commercial object-oriented DBMS with page-server architecture.

Interoperability

Interoperability can be supported in different ways. Whereas CORBA (OMG) technology can be used to achieve a flexible connection between clients and servers in a heterogeneous environment (Shumilov, 2003; Shumilov et al., 2002), we chose the Java programming language for portability and XML for flexibility of data exchange, while the invocation of specific services of the 3D database over the network is supported by the Java-based Jini (Waldo & Arnold, 2000). XML is today in widespread use for flexible data exchange in a heterogeneous environment. Because of its extensibility, XML can be used to express practically any object-oriented data structure, though at the cost of considerable redundancy, which can however be reduced by the use of general-purpose compression techniques. Moreover, the XSL transformation language XSLT provides tools for transformation between different XML representations.

3Dto2D SERVICE FOR GEOLOGICAL MODELS

In the application scenario described by our Karlsruhe project partners, geological profile sections serve as the basis for the construction of a 3D geometry model. Such 2D sections, however, also provide the classical means for the investigation and visualisation of complex

geometries. Therefore, as a concrete geological application, we discuss the 3Dto2D geoservice. It provides all the functionality needed to reduce the information of complex 3D models to 2D models in order to make the model usable and displayable on constrained client devices (PDAs). Such a service on a mobile device will allow the field geologist to compare the actually observed situation with the information provided by the subsurface model, and to take sampling decisions accordingly. The 3Dto2D service provides the derivation of 2D profiles from a 3D model. It is composed of the following elementary services:
- RetrieveService – supports queries for the complex geological objects.
- PlaneCut – cuts a planar profile through the 3D model for a given plane.
- PlaneProjection – projects objects onto the profile plane.
- AffineTransform – transforms the resulting 3D object into the 2D xy plane.
The construction of piecewise planar profile sections by repetition of these operations with different parameter values is straightforward. The result computed from the 3D model by this service is a 2D map in the xy plane representing an arbitrary planar profile through the model, with additional information projected onto the profile. Figure 1 shows the principal steps of the 3Dto2D service. Each of the elementary services implies a large number of geometry operations and consequently requires a considerable amount of time. Therefore, in order to reduce the length of transactions, each of the elementary services of which the 3Dto2D service consists operates in its own transactional mode. Single failures of an elementary service can be compensated by restarting that elementary service, and do not require starting the whole process from the beginning.
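A minimal sketch of the PlaneProjection and AffineTransform steps, assuming a profile plane given by an origin point, a unit normal and two in-plane axes; all names and signatures are illustrative, not the actual service interfaces.

```python
# Hypothetical sketch of the PlaneProjection and AffineTransform steps of the
# 3Dto2D service: project 3D points onto a profile plane and express them in
# 2D plane coordinates.
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)

def plane_projection(points: List[Vec3], origin: Vec3, normal: Vec3) -> List[Vec3]:
    """Orthogonally project each point onto the plane (origin, unit normal)."""
    return [sub(p, scale(normal, dot(sub(p, origin), normal))) for p in points]

def affine_transform(points: List[Vec3], origin: Vec3, u: Vec3, v: Vec3):
    """Map projected points into the 2D (u, v) coordinate frame of the plane."""
    return [(dot(sub(p, origin), u), dot(sub(p, origin), v)) for p in points]

# A vertical profile plane through the origin, striking along the x axis:
origin, normal = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # unit normal
u, v = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)             # in-plane axes: x and depth
boreholes = [(100.0, 5.0, -30.0), (250.0, -3.0, -45.0)]
profile2d = affine_transform(plane_projection(boreholes, origin, normal), origin, u, v)
print(profile2d)  # [(100.0, -30.0), (250.0, -45.0)]
```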



Figure 1: Example of a 3Dto2D service: Planar profile section between endpoints A and B, with projected borehole profiles b1 and b2. (a) – location in map plane, (b) – block view of 3D model, (c) – view of profile section with part of model removed, (d) – resulting 2D profile section with the projected boreholes.

The results of this and all other geoservices made accessible by our database are delivered in a custom XML representation. As such, the results can easily be transformed with XSLT into other XML representations such as GML in 2D or X3D in 3D, or into any other textual format such as VRML, GOCAD ASCII, etc.

PERFORMANCE TEST ON GEOMETRIC QUERIES

The performance tests for the object-oriented data stores were performed with a page-server and an object-server based system architecture of an OODBMS, respectively. We examined different spatial queries for the retrieval of the internal geometric elements of a complex geological object. Such object-internal queries are used in almost all geological algorithms, based on the optimisation through internal spatial indexing of

the complex object elements. For the performance tests we used simulated geometry objects from the CS department at Bonn University (Breunig et al., 2001). For each architecture, a test database was built consisting of two triangulated surfaces comprising 100,000 and 200,000 triangles. Then different spatial queries on the internal geometric elements of the complex objects were executed. Tables 1 and 2 show the times needed for the insertion of the surfaces into the test databases as well as the retrieval times for different spatial queries. The spatial join query between the two intersecting surfaces resulted in a set of 1,979 intersecting triangles. The window queries, performed with two different query sizes, resulted in 1,144 (5%) and 61,255 elements (25%). Clearly the page-server architecture outperforms the object-server architecture for this application, due to the large number of internal elements of the complex geological objects. A triangulated surface with 200,000 triangles comprises, including internal R*-tree objects and internal topological relations, approximately 1,100,000 individual objects. Under such a load the object-based server becomes a bottleneck through its object-based network transport and its object-level locking. The page-server architecture benefits from the fact that it groups about 150 to 200 small objects onto one page, which is also the unit of locking and of transport over the network.

Table 1: Results for the page-server based system architecture of an OODBMS

  Page-server based data store   Unit      100,000 triangles   200,000 triangles
  Insertion                      Minutes   0.6                 1.2
  Spatial Join                   Seconds   44.3
  Window Query (5% of space)     Seconds   1.9
  Window Query (25% of space)    Seconds   33.5
  Nearest Neighbour              Seconds   1.2
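A back-of-envelope calculation illustrates the effect of page grouping, using the object count quoted above and the midpoint of the 150 to 200 objects-per-page range; the per-operation cost model is a deliberate simplification.

```python
# Back-of-envelope sketch of why page grouping helps: with ~1,100,000 internal
# objects (figure from the text) and 150-200 objects per page, an object
# server performs roughly two orders of magnitude more network/lock operations.
objects = 1_100_000
objects_per_page = 175                 # midpoint of the 150-200 range

object_server_ops = objects            # one transfer + lock per object
page_server_ops = -(-objects // objects_per_page)  # ceiling division: one per page

print(page_server_ops, round(object_server_ops / page_server_ops))  # 6286 175
```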

Table 2: Results for the object-server based system architecture of an OODBMS

  Object-server based data store   Unit      100,000 triangles   200,000 triangles
  Insertion                        Minutes   42.2                233.3
  Spatial Join                     Seconds
  Window Query (5% of space)       Seconds   8.1
  Window Query (25% of space)      Seconds   234.6
  Nearest Neighbour                Seconds   2.5

CONCLUSION AND OUTLOOK

We have presented requirements of geoscientific applications for 3D geometry data management services. We briefly discussed a geological application scenario and a geoservice for mobile devices. A performance test on a simulated data set showed the advantage of the page-server over the object-server architecture for very large complex objects. In future work, we will extend our research on geological services based on object-oriented data stores towards further types of services, especially the management of spatio-temporal data for mobile geological services.

REFERENCES

Breunig, M., Bär, W.: Usability of Geodata for Mobile Route Planning Systems. Proc. 6th Int. AGILE Conference on Geographic Information Science, Lyon, 451-462, 2003.
Breunig, M., Cremers, A.B., Müller, W., Siebeck, J.: New Methods for Topological Clustering and Spatial Access in Object-Oriented 3D Databases. Proc. 9th ACM GIS, Atlanta (GA), 10 p., 2001.
Giguère, E.: Mobile Data Management: Challenges of Wireless and Offline Data Access. Proc. 17th Intern. Conf. on Data Engineering, Heidelberg, IEEE Computer Society, Los Alamitos (CA), 227-228, 2001.
Mallet, J.-L.: GOCAD: A Computer Aided Design Program for Geological Applications. In: A.K. Turner (ed.), Three-Dimensional Modeling with Geoscientific Information Systems, NATO ASI 354, Kluwer Academic Publishers, Dordrecht, 123-142, 1992.
Shumilov, S.: Integrating existing object-oriented databases with distributed object management platforms. PhD dissertation, Bonn, 2003, 245 p.
Thomsen, A., Cremers, A.B., Shumilov, S., Koos, B.: Management and Visualisation of Large, Complex and Time-Dependent 3D Objects in Distributed GIS. Proc. 10th ACM GIS, McLean (VA), 2002.
Waldo, J., Arnold, K.: The Jini Specifications, 2nd edition, Pearson Education, 2000, 688 p.



Integrating Imagery and ATKIS-data to Extract Field Boundaries and Wind Erosion Obstacles

Butenuth M., Heipke Ch.

Institute of Photogrammetry and GeoInformation, University of Hannover, Nienburger Str. 1, 30167 Hannover, Germany, E-Mail: {butenuth | heipke}@ipi.uni-hannover.de

1. Introduction and motivation

The integration of different data sources is an important task for obtaining comprehensive solutions in many areas, for example geo-scientific analysis, updating and visualization. The specific task being carried out at the Institute of Photogrammetry and GeoInformation (IPI), together with the Niedersächsisches Landesamt für Bodenordnung (NLfB) and the Bundesamt für Kartographie und Geodäsie (BKG), is the integration of vector data with imagery to enhance the Digital Soil Science Map of Lower Saxony (Germany) with automatically derived field boundaries and wind erosion obstacles (hedges, tree rows), cf. (Sester et al. 2003). This data is needed for various problems: (1) one application area is the derivation of potential wind erosion risk fields, which can be generated with additional information about the prevailing wind direction and soil parameters; (2) another is the agricultural sector, where information about field geometry is important for tasks concerning precision farming. Several investigations have been carried out regarding the automatic extraction of man-made objects such as buildings or roads, see (Mayer 1999) for an overview, and regarding the extraction of trees, see (Hill and Leckie 1999) for an overview. In contrast, the extraction of field boundaries from high-resolution imagery is still not at an advanced stage: (Löcherbach 1998) presented an approach to refine topologically correct field boundaries, but not to acquire new ones, and (Torre and Radeva 2000) give a semi-automatic approach to extract field boundaries from aerial images with a combination of region-growing techniques and snakes. Generally, the extraction of objects with the help of image analysis methods starts with an integrated modelling of the objects of interest and the surrounding scene. Furthermore, exploiting the context relations between different objects leads to more comprehensive and holistic descriptions, see for example (Baumgartner et al. 1997). The use of GIS data as prior knowledge to support object extraction usually leads to better results (Baltsavias 2002). These aspects concerning the modelling of the extraction of field boundaries and wind erosion obstacles will be described in detail in the next section, before the strategy for the extraction of field boundaries and wind erosion obstacles is explained. Furthermore, results will be given to demonstrate the potential of the proposed solution.

2. Modelling the integration of vector data and aerial imagery

Modelling the integration of vector data with imagery in one semantic model is an essential prerequisite for successful object extraction, as described in detail for example in (Butenuth and Heipke 2003). In the semantic model we differentiate an object layer, consisting of a geometric and a material part, a GIS part and a Real World part, and an image layer (cf. Figure 1). The »concrete of« relations between the different layers of one object are referenced in the image layer to CIR images, which are taken in summer, when the vegetation is at an advanced stage of growth. The use of


prior knowledge plays a particularly important role; it is represented in the semantic model by an additional GIS-layer, in this case the vector data of ATKIS DLMBasis (cf. Figure 1): (1) Information about field boundaries and wind erosion obstacles is only of interest in the open landscape; thus, settlements, forests and water bodies are masked out in the imagery for further investigation. (2) The road network, rivers and railways can be used as borderlines to select regions of interest and thereby simplify the scene. The subsequent processing focuses on the field boundaries within the selected regions of interest, which have to be extracted; the external ones are consequently already fixed (e.g. a road is a field boundary). This fact is incorporated into the semantic model, and the mentioned objects are introduced with a direct relation from the GIS-layer to the Real World. (3) The objects hedge and tree row are modelled in the ATKIS-data of the GIS-layer, but only when they are longer than 200 m and lie along roads or are formative for the landscape. Accordingly, only

part of the wind erosion obstacles is contained in the ATKIS-objects, and the relation from the GIS-layer to the Real World is limited. The combined modelling of the objects of interest additionally leads to relationships inside the layers: one object can be part of another or be parallel and nearby, and together they form a network in the real world. For example, wind erosion obstacles are not located in the middle of a field, because of disadvantageous cultivation conditions, but solely on the field boundaries. Accordingly, the geometries of these different objects are partly identical or at least parallel with a short distance in between. The object field is divided into field area and field boundary in order to allow for different modelling: on the one hand the field area is a 2D vegetation region, which is described in the image as a homogeneous region with a high NDVI (Normalised Difference Vegetation Index) value; on the other hand the field boundary is a 2D elongated vegetation boundary, which is described in the image as a straight line or edge. Both descriptions lead to the desired result from different sides. The object wind erosion obstacle is divided into hedge and tree row due to the different information available from the GIS-layer. Afterwards, both objects are merged, because the »concrete of« relations are identical in the modelling. In addition, the wind erosion obstacles are not only described by their direct appearance in geometry and material, but also by the fact that, due to their height (3D objects), there is a 2D elongated shadow region next to them in a known direction (e.g. to the north at noon). Therefore, the »concrete of« relation consists not only of an elongated, coloured region with a high NDVI value, but additionally of an elongated dark region with a low NDVI value alongside the real object of interest in a known direction.

Figure 1: Semantic Model.

3. Strategy to extract field boundaries and wind erosion obstacles from aerial imagery

The strategy to extract field boundaries and wind erosion obstacles is based on the modelled characteristics: regions of interest are selected to take into account the constraints described in the semantic model. The strategy first divides the approach into two different algorithms to accomplish the extraction of the different objects separately. At a later date we will return to a combined solution due to the mentioned geometrical and thematic similarities of the objects of interest. A common evaluation of the intermediate results will enable an enhanced and refined extraction process. The strategy for the extraction of field boundaries exploits different characteristics of fields and their surrounding boundaries to derive a result that is as correct as possible. The homogeneity of the vegetation within each field enables a segmentation of field areas, processed at a coarse scale to ignore small disturbing structures. The absolute values of the gradient are computed on the basis of a Gaussian-filtered version of the NDVI-image.
The topography of the resulting grey values is used as input for a watershed segmentation (Beucher 1982). The deduced segments are marked with their corresponding mean grey value of the NDVI-image. Potential field areas are derived by grouping the resulting segments with respect to neighbourhood and a low grey-value difference, additionally considering a minimum size. The segmentation result is only an intermediate one, because the case of identical vegetation in neighbouring fields – and therefore a missing boundary – has so far been ignored. A line extraction is carried out in the original-resolution image to derive the missing field boundaries. The extracted line segments are processed to straight lines in consideration of a minimum length, due to the characteristics of the field boundaries; remaining lines are rejected. A further step is the selection of lines which are not located next to already derived segment boundaries. In addition, lines are linked at their calculated intersection points if the corresponding end points have a distance smaller than a threshold, and furthermore the lines are extended to the boundaries of the segments if the distance again lies below a threshold. Finally, gaps between intersection points are closed if the distance is not too large. The strategy for the extraction of wind erosion obstacles uses the modelled prior knowledge to generate search areas deduced from the ATKIS-data. Extracted information about a high NDVI value and a higher value of the Digital Surface Model (DSM) than the surrounding area gives evidence of wind erosion obstacles within the defined search areas. Objects of interest which are not located near GIS objects have to be extracted without prior information about their location. In addition to the extraction of high NDVI values and higher DSM values, characteristics such as straightness and a minimum length and width have to be taken into account to derive wind erosion obstacles.

4. Results and conclusions

In Figure 2 on the left side a selected region of interest is shown; the boundaries of the region have been derived from GIS-data. In Figure 2 on the right side the results of the segmentation and the grouped field boundaries are depicted. The derived results of the proposed


approach are promising, because not only well visible field boundaries, but also most of the ones which are hard to distinguish, could be successfully extracted. In the future, the use of colour and/or texture may stabilize the segmentation step. Additionally, the use of an integrated network of snakes for all field areas of the region of interest could improve the topological correctness of the results. The solution concerning the extraction of field boundaries has to be evaluated with a larger data set to better assess the proposed strategy. Work on the extraction of wind erosion obstacles is currently in progress, first results will be available soon.
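The NDVI that drives the segmentation (cf. Figure 2, left) is a standard per-pixel index; a minimal sketch, with invented sample values:

```python
# Illustrative sketch (not the authors' implementation) of the NDVI input used
# for the field-area segmentation: NDVI = (NIR - R) / (NIR + R), computed per
# pixel from the CIR image's near-infrared and red channels.
def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalised Difference Vegetation Index for two 2D grids."""
    return [[(n - r) / (n + r + eps) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]

# Invented 2x2 sample: vegetated fields show high NDVI (close to 1),
# bare soil or shadow much lower.
nir = [[200, 180], [60, 210]]
red = [[40, 50], [55, 45]]
grid = ndvi(nir, red)
print(round(grid[0][0], 2))  # 0.67
```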

Figure 2: Left: NDVI-image, depicted is one selected region of interest. Right: Segmented field areas; additionally shown are lines extracted and grouped to field boundaries (white lines).

5. References
Baltsavias, E. P., 2002, Object Extraction and Revision by Image Analysis Using Existing Geospatial Data and Knowledge: State-of-the-Art and Steps Towards Operational Systems, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Xi'an, vol. XXXIV, part 2, pp. 13-22.
Baumgartner, A., Eckstein, W., Mayer, H., Heipke, C. and Ebner, H., 1997, Context Supported Road Extraction, in: Gruen, Baltsavias, Henricson (Eds.), Automatic Extraction of Man-Made Objects from Aerial and Space Images II, Birkhäuser, Basel Boston Berlin, vol. 2, pp. 299-308.
Beucher, S., 1982, Watersheds of Functions and Picture Segmentation, IEEE International Conference on Acoustics, Speech and Signal Processing, Paris, pp. 1928-1931.
Butenuth, M. and Heipke, C., 2003, Modeling the Integration of Heterogeneous Vector Data and Aerial Imagery, Proceedings ISPRS Workshop on Challenges in Geospatial Analysis, Integration and Visualization II, Stuttgart, Germany, pp. 55-60.
Hill, D. A. and Leckie, D. G. (Eds.), 1999, International Forum: Automated Interpretation of High Spatial Resolution Digital Imagery for Forestry, February 10-12, 1998, Natural Resources Canada, Canadian Forest Service, Pacific Forestry Centre, Victoria, British Columbia.
Löcherbach, T., 1998, Fusing Raster- and Vector-Data with Applications to Land-Use Mapping, Inaugural-Dissertation der Hohen Landwirtschaftlichen Fakultät der Universität Bonn, Germany.
Mayer, H., 1999, »Automatic Object Extraction from Aerial Imagery – A Survey Focusing on Buildings«, Computer Vision and Image Understanding, no. 2, pp. 138-149.
Sester, M., Butenuth, M., Gösseln, G. v., Heipke, C., Klopp, S., Lipeck, U., Mantel, D., 2003, New Methods for Semantic and Geometric Integration of Geoscientific Data Sets with ATKIS Applied to Geo-Objects from Geology and Soil Science, in: Geotechnologien Science Report »Information Systems in Earth Management«, Koordinierungsbüro Geotechnologien, Potsdam, No. 2, pp. 51-62.
Torre, M. and Radeva, P., 2000, Agricultural Field Extraction From Aerial Images Using a Region Competition Algorithm, International Archives of Photogrammetry and Remote Sensing, Amsterdam, vol. XXXIII, part B2, pp. 889-896.



Simulation and Information Software for Implementing the European Water Framework Directive: ISSNEW Dannowski R. (1), Michels I. (2), Steidl J. (1), Wieland R. (1) (1) Leibniz-Zentrum fuer Agrarlandschafts- und Landnutzungsforschung (ZALF) e. V., Müncheberg, Institut fuer Landschaftswasserhaushalt, Institut fuer Landschaftssystemanalyse, Eberswalder Str. 84, 15374 Müncheberg, Germany, E-Mail: {rdannowski | jsteidl | rwieland}@zalf.de (2) WASY Gesellschaft fuer wasserwirtschaftliche Planung und Systemforschung mbH, Waltersdorfer Strasse 105, 12526 Berlin, Germany, E-Mail: i.michels@wasy.de

The Problem
Excessive NO3- concentrations in ground- and surface waters are a problem throughout the EU. Between 50 and 80 % of NO3- input stems from agricultural sources. The European Water Framework Directive (WFD) calls for the development of tools to:
- make long-term estimates of nutrient loadings
- identify regions of high sensitivity
- evaluate proposed measures for mitigation
The implementation of the legal and obligatory guidelines of the WFD has led to a number of new requirements that need to be met by the German water management administrations. Not only does additional information about the state of water management resources need to be collected and systematically prepared, but also, extending on this data, all relevant water bodies in Germany need to be documented and assessed. If the assessment leads to the conclusion that the water does not meet the called-for ecological and physical/chemical parameters, then appropriate measures are to be drawn up and undertaken to meet the WFD requirements. All of these activities are to be carried out within a governmentally controlled plan undergoing a multi-step process of open participation from the public.

With regard to time, the WFD has stipulated that all such measures must be set up by 2009, with the fulfilment of WFD requirements to take place no later than 2015. If one considers, however, the processes in the water cycle over time, two things need to be pointed out:
1. It appears questionable whether it is possible to meet the requirements in every case within six years.
2. It appears questionable whether it is possible with the currently available technology to take the most sound, optimal measures in meeting the WFD requirements.
This leads to two consequences, apart from the earliest possible undertaking of appropriate initial measures:
1. Measures should be introduced based on efficiency and differentiated according to specific place and time, so that within the given time frame at least initial effects (e. g. a reverse trend in the water pollution levels) can be demonstrated.
2. In determining the appropriate measures, tools must be available that explicitly support the specific location and time requirements in allocating the courses of action (e. g. via scenario analysis techniques based on distributed and process-oriented models). In doing so, relevant information can be factored in as extensively as possible.



The WFD is also, however, the cause and basis of rethinking environmental procedure for another reason. First and foremost, a new aspect consists of projects related to river regions, something which the WFD unconditionally stipulates. The ramifications of this are clear: as a rule, multiple organisations and countries will need to work together. From the perspective of IT, this in turn entails such topics as multi-user access, client/server architectures, and data storage, among many others. Yet there are also large hurdles concerning non-coordinated data collection, non-standard data structures, unknown access paths to the data, and the problematic further application of the data, to name just a few. If one examines the current practice of using GIS systems in water management specifically, one will find primarily individual workstations and file-based work including, among others, file sharing over network data servers, heterogeneous and inconsistent data records, missing links between individual topics, and much more that prevents efficient and high-quality work. Some GIS and databases are linked, where a specific application or a specific person is usually responsible for maintaining the data. All of these practices eventually lead to problems with the upkeep and use of spatial reference information, thus ultimately preventing the implementation of this technology as a strategic instrument to obtain information.

Aims
The ISSNEW project includes the preparation of equipment that meets the essential parameters of the WFD in the form of a market-ready product family that can supply many of the current WFD obligations with efficient solutions. This equipment, therefore, must have the following components (Fig. 1):
1. GIS- and database-based information system for the gathering, structuring and visualization of geo-data and simulation results on non-point nutrient input into water bodies (ground water and surface water).
2. Simulation system for the evaluation of the effects that measures against non-point nutrient flow from agricultural lands and non-point nutrient input into water bodies have on water quality.
3. Bi-directional intelligent interfaces between the information system and the simulation system.

Goals
Through the components themselves, and especially through their effective collaboration, the following goals are to be reached:
- Extending on a standardised information structure taking into account the importance of simulation models, a platform-independent, completely component-based software system for data storage, analysis, and presentation needs to be available.
- The already existing large geo databases, as well as the ones still to be collected as planned by water management (river catchment areas), are to be supplied performantly by the implemented information system, improved, and optimally further processed.
- Against the background of the WFD implementation, an improved management of knowledge should be made possible through the integration of space-/time-based, scenario-capable simulation and modelling software for non-point nutrient input into the groundwater and into the surface water.
- For the further development of geo services, internet-oriented solutions for general access to the planned databases, independent of location, need to be available.

Figure 1: ISSNEW components.

Simulation system
The main essentials of the simulation system (Fig. 2) have already been outlined (Dannowski et al., 2003). A new development is the so-called UZ module (v. Waldow, 2004), which will describe the vertical water flow and nitrate transport in the (deeper) unsaturated zone.

Pilot area
For model validation and the test of the ISSNEW compound, a pilot area has been established in eastern Brandenburg (Fig. 3). Hydrogeologic data has been digitised from meso-scale thematic maps. Land use and agricultural data are available at ZALF from former monitoring and evaluation studies. Additional information was obtained from the State Environmental Agency, which is also interested in scenario results of the underground nitrate transport for that region.

Figure 2: The ISSNEW simulation system (interaction between modules SOCRATES, UZ, FEFLOW).



Figure 3: Map of the pilot area in eastern Brandenburg.

References
Dannowski, R., Michels, I., Steidl, J., Wieland, R., Gründler, R., Kersebaum, K.-C., Waldow, H. v., Hecker, J.-M. & Arndt, O. (2003): Implementation of the European Water Framework Directive: ISSNEW – developing an information and simulation system to evaluate non-point nutrient entry into water bodies. GEOTECHNOLOGIEN Science Report No. 2: Information Systems in Earth Management, Projects. Kick-Off Meeting, Univ. of Hannover, 19 Febr. 2003. Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 23-29. (ISSN 1619-7399)
Waldow, H. v. (2004): The implementation of an analytical model for deep percolation. This issue.



Information Systems Fueled with Data from Geo-Sensor Networks Egenhofer M. J. National Center for Geographic Information and Analysis, Department of Spatial Information Science and Engineering, Department of Computer Science, University of Maine, Orono, ME 04469-5711, U.S.A., E-Mail: max@spatial.maine.edu; WWW: http://www.spatial.maine.edu/~max/

The advent of microsensors will have a huge impact on the way geospatial information can be analyzed. Of particular interest are chemical and biological sensors, as they can track traces of substances beyond what meets the human eye. The combination with location sensors, such as GPS receivers, and wireless communication devices creates geo-sensors. Measurements of multiple geo-sensors can be related spatially and temporally so that changes in real-world phenomena can be analyzed almost in real time. Current geographic information systems are not well suited for the analysis of such data streams, as they lack appropriate data models for fields that are flexible enough to deal with space-time series. This presentation will focus on field data models that are tailored to data collected by geo-sensor networks and, therefore, form the foundation for information systems that are

driven by geo-sensor networks. We will highlight two important properties of a geo-sensor data model: First, while projecting a geo-sensor field first temporally and then spatially is conceptually equivalent to projecting it first spatially and then temporally, the actual results may differ due to the interpolation methods employed. Second, since a single geo-sensor that moves while collecting data captures a field (the values collected along a space-time path), the geo-sensor field of multiple such rovers is recursively a field of fields. These properties form the foundation for an algebra of geo-sensor fields, such that query processing can be optimized. We will finally discuss the grand challenge of the transition from a geo-sensor field as a collection of samples to a single function that approximates the field in a very compact fashion.
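The first property — that projection order matters once interpolation enters — can be demonstrated with a deliberately simple stand-in for interpolation, plain averaging over an irregularly sampled field. The sample layout and the averaging are illustrative assumptions, not the data model itself:

```python
from collections import defaultdict

def project(samples, key_index):
    """Collapse one dimension of a geo-sensor field by averaging the values
    grouped on one key (sensor id = spatial, timestamp = temporal).
    Averaging is a deliberately simple stand-in for real interpolation."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[key_index]].append(s[2])
    return {k: sum(v) / len(v) for k, v in groups.items()}

def mean(projected):
    return sum(projected.values()) / len(projected)

# (sensor_id, time, value); sensor "a" samples irregularly (twice at t=0).
samples = [("a", 0, 1.0), ("a", 0, 3.0), ("a", 1, 5.0),
           ("b", 0, 2.0), ("b", 1, 4.0)]

time_then_space = mean(project(samples, 0))  # collapse time per sensor, then space
space_then_time = mean(project(samples, 1))  # collapse space per instant, then time
# With irregular sampling the two projection orders disagree (3.0 vs 3.25).
```

Conceptually both expressions denote the same scalar summary of the field, but the intermediate grouping weights the raw samples differently — exactly the interpolation-dependent discrepancy described above.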



Change Detection and Integration of Topographic Updates from ATKIS to Geoscientific Data Sets Gösseln G. von, Sester M. Institut fuer Kartographie und Geoinformatik, Universitaet Hannover, Appelstr. 9a, 30167 Hannover, Germany, E-Mail: {guido.vongoesseln | monika.sester}@ikg.uni-hannover.de

1. Overview
Solving environmental or geoscientific problems usually involves the integration of different data sources for an integrated analysis. Despite the fact that all geoscientific data sets rely on the same source, the earth's surface, they show significant differences due to different acquisition methods, formats and thematic focus, different sensors, levels of generalisation, and even different interpretations by a human operator. Another problem which occurs while working with different data sets is that of temporal consistency: even if the data sets are originally related to the same objects, different update cycles in the different thematic data sets lead to significant discrepancies. Given this problem it is obvious that harmonisation, change detection and updating of different data sets are necessary to ensure consistency, but hardly practicable when performed manually. The overall project deals with different aspects of data integration, namely the integration of different vector data sets, the integration of vector and raster data, as well as providing an underlying data structure in terms of a federated database, allowing a separate, autonomous storage of the data, which is however linked and integrated by adapted reconciliation functions for analysis and queries on the different data sets (Sester et al., 2003). This paper concentrates on the work of the Institute of Cartography and Geoinformatics (ikg), namely the semantic and geometric integration of vector data: methods


for the automatic integration, change detection and update between heterogeneous data sets. We will present an overview of the work done so far.

2. Used data sets
As specified in (Sester et al., 2003), three data sets are used in this project, all at a scale of 1:25,000:
- ATKIS – the topographic data set,
- GK – the geological map and
- BK – the soil-science map.
In a geographic information system (GIS) a simple superimposition of different data sets is possible and directly reveals visible differences (Fig. 1). These differences can be explained by comparing the creation of the geological map, the soil-science map and ATKIS (von Goesseln & Sester, 2003). While for ATKIS the topography is the main thematic focus, for the geo-scientific maps it is either geology or soil science – however, they are related to the underlying topography. The connection between the data sets has been achieved by copying the thematic information from the topographic to the geo-scientific maps at the point in time when the geological or soil-science information is collected. While the geological content of these data sets will keep its actuality for decades, the topographic information in these maps does not: in general, topographic updates are not integrated unless new geological information has to be inserted in these data sets (Ad-hoc, 1994 & 2002).


Figure 1: Simple superimposition of ATKIS (dark border, hatched) and geological map GK 25 (solid fill).

The update period of the feature classes in ATKIS varies from three months up to one year – in general, 10 % of the objects have to be updated per year (LGN 2003). These differences in acquisition, creation and updating lead to discrepancies, making these data sets difficult to integrate. In order to identify changes in the data sets and update them, the following steps are needed: identification of corresponding objects in the different data sets, classification of possible changes, and finally update of the changes.

3. Data Integration
3.1 Semantic Integration
Firstly, semantic correspondences between these data sets must be identified and described. Enabling the adaptation of updates from one data set to another leads to the problem of integrating heterogeneous data sets. In our case, we are dealing with the most complex type of integration problem according to the classification by Walter & Fritsch (1999): handling data sets which are stored in heterogeneous sources and differ in data modelling, thematic content, acquisition method, accuracy and temporal update.

In the first phase of this project, the topographic feature class »water areas« has been chosen as a candidate for development and testing (others will follow), because of the presence of this topographic element in all data sets. To ensure a correct and fully automatic process, the detection of changes and the correct linking between semantic partners is a must.

3.2 Geometric Integration
Following the semantic integration, differences in the geometric representation have to be identified and processed. Geological and soil-science maps are single-layered data sets which consist only of polygons with attribute tables, while ATKIS is a multi-layered data structure with points, lines and polygons, together with attribute tables. The different data models used in ATKIS and the geoscientific data sets result in further discrepancies in the geometric representation, requiring a harmonisation procedure before links between corresponding objects can be established.

3.3 Harmonisation
Water objects in ATKIS are represented in two different ways: water areas and rivers exceeding a certain width are represented as polygons. Thinner rivers are digitised as lines and are assigned additional attributes referring to classified ranges of widths. The representation of water objects in the geo-scientific maps is always a polygon. These different representations have to be taken into account. Another problem is the representation of grouped objects in different maps, resulting in different relation cardinalities that have to be integrated: 1:1, 1:0, 1:n, and n:m. In a first step, different criteria like area, position and shape are used to identify relations between the data sets, which enables the detection of 1:1 and 1:0 relations. The use of different algorithms to reveal these relations is the main topic of the project at this point in time. Another problem which occurs when comparing the data sets is the partial representation of objects. For this case, different methods of similarity calculation are tested to ensure a proper detection of »partially digitised« objects. These methods calculate the similarity between the geometry of objects in two data sets using points located on the shape of each object. For every point of an object in one data set, a nearest neighbour is searched among the points placed on the shape of the other object. Evaluation of similarity is done by comparing the number of points placed on both objects with the number of points for which a nearest neighbour has been found using the criterion of proximity. Analysing 1:n and n:m relations is done by grouping single objects in one data set and comparing these »test groups« from one data set with a single object (1:n) or another »test group« (n:m) from the other data set. Comparison of two groups, or of a group and a single object, is done by calculating a convex hull for the »test groups«.

4. Change Detection
Objects which have been selected through semantic and geometric integration and have been considered a matching pair will be investigated for change detection.
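The nearest-neighbour similarity measure for partially digitised objects might be sketched as follows. The point-list representation, the symmetric form and the threshold parameter are illustrative assumptions, not the project's exact formulation:

```python
from math import hypot

def similarity(pts_a, pts_b, max_dist):
    """Fraction of boundary points that find a nearest neighbour on the
    other object's boundary within max_dist. Points are (x, y) tuples
    sampled along each object's shape; the measure is made symmetric by
    counting matches in both directions (an illustrative choice)."""
    def matched(src, dst):
        # Count points in src whose closest point in dst is near enough.
        return sum(1 for p in src
                   if min(hypot(p[0] - q[0], p[1] - q[1]) for q in dst) <= max_dist)
    total = len(pts_a) + len(pts_b)
    return (matched(pts_a, pts_b) + matched(pts_b, pts_a)) / total
```

Identical outlines score 1.0; an object digitised only in part scores in between, which is what allows »partially digitised« objects to be flagged.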
A simple intersection of corresponding objects is used for the change detection. Yet, the mentioned differences may cause even more problems which are visible as discrepancies in position,


scale and shape. Therefore, firstly a local transformation is applied, leading to a better geometric correspondence of the objects. To this end, the iterative closest point (ICP) algorithm of (Besl & McKay, 1992) has been implemented to achieve the best fit between the objects from ATKIS and the geoscientific elements using a rigid transformation. The result of this iterative procedure is the best fit between the objects, and a link between corresponding objects in the different data sets is established. Evaluating the transformation parameters allows us to classify and characterise the quality of the matching: in the best case, the scale parameter should be close to 1; also, rotation and translation should not be too large – assuming that the registration of the data sets is good. At this stage of the implementation, ATKIS is taken as the reference; therefore the geometry of the geoscientific data sets is adapted to the reference geometry.

4.1 Intersection and evaluation
After the local adaptation of the objects, the intersection of objects for a proper change detection leads to a more promising result than simple intersection. The analysis and classification of these segmented intersection results into different change situations is a semantic problem and will be conducted in close collaboration with experts from geology and soil science, who are also partners in the project.
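The ICP-based local adaptation described above might look roughly like this minimal 2-D sketch. It is rigid only (rotation and translation, without the scale evaluation the text mentions), and the fixed iteration count and dense nearest-neighbour search are simplifying assumptions:

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation R and translation t mapping point set A onto B
    (Besl & McKay style, via SVD of the cross-covariance matrix)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(A, B, iterations=20):
    """Iteratively match each point of A to its nearest neighbour in B and
    re-estimate the rigid transform; returns A moved onto B."""
    A = np.asarray(A, float).copy()
    B = np.asarray(B, float)
    for _ in range(iterations):
        nn = B[np.argmin(((A[:, None] - B[None]) ** 2).sum(-1), axis=1)]
        R, t = best_rigid(A, nn)
        A = A @ R.T + t
    return A
```

In the real system the resulting R and t (and a scale estimate) would then be inspected to judge the quality of the match, as described above.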
At this time in the project, three different classes have been identified: the intersection segments can be classified according to the respective classifications, in the original data sets, of the areas from which the segments were intersected:
- Type I: Area is defined as water area in both maps; no adaptation required.
- Type II: Area in the geoscientific data set has been any type of soil, but is defined as water area in the reference data set ATKIS; therefore the classification attribute will be changed in the geoscientific map.
- Type III: Area in the geoscientific data set has been water area, but is now updated as a soil type; therefore a new soil classification is required.


Figure 2: Left: results of simple intersection; right: results of intersection after ICP matching and area-threshold filtering.

While Type I and II require only geometric corrections and can be handled automatically, Type III needs more of the operator's attention. The segments resulting from the intersection process will be filtered using a predefined area threshold. Segments below this threshold are taken as geometric discrepancies and will be closed using the attributes of the surrounding neighbours. Areas or segments which are larger than the given threshold are reported to the operator. There are different ways a water area can disappear, through natural (e.g. erosion) or man-made (e.g. refill) processes, which have an influence on the new soil type. This new soil type cannot be derived automatically, but there are different proposals which could be offered to the user by the software. As a result, a visualisation will be produced showing all the areas where an automatic evaluation of the soil situation could not be derived, or only a proposal could be delivered, and manual »field work« must be performed (Fig. 2). The visualisation of Type III segments will reduce the amount of human resources needed to detect the topographic changes between the geoscientific data sets and ATKIS.
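The triage of intersection segments into the three change types, combined with the area-threshold filter, can be sketched as below. The field names, class labels and threshold handling are illustrative assumptions, not the project's data model:

```python
WATER = "water"

def classify(segment):
    """Map an intersection segment to the change types described above.
    `segment` is a dict holding the ATKIS class and the geoscience class
    of the areas the segment was cut from (illustrative field names)."""
    atkis, geo = segment["atkis"], segment["geo"]
    if atkis == WATER and geo == WATER:
        return "I"    # consistent in both maps: no adaptation required
    if atkis == WATER:
        return "II"   # soil -> water: reclassify automatically
    return "III"      # water -> soil: new soil class needed

def triage(segments, min_area):
    """Segments below min_area count as geometric noise and are closed
    automatically, as are Type I/II; Type III goes to the operator."""
    auto, manual = [], []
    for s in segments:
        if s["area"] < min_area or classify(s) in ("I", "II"):
            auto.append(s)
        else:
            manual.append(s)
    return auto, manual
```

Only the `manual` list would then feed the visualisation shown to the operator.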

5. Conclusion
In this paper the ongoing research on semantic and geometric integration has been presented. The selection of the topographic element water, the automatic merging of the segmented objects and the use of the ICP algorithm showed very good results. In the near future the semantic catalogue will be expanded to cover all topographic elements which are represented in each of the three data sets: the German digital topographic map (ATKIS) and the geoscientific maps from geology and soil science. The introduction of punctual and linear elements will enhance the process of geometric integration. Additional discussions with the external geoscientific partners will ensure the creation of a fully functional and automatic process.

5.1 Literature
Ad-hoc AG Boden, 1994. Bodenkundliche Kartieranleitung. Hannover, Germany, pp. 27-44.
Ad-hoc AG Geologie, 2002. Geologische Kartieranleitung – Allgemeine Grundlagen. Geologisches Jahrbuch G 9. Hannover, pp. 19 ff.
Besl, P. & McKay, N., 1992. A Method for Registration of 3-D Shapes, Trans. PAMI, Vol. 14(2), pp. 239-256.



Goesseln, G. v. & Sester, M., 2003. Semantic and geometric integration of geoscientific data sets with ATKIS – applied to geo-objects from geology and soil science. In: Proceedings of ISPRS Commission IV Joint Workshop, Stuttgart, Germany, September 8-9, 2003.
LGN, 2003. ATKIS in Niedersachsen und in Deutschland. In: Materialien zur Fortbildungsveranstaltung Nr. 1/2003, Hannover.
Sester, M., Butenuth, M., Goesseln, G. v., Heipke, C., Klopp, S., Lipeck, U., Mantel, D., 2003. New methods for semantic and geometric integration of geoscientific data sets with ATKIS – applied to geo-objects from geology and soil science. In: Geotechnologien Science Report, Part 2, Koordinierungsbüro Geotechnologien, Potsdam.
Walter, V. & Fritsch, D., 1999. Matching Spatial Data Sets: a Statistical Approach, International Journal of Geographical Information Science 13(5), 445-473.



Advancement of Geoservices - A Graphical Editor for Geodata in Mobile Environments Häußler J., Merdes M., Zipf A. European Media Laboratory GmbH, Schloss-Wolfsbrunnenweg 33, 69118 Heidelberg, Germany, E-Mail: {jochen.haeussler | mattias.merdes | alexander.zipf}@eml.villa-bosch.de

Introduction
A range of GIS companies now offer specialized tools for the visualization and processing of geodata on mobile devices. Most mobile GIS so far focus on geodata viewer functionality. If editing is supported, they rely on off-line synchronization of the data when back in the office. Examples include »ArcPad« by ESRI or »IntelliWhere OnDemand« from Intergraph. Such tools are tuned for particular GIS systems and their server and data formats. Some offer development kits in order to extend these systems, but these are proprietary and their future is not foreseeable. First research prototypes or commercial systems also try to address the issue of on-line editing, but – if at all – they favour proprietary solutions which are not based on OGC standards. In particular, they do not offer standards-based possibilities for the on-line transmission and update of the updated spatial data. Spatial Data Infrastructures are becoming more and more important for the geosciences. Currently there is a range of initiatives aiming at developing spatial data standards at the international (GSDI), European (INSPIRE), national (Geomis.Bund, Geoportal.DE) or regional level (GDI.NRW, GDI Berlin-Brandenburg etc.). While these are not yet completely available, they are based on the same premises as our approach. This is reflected by the following formula that characterizes a Spatial Data Infrastructure (SDI): SDI = Spatial Data + Services + Networks + Standards (adapted definition of the Federal Office for Cartography and Geodesy)

Our approach fits perfectly into this formula, as we aim at developing geo services based on open standards for wireless networks to manage, visualize, edit and acquire spatial data. So, the key components of our system include the ability to view, edit, and acquire geodata on mobile devices with an online connection to the server-based data infrastructures, based on open standards of the Open GIS Consortium and the W3C, which allow for easy integration into future standards-based infrastructures [9]. The importance of OGC standards for mobile geo-services has already been stressed by [8].

Requirements for the Graphical Editor
While the overall architecture of the system is described in [4], here we want to focus on the editor: the editor constitutes the user interface of the mobile data acquisition system. The central element is a map which displays the geodata received from a server, developed by [1]. The usual tools for manipulating the map, getting information about features and editing their attribute data and geometries are required. In order to insert new features into the map, it should be possible to select an object type from a list and (graphically) insert them into the map. This list should be generated generically from the application schema. This generation shall not only take the object types themselves into account but also the associated attributes with their respective data types, ranges of allowed values and so on. It should thus be possible to create appropriate input masks for the attribute data of the various (geographical) object



types involved. The processing will only be finished if the transaction could be executed successfully. For better orientation, the current user position (acquired from GPS) shall be displayed on the map. With a 'follow-GPS' function the visible section of the map can be centred automatically. To minimize the volume of the data requested from the server, a spatial data management component is also necessary: if the user moves the visible map into an area for which no data has previously been fetched from the server, or if she moves physically into such a region, then an appropriate request (OGC getFeature request plus filter encoding) should be sent to the server automatically in order to complete the missing data. To integrate additional functionality for more specific application scenarios, we consider providing an interface for plugins. This way, parts of the additional functionality can be integrated into the core editor while keeping the actual editor component thin. This would also foster the desired reusability of the software application. The following list contains examples of additional functionality which may be activated:
- Triggering of a single measurement or a series of measurements at certain spatial or temporal intervals and insertion of the respective measuring point(s) into the database
- Storing of a series of measurements which had been buffered on a measurement device
- Tools for advanced operations which are processed on the server side

Scalable Vector Graphics (SVG) Based Visualization of Geodata
Also for the visualization of geodata it is desirable to find a solution which is based on open standards. Another criterion is a rendering performance which is acceptable to the end user. This must be achieved by the whole architecture but is also particularly relevant for the interaction with map content. Furthermore, the


client application must remain usable in case of a temporary interruption of the client-server connection. So that the interaction with the geodata and its associated attribute data does not require contact with the server in every case, it is necessary to maintain a semantic description of the spatial structures, their localization, as well as the attribute data. For these reasons it seems necessary to display the geodata not with pixel graphics but with a map based on vector graphics. This also implies considerable advantages for the editing and creation of spatial structures. Scalable Vector Graphics (SVG) is an open format for vector graphics. It is an XML-based specification by the W3C. SVG combines the well-known advantages of XML with those of vector graphics. It also includes a standardized mechanism for embedding arbitrary metadata within SVG documents. This feature can be utilized for the management of geographic attribute data in the client application. The use of SVG for map visualization on mobile devices has also been investigated, e.g. by [6,7,8], but those contributions did not consider OGC standards as interfaces or editor functionality, and still experienced problems with SVG on very small devices like PDAs. A number of commercial as well as free implementations of SVG are available. The most important open source implementation comes from the Apache Software Foundation (ASF) and is called Batik. The Batik toolkit has a number of advantages:
- freely available
- open source, modifiable
- Java-based implementation
- prepared for extension
- stable, mature implementation of large parts of the specification
- support for scripting with ECMAScript
- support for CSS-based styling
With the Batik distribution comes an SVG browser/viewer implementation called Squiggle. Despite its many advantages, this browser application only supports the rendering of SVG documents.
Geometric structures cannot be manipulated and the associated SVG documents cannot be modified. The enhancement of the Batik browser to support generic editing of geodata is a major focus of our ongoing work; an evaluation of these enhancements will provide insight for the final implementation of the editor. Two additional features of the SVG specification are very useful when developing a geodata editing tool: dynamic manipulation of the SVG Document Object Model (DOM) tree with ECMAScript, and specification of style attributes such as colours, fonts, and visibility with Cascading Style Sheets (CSS) syntax. Although not vector graphics features as such, these capabilities make it much easier to move and manipulate geometric objects and to specify styling attributes for rendering the spatial structure of geodata. These styling attributes can be derived from OGC-compliant Styled Layer Descriptor (SLD) documents. In order to use SVG for the display of geodata it is first necessary to convert the GML-based geodata to SVG with embedded or linked metadata. After editing existing geo-objects or creating new ones, these must be converted back to GML- and WFS-compliant formats. This can partially be achieved by the use of XSLT transformations; in the simplest form a single XSLT style sheet for each direction of the transformation can be used. The back transformation is only necessary when the rendered SVG elements representing geo-objects have been edited, moved, or created from scratch. Its result is a WFS-compliant transaction request which can be executed against a Web Feature Server.

Hardware Platform and Implementation Considerations

Conventional commercial solutions for mobile data acquisition often require the purchase of a dedicated hardware device (e.g. Leica GS 20, Trimble GeoExplorer). Our approach is based on the free Java and SVG technologies and does not require expensive specialized hardware but enables us to use standard hardware and operating systems.
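As an illustration of the conversion step described above, the forward direction (GML geometry to SVG path) can be sketched in a few lines of Python instead of XSLT. This is a hypothetical helper, not part of the project software; it assumes a GML 3 `gml:LineString` with a `gml:posList` and omits the coordinate transformation into SVG user space:

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
SVG = "http://www.w3.org/2000/svg"

def gml_linestring_to_svg_path(gml_fragment):
    """Convert a gml:LineString into an SVG <path> element.

    Sketch only: a real converter must also transform world coordinates
    into SVG user space and embed metadata so that edited geometries can
    be converted back into a WFS transaction request.
    """
    line = ET.fromstring(gml_fragment)
    values = line.find(f"{{{GML}}}posList").text.split()
    coords = list(zip(values[0::2], values[1::2]))   # (x, y) pairs
    d = "M " + " L ".join(f"{x},{y}" for x, y in coords)
    return ET.Element(f"{{{SVG}}}path", {"d": d, "fill": "none"})

fragment = ('<gml:LineString xmlns:gml="http://www.opengis.net/gml">'
            '<gml:posList>0 0 10 5 20 0</gml:posList></gml:LineString>')
path = gml_linestring_to_svg_path(fragment)
print(path.get("d"))   # M 0,0 L 10,5 L 20,0
```

The reverse mapping, from an edited `<path>` back to a WFS transaction, would read the `d` attribute and the embedded metadata in the same way.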

The first target platform for our client system is Tablet PCs, which offer a good compromise between processing power, cost and weight. The development will first be carried out on notebooks, as porting the system to Tablet PCs is straightforward.

Future Work

First experiences with the laptop/tablet-PC-based prototype as well as further input from end users will determine whether a port to smaller devices such as PDAs will be considered within the project. A further client for the system, visualizing the geodata in an Augmented Reality environment, is being developed by [2].

References
1. Breunig, Martin; Bär, Wolfgang; Thomsen, Andreas: Services for geoscientific applications based on a 3D geodatabase kernel. In: GEOTECHNOLOGIEN »Science Report« No. 4. Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2004.
2. Wiesel, Joachim; Staub, Guido; Brand, Stephanie; Hering Coelho, Alexandre: Augmented Reality GIS Client. In: GEOTECHNOLOGIEN »Science Report« No. 4. Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2004.
3. Open GIS Consortium: Web Feature Service Implementation Specification. URL: http://www.opengis.org/specs/?page=specs (2004). Open GIS Consortium, 2002.
4. Plan, Oliver; Reinhardt, Wolfgang; Kandawasvika, Admire: Advancement of Geoservices – Development of mobile components and interfaces for geoservices. In: GEOTECHNOLOGIEN »Science Report« No. 4. Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2004.
5. Dietze, Leonhard; Böhm, Klaus: Position Determination of Reference Points in Surveying. In: Zipf, A., Meng, L. and Reichenbacher, T. (eds.) (2004): Map-based mobile services – Theories, Methods and Implementations. Springer, Heidelberg.



6. Breunig, M.; Brinkhoff, T.; Bär, W.; Weitkämpe, J. (2002): XML-basierte Techniken für Location-Based Services. In: Zipf, A., Strobl, J. (eds.): Geoinformation mobil – Grundlagen und Perspektiven von Location Based Services. Wichmann, Heidelberg, pp. 26-35.
7. Brinkhoff, T. (2003): A Portable SVG Viewer on Mobile Devices for Supporting Geographic Applications. In: Proceedings of the 6th AGILE Conference on Geographic Information Science, Lyon, France. Presses Polytechniques et Universitaires Romandes, pp. 87-96.
8. Zipf, A. (2001): Interoperable GIS-Infrastruktur für Location-Based Services (LBS) – M-Commerce und GIS im Spannungsfeld zwischen Standardisierung und Forschung. In: GIS Geo-Informations-Systeme. Zeitschrift für raumbezogene Information und Entscheidungen, 09/2001, pp. 37-43.
9. Zipf, A. (in print): Mobile Anwendungen und Geodateninfrastrukturen. In: Bernard, L.; Fitzke, J.; Wagner, R. (eds.): Geodateninfrastrukturen. Wichmann, Heidelberg.
10. http://inspire.jrc.it
11. http://www.geomis.bund.de
12. http://www.gdi-nrw.org



newSocrates – The Crop Growth, Soil Water and Nitrogen Leaching Model Within the ISSNEW Project Hecker J.-M., Kersebaum K.C., Mirschel W., Wegehenkel M., Wieland R. Centre for Agricultural Landscape and Land Use Research (ZALF) / Institute for Landscape Systems Analysis, Eberswalder Straße 84, 15374 Müncheberg, Germany, E-Mail: {hecker | ckersebaum | wmirschel | mwegehenkel | rwieland}@zalf.de

The implementation of the EU Water Framework Directive requires modelling instruments that meet the assessment and forecasting demands of this directive. Non-point nutrient entries, especially nitrates, are identified as key contaminants from agricultural areas. To meet this demand, the ISSNEW project was established within the German national research programme Information Systems in Earth Management. The main goal of the ISSNEW project is the development of an information and simulation environment capable of providing the requested functionality through an interactive, interlinked modelling system of proven components. The system is characterised by:
- a GIS and a database management system (DBMS) for structuring, retrieval and visualization of geographical data as well as model parameters and other information,
- a component-based modelling system which is linked by
- bidirectional intelligent interfaces.
The modelling components are (i) a 1D plant growth, water flux and nitrate leaching model (newSocrates) covering the crop stand and root zone, (ii) a 1D hydrological model for unsaturated flow (UZmodule) in the vadose zone and (iii) the 3D groundwater transport model (FEFLOW). Here we focus on (i), the 1D model newSocrates.

Software design

Essentially, newSocrates is a redesign of an

approach formerly known as SOCRATES (Mirschel et al., 2002a). The new design, as sketched in fig. 1, is basically platform independent. Within ISSNEW, newSocrates operates as a COM-based library (DLL). The COM interface hides the core of newSocrates and must be substituted on other platforms, which enables cross-platform usability. At runtime the newSocrates DLL is initialised and controlled by FEFLOW. Three databases (fig. 1: blue bubbles) provide the system with (i) scenarios, (ii) basic parameters, and (iii) time-step-related data. On initialization the socScenarioManager constructs a map of topoi: elementary areas that are homogeneous in certain characteristics such as soil type and land use. Each topo holds a so-called layer stack made of multiple layers; these layer objects hold all essential capacity and flux variables, and the process objects (crop growth, water, nitrogen) operate on these variables. The number of topoi and time steps is in principle unlimited. In each time step all topoi are processed. Today three basic process models are implemented: a crop growth, a water and a nitrogen model.

Crop growth model

The crop model is subdivided into two parts. The first part is static: the estimation of crop characteristics at harvest (yield, above-ground



and root biomass), which is realised by using different crop-type-dependent ratio parameters like the shoot-root ratio or the biomass-yield ratio. Based on these static crop characteristics at harvest, the biomass accumulation as the dynamic component (second part) is calculated using the Evolon differential equation approach (Peschel, 1988). Destruction and cooperative growth processes as well as the process velocities (acceleration, deceleration, interruption) are considered. On the basis of daily biomass and nutrient uptake rates, different nutrient contents as well as carbon accumulation within crops can be calculated. Plant variables like ontogenesis, crop stand height, rooting depth, crop cover and crop-specific potential evapotranspiration are calculated as functions of the sum of daily temperatures since sowing. A detailed description of the crop growth model is given by Mirschel et al. (2002b).

Water model

Infiltration, soil water movement and groundwater recharge are simulated with a multiple-layer capacity approach. The infiltration of throughfall (precipitation minus interception) into the topsoil layer and the generation of surface runoff are described by a modified, semi-empirical approach according to Holtan (1961). The actual soil water content and the water fluxes of each soil layer i at time step ∆t are calculated using a non-linear storage routing technique according to Glugla (1969):

PERCi,∆t = λi · (θi,∆t − FCi)²   if θi,∆t > FCi

PERCi,∆t = 0                     if θi,∆t ≤ FCi

where θi,∆t is the volumetric soil water content of layer i at time step ∆t and FCi is the field capacity of layer i, both in mm dm-1 or Vol%; PERCi,∆t is the percolation rate, CAPi,∆t the capillary rise and AETi,∆t the actual evapotranspiration (all in mm d-1); and λi is an empirical storage


parameter of layer i. The water flux across the lower boundary of the soil profile is defined as groundwater recharge. In the presence of groundwater in the soil profile, capillary rise to layer i is calculated according to the German soil classification rules (AG Boden 1994) depending on soil texture, bulk density, distance from layer i to the groundwater table and the soil water content in layer i. More information can be obtained from Wegehenkel (2000).

Nitrogen model

The nitrogen modules were extracted from the integrated nitrogen balance and crop growth model HERMES (Kersebaum, 1995). Submodels are integrated for the processes of net mineralisation, denitrification and nitrate transport within the profile with the water fluxes. The model approach for net mineralisation follows the concept of Stanford and Smith (1972), describing the decomposition of a potentially decomposable organic nitrogen pool with first-order kinetics. The model considers two pools of organic nitrogen differing in their decomposition coefficients. One pool with slowly decomposable organic nitrogen compounds is derived from soil organic matter; the other, easily decomposable pool consists mainly of fresh crop material or organic manure and is derived from yield and yield/residue relations of the previous crop. The release of mineral N from the organic pools is simulated depending on temperature by two Arrhenius functions, modified by a scaling function for the effect of soil moisture. Denitrification is simulated for the topsoil using a Michaelis-Menten kinetic dependent on nitrate content, modified by reduction functions for the effects of water-filled pore space and temperature. Nitrate transport is described with the convection-dispersion equation. Soil moisture and water fluxes required for the nitrogen transformation processes and for transport are supplied by the water module.
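The per-layer water balance and the two-pool mineralisation described above can be sketched as follows. This is a simplified reading of the cited approaches: the quadratic outflow term for the storage routing and every parameter value are illustrative assumptions, not the calibrated model equations:

```python
import math

def water_step(theta, fc, lam, inflow, aet, cap):
    """One daily time step for one soil layer (storages in mm, fluxes in mm/d).

    Assumes a Glugla-type non-linear storage routing in which only water
    above field capacity percolates, with a quadratic rate term.
    """
    theta = theta + inflow + cap - aet          # update the layer storage
    perc = 0.0
    if theta > fc:                              # only excess water drains
        perc = min(lam * (theta - fc) ** 2, theta - fc)
        theta -= perc
    return theta, perc

def temp_factor(t_c, t_ref=20.0, ea=60000.0, r=8.314):
    """Arrhenius-type temperature factor, equal to 1 at t_ref (illustrative)."""
    return math.exp(ea / r * (1 / (t_ref + 273.15) - 1 / (t_c + 273.15)))

def net_mineralisation(n_slow, n_fast, k_slow, k_fast, t_c, moist=1.0, dt=1.0):
    """Daily N release from two organic pools with first-order kinetics,
    modified by temperature and a soil-moisture scaling factor."""
    f = temp_factor(t_c) * moist
    return (n_slow * (1 - math.exp(-k_slow * f * dt))
            + n_fast * (1 - math.exp(-k_fast * f * dt)))
```

For example, a layer at 30 mm with a field capacity of 28 mm after 5 mm infiltration and 2 mm evapotranspiration drains back to field capacity; the returned percolation flux is what the nitrate transport step would consume.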


References
AG Boden (1994): Bodenkundliche Kartieranleitung. 3. Aufl. Schweizerbart, Stuttgart.
Glugla, G. (1969): Berechnungsverfahren zur Ermittlung des aktuellen Wassergehaltes und Gravitationsabflusses. Albrecht-Thaer-Archiv 13, 371-376.
Holtan, H.N. (1961): A concept for infiltration estimates in watershed engineering. U.S. Dept. ARS, 41-51.
Kersebaum, K.C. (1995): Application of a simple management model to simulate water and nitrogen dynamics. Ecological Modelling 81, 145-156.
Mirschel, W.; Kersebaum, K.C.; Wegehenkel, M.; Wieland, R.; Wenkel, K.-O. (2002a): SOCRATES – ein objektorientiertes Modellsystem zur regionalen Abschätzung der Auswirkungen von Landnutzungs- und Klimaänderungen auf Boden- und Pflanzengrößen. In: Wild, K.; Müller, R.A.E.; Birkner, U. (eds.): Referate der 23. GIL-Jahrestagung [Informations- und Qualitätsmanagement – Neue Herausforderungen von Politik und Markt an die Agrar- und Ernährungswirtschaft], Dresden, 18.-20. September 2002. Berichte der Gesellschaft für Informatik in der Land-, Forst- und Ernährungswirtschaft Bd. 15, S. 234-237.
Mirschel, W.; Wieland, R.; Jochheim, H.; Kersebaum, K.C.; Wegehenkel, M.; Wenkel, K.-O. (2002b): Einheitliches Pflanzenwachstumsmodell für Ackerkulturen im Modellsystem SOCRATES. In: Gnauck, A. (ed.): Theorie und Modellierung von Ökosystemen. Shaker Verlag, Aachen, pp. 225-243.
Peschel, M. (1988): Regelungstechnik auf dem Personalcomputer. Verlag Technik, Berlin, 206 pp.
Stanford, G. and S.J. Smith (1972): Nitrogen mineralization potentials of soils. Soil Sci. Soc. Amer. Proc. 36, 465-472.
Wegehenkel, M. (2000): Test of a modelling system for simulating water balances and plant growth using various different complex approaches. Ecological Modelling 129, 39-64.

Figure 1: Sketch of newSocrates components.
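The topo/layer-stack organisation sketched in Figure 1 can be illustrated with a minimal data model. All class and field names here are hypothetical; the actual implementation is a COM-based DLL, not Python:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """Capacity and flux variables of one soil layer (illustrative fields)."""
    water_content: float      # mm
    field_capacity: float     # mm
    nitrate: float = 0.0      # kg N/ha

@dataclass
class Topo:
    """Elementary area, homogeneous in soil type, land use etc."""
    soil_type: str
    land_use: str
    layer_stack: list = field(default_factory=list)

def run_time_step(topos, processes):
    """Each time step, every topo is passed through all process models
    (crop growth, water, nitrogen) operating on its layer variables."""
    for topo in topos:
        for process in processes:
            process(topo)

# A toy "water" process draining 1 mm/d from the top layer:
def water(topo):
    topo.layer_stack[0].water_content -= 1.0

t = Topo("loam", "winter wheat", [Layer(water_content=30.0, field_capacity=28.0)])
run_time_step([t], [water])
print(t.layer_stack[0].water_content)   # 29.0
```

The number of topoi and of registered processes is unconstrained, mirroring the "in principle unlimited" design described in the text.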



Implementation of a Spatial Data Infrastructure for the Derivation of Geoinformation from Distributed Geodata Based on OGC-compliant Web Services Heier Ch. Blumenstr. 44, 44147 Dortmund, Germany, E-Mail: christian.heier@gmx.de

This work covers the conceptual design, development, implementation and testing of a prototypical system architecture for the dynamic generation of geoinformation from distributed geodata. The raw data were held in several ORDBMSs installed on Windows as well as Linux operating systems. In order to access the geodata, several OGC-compliant Web Feature Services (WFS) and Web Coverage Services (WCS) have been established. Furthermore, an OGC-compliant Web Feature Service (WFS) has been developed which is capable of retrieving geodata from distributed sources and processing it according to predetermined rules. Finally an OGC-compliant Web Map Service (WMS) has been integrated to


visualize the results. The user interaction and the display of the results can be handled via any fourth-generation web browser. The concept for determining groundwater vulnerability after HÖLTING serves as a geoscientific example for evaluating the software setup; it is a typical combination problem solved with the help of geoinformation systems. Within the scope of a diploma thesis, a contribution to the multiple use and reuse of geodata inventories is thus made, and the added value of already available geodata is enhanced significantly by the use of web technologies.
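Hölting's method rates the protective effectiveness of the strata above the groundwater table by summing point scores of the individual layers, which is why it suits a rule-based processing service so well. A rough per-cell sketch of such a combination follows; the class thresholds are illustrative, not Hölting's actual rating table:

```python
def protective_effectiveness(layer_scores):
    """Combine the point scores of the overlying strata for one grid cell.

    Illustrative sketch of a Hoelting-style point-count scheme; the real
    service derives the scores from its distributed WFS/WCS input layers.
    """
    total = sum(layer_scores)
    if total >= 4000:
        return "very high"
    if total >= 2000:
        return "high"
    if total >= 1000:
        return "medium"
    return "low"

# Applied cell by cell to (here hard-coded) score rasters:
scores = [[500, 2500], [1200, 4100]]
classes = [[protective_effectiveness([v]) for v in row] for row in scores]
print(classes)   # [['low', 'high'], ['medium', 'very high']]
```

In the architecture described above this rule evaluation would sit inside the processing WFS, between the data-access services and the WMS that renders the resulting classification.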


Data Management Approach for Marine Data Within a Geodatabase – Project MAR_GIS Jerosch K., Schlüter M., Schäfer A. Alfred-Wegener-Institute for Polar and Marine Research (AWI), Am Handelshafen 12, 27570 Bremerhaven, Germany, E-Mail: {kjerosch | mschlueter | aschaefer}@awi-bremerhaven.de

Introduction

MAR_GIS aims at the conjunction of Geo-Information Systems (GIS), research data, and multivariate geostatistical techniques. The goal is to characterise and identify distinct provinces of the seafloor in marine and coastal zones, including spatial budgets of geological and biogeochemical cycles, based on the combination of several information layers. Currently, the multitude of information about marine geological, geochemical, and biological data and spatial patterns, as well as economic use in coastal areas and along continental margins, is increasing tremendously. MAR_GIS meets the need for a generalised analysis and

synthesis of seafloor data of the North Sea, the Baltic Sea and the continental margin of the Norwegian Sea. Finally, the multitude of detailed information will be structured within a geographical information system which can be accessed easily by users. The increasing amount of point data and meta information about marine research required the creation of a relational database management system (rDBMS). In our case an rDBMS was developed using MS Access and MS SQL Server which meets the specific demands of marine information, covering both biotic and abiotic data. The rDBMS structure allows a flexible analysis for several expert questions on

Figure 1



the one hand, but furthermore it provides a basis to integrate the database into the GIS. Thus the rDBMS builds the interface for a successful coupling of point data and digital maps – the first requirement for creating a spatial geodatabase. A geodatabase in general is a geographic database hosted inside an rDBMS. It provides services for geographic data and supports a model of topologically integrated feature classes and other object-oriented features. Geographic data include all data with coordinates: raster data sets, e.g. remote sensing data or geo-referenced images as mosaics, as well as field data and planar topologies, e.g. digitized maps based on vectors. Within the MAR_GIS project the ESRI ArcGIS applications are used. ArcSDE (Spatial Data Engine) defines an open interface between applications such as ArcMap or ArcCatalog and the database system and allows geographic information to be managed on a variety of different database platforms – Microsoft SQL Server in our case.

Data and meta data acquisition

The data sets are provided by several national and international databases as well as by national and international projects. Large and small datasets, mostly well organized but sometimes random, had to be checked and formatted. Problems had to be solved during the integration of the large datasets into the MAR_GIS database structure due to the different methods of data and metadata management used within the institutes. Thanks to all the supporting institutes and institutions listed below.

Preliminary data sources:
AWI – Alfred-Wegener-Institute for Polar and Marine Research
BGS – British Geological Survey
BODC – British Oceanographic Data Centre
BfN – Federal Nature Conservation Agency
BSH – Federal Maritime and Hydrographic Office
DOD – German Oceanographic Data Centre


GEUS – Geological Survey of Denmark and Greenland
ICES – International Council for the Exploration of the Sea
IfM – Institute of Marine Research, University of Hamburg
IfÖ/BFA-Fi – Institute for Fishery Ecology / Federal Research Centre for Fisheries
ISH/BFA-Fi – Institute for Sea Fisheries / Federal Research Centre for Fisheries
MUDAB – Marine Environmental Database of BSH and UBA (Federal Environmental Agency)
SBS/UWB – School of Biological Sciences, University of Wales, Swansea and Bangor
TNO-NITG – Netherlands Institute of Applied Geoscience TNO – National Geological Survey

Creating a database design in Microsoft Access and integration of available data

Designing a database is a fundamental process that requires planning and revision. The database structure (shown in fig. 2 in a condensed version) had to be adapted to the way marine data are collected and to all properties of the different kinds of data types, biotic and abiotic. The compilation of marine data is still carried out by expeditions of research vessels and is often organized within international projects. In a research area, stations are determined according to scientific criteria and visited by the vessel and its crew. At the stations samples are taken, and very often the samples, e.g. sediment cores, are subdivided into series of measurements analysing diverse parameters. For each parameter one measurement is required. Each sample is accurately defined by its location (position), point of time (time and date) and sampling depth. One of the most important tasks with regard to the use of the database (visualization of queries within the GIS) is the positioning. For example, the database has to mirror the fact that fishery data often have two pairs of coordinates (setting and hauling of the gear) belonging to the same sample, while in other cases only one pair of coordinates is given. Besides this, it is nearly impossible to sample the same sample site


(position) twice at sea. The following key words form the columns of the marine database: expedition, station, sample (combined to »positioning« in fig. 2), series of measurements, parameter and measurement. Up to the sample level all data are contiguous; then the data are split thematically. At the sample level it is possible to include biotic as well as abiotic data and to bring them back together easily for a certain station (query function). The metadata (persons in charge, literature, ...) and the phylogeny establish two further blocks. The integration of different data sets from several institutes implicates a lot of formatting work and questions of content, which is frequently underestimated. Attributes have to be converted into the common language of our data management, and tables have to be created and added to the existing ones.

Figure 2

Building a geodatabase and integration of all available data

A geodatabase supports an object-oriented vector data model. In this model, entities are represented as objects with properties, behaviour, and relationships (comparable to the rDBMS in Microsoft Access). Support for a variety of different geographic object types is built into the system. These object types include simple objects, geographic features (objects with location), network features (objects with geometric integration with other features), annotation features, and other more specialized feature types. The model allows relationships between objects to be defined, together with rules for maintaining the referential integrity between objects. After a good data model design and database tuning within ArcGIS, the geodatabase is ready to serve as a multiuser GIS system. ArcCatalog has various tools for creating and modifying the geodatabase schema, while ArcMap has tools for analyzing and editing the contents of the geodatabase (cf. (3)).

The geodatabase provides a stable platform even with large datasets (as in our case) and allows digitized maps (sediments, bathymetry, currents, geological processes, ...) and field data to be analysed together in one query, optimizing the use of the available data. Furthermore, the database establishes the basis for the implementation of geostatistical and multivariate statistical applications and finally for the definition of distinct provinces at the seafloor.

References
(1) Adam, N.R. and A. Gangopadhyay (1997): Database issues in geographic information systems. Kluwer international series on advances in database systems; 6, Boston.
(2) Günther, O. (1998): Environmental information systems. Berlin: Springer.
(3) MacDonald, Andrew (2001): Building a Geodatabase. ESRI Documents. USA.



Promoting Use Cases to the Geoservice Groundwater Vulnerability Kappler W. (1), Kiehle Ch. (2), Kunkel R. (3), Meiners H.-G. (1), Müller W. (1), Betteraey F. van (1) (1) ahu AG Wasser Boden Geomatik, Kirberichshof 6, 52066 Aachen, Germany, E-Mail: {w.kappler | g.meiners | w.mueller | f.v.betteray}@ahu.de (2) RWTH Aachen, Lehrstuhl für Ingenieurgeologie und Hydrogeologie (LIH), Lochnerstr. 4-20, 52064 Aachen, Germany, E-Mail: kiehle@lih.rwth-aachen.de (3) Forschungszentrum Jülich, Systemforschung und Technologische Entwicklung (STE), 52425 Jülich, Germany, E-Mail: r.kunkel@fz-juelich.de

Introduction

The objective of this project is to set up a geodata infrastructure for the scale-based derivation of groundwater vulnerability from heterogeneous and distributed data. This geodata infrastructure should enable users from the field of geosciences to generate geoinformation interactively. In order to coordinate functionalities and user requirements, the first step in the development of the software is a requirements analysis. The present paper describes the approach and the contents of the requirements analysis as the first step of the IT implementation and as a basis for discussion with later users.

Requirements Analysis

The requirements analysis specifies the expert, technical and user-specific requirements of the groundwater-vulnerability geodata infrastructure to be developed. This comprises the following elements:
- coordination of product features with the user,
- setting system constraints,
- provision of basic information for planning the technical contents of the iterations,
- provision of basic information for the estimation of time and costs,
- and the definition of the user interface.
The results of the requirements analysis are described in the use-case document, the vision document and the glossary. Within the framework

of the evolutionary development of the system (spiral model of software development, Boehm 1986), these documents are subject to continual revision and enhancement. The current status of the requirements analysis for the present project is described below.

Use Cases

A use case is a set of system activities from the user's perspective which lead to a perceptible result for the user (Oestereich 1998). Use cases are essential artefacts for the specification of the requirements of a system. In the current development phase the following five use cases have been specified from the user's perspective:
1. Login: user login to the system.
2. Direct visualisation of the input data: visualisation of the input data as a map, without calculating the groundwater vulnerability.
3. Standardised derivation of the groundwater vulnerability: the system proposes an appropriate computation process and suitable geodata for deriving the vulnerability for the selected question (in particular for the selected scale), automatically calculates the vulnerability and visualises the results.
4. Variant calculation of the groundwater vulnerability on the basis of varying elementary data and visualisation



of the results.
5. Comparison of the results of the calculation: comparison of the results of the different variants of the vulnerability calculation by forming and visualising differences.
The use cases described will be validated and concretised in a user workshop in March 2004. It can be assumed that other use cases will arise from the data provider's perspective, for example the provision of new data and services in the system. The results from the workshop will be reported at the status seminar.

Vision Document

The vision document comprises a detailed verbal description of the requirements of the geodata infrastructure. The following points are concretised:
- positioning of the system,
- description of stakeholders and users,
- product capabilities,
- product features (product performance to meet the user requirements; see below),
- constraints and dependencies,
- quality requirements,
- priorities (as regards the realisation of product features).
An important part of the vision document are the product features which are necessary to realise the use cases. The following product features were specified as mandatory:
- user administration
- mapping module
- catalogue service
- access to geodata
- geo-processor
- statistics module
- diagram module
- difference assessment
- quality assessment
These product features have priority. Depending on the scope of the function, they can be developed with varying degrees of input.


Table 1 shows the close relation between the use cases and the product features. If the requirements are modified, the product features concerned will have to be identified and adjusted accordingly. The use cases require the system to be able to search for, process and present geodata. The specification of further use cases may require product features which are directly coupled to other product features.

Further Procedure

Based on the results of the first user workshop in March 2004, the product specifications on hand (in particular use cases and product features) will be reviewed and concretised. Following that, the individual system services will be developed, and a new alignment with the user requirements will then follow. The system services will be integrated in the prototypes of the geodata infrastructure and transferred into pilot operation.

References
Balzert, Helmut (2000): Lehrbuch der Software-Technik – Software-Entwicklung. 2. Aufl.; Heidelberg, Berlin (Spektrum Akademischer Verlag).
Oestereich, Bernd (1998): Objektorientierte Softwareentwicklung – Analyse und Design mit der Unified Modeling Language. 4. Aufl.; München, Wien (R. Oldenbourg Verlag).
Boehm, B.W. (1986): A Spiral Model of Software Development and Enhancement. Software Engineering Notes 11: 22-42.


Table 1: Relation: Use Case - Product Function.



Typology of the North Sea by Means of Geostatistical and Multivariate Statistical Methods Pesch R., Schröder W. Institute for Environmental Science, University of Vechta, Oldenburger Str. 97, 49377 Vechta, Germany, E-Mail: {rpesch | winfried.schroeder}@ispa.uni-vechta.de

Background and objectives

By helping to support marine management issues (e.g. the installation of offshore wind power plants, seafloor cable deployments, sand and gravel dredging, or the declaration of protection zones), the classification (or typology) of the seafloor is one prerequisite for its upcoming economic use. In terrestrial geoscience, different thematic maps like geological, pedological or ecological maps already serve as digital instruments for public, economic and scientific planning needs. Using multivariate statistical methods, Schmidt (2002) produced a map of German ecoregions from surface data layers of the potential natural vegetation, altitude, soil texture and climatic parameters. These ecoregions were then used to optimise environmental measurement networks like the immission and soil monitoring networks of the federal states, as well as a Germany-wide biomonitoring network called the »Metals in Mosses Survey Programme« or »Moss-Monitoring« (Schröder et al. 2001, Schröder et al. 2003). The superordinate objective of the BMBF project MAR_GIS is the development of a general concept for the analysis of spatial data, including a typological approach suitable to identify different provinces of the seafloor. Compared to the increasing amount of data and information from marine research, only very few concepts and techniques are applied for the optimal utilisation of present and upcoming data sets. The typology approach used in MAR_GIS should be based on traditional GIS-


techniques as well as geostatistical and multivariate statistical methods. Up to now several thematic surface data layer (e.g. sediment data maps) as well as an extensive amount of biotic (e.g. data on benthos organisms) and abiotic (e.g. data on parameters like organic carbon content, grain size, salinity, temperature or dissolved oxygen) measurement data have been acquired from national and international authorities. They serve as the database to perform the classification of the sea floor of the North Sea. Statistical Methods With geostatistical methods like variogram analysis and Kriging procedures measurement data on marine physical or chemical parameters can be spatially generalized to grid data or contour plots. The spatial autocorrelation of the measurement values as well as directional dependencies can be examined and modeled by variogram analysis. The resulting model variograms are needed when Kriging is chosen to estimate the spatial pattern of the respective parameter without geographical gaps. Several Kriging options exist (e.g. Ordinary Kriging, Indicator Kriging). All of them have in common that they minimise the estimation variance. Besides traditional GIS-techniques multivariate statistical methods like cluster analysis, CART (Classification and Regression Trees) or CHAID (Chi square Automatic Interaction Detection) are used to aggegate characteristic grid data sets to sea floor provinces. Whereas hierarchi-


cal and partitioning cluster procedures like the Ward or the K-Means method only allow to aggregate metric data, CART or CHAID can be used to group data of both metric and nonmetric scale dignity. In marine geoscience data acquisition includes measurements of both scale types. An example for non-metric data sets are classified raster data like sediment types and facies. Examples for metric data sets are grain size or marine physical-chemical parameters like salinity that are measured at distinct sites. Methodical approach Different possibilities exist to calculate sea floor provinces for the North Sea. This at first depends on the scientific, economic or planning objectives the sea floor typology is needed for. For economic planning purposes like sand and gravel dredging one may concentrate on optimising existing sediment maps by means of advanced geostatistical methods like Stratified Kriging, Fuzzy Kriging or Simple Updating Kriging (Bardossy et al. 1998, Usländer 2003) and then intersecting the optimised sediment maps with bathymetric data in a GIS. To declare marine protection zones a sea floor typology should also meet ecological demands. Therefore biological data like the abundancies of benthic organisms or biomass as well as abiotic data like sediment data or measurement data on salinity, temperature or dissolved oxygen should be used to classify the sea floor. This corresponds to ecoregions as known in terrestrial geoscience. Until now two different classification options were devoloped in MAR_GIS. Both of them use surface data derived by geostatistical analysis. On the one hand multivariat statistical cluster procedures are used to aggregate geostatically estimated surface data on salinity, temperature, dissolved oxygen and grain size data as well as bathymetric data to ecological sea floor provinces for the North Sea. Two cluster techniques are applied: the hierarchical Ward algorithm as well as the partitioning cluster method K-Means. 
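The cluster-based option can be sketched in a few lines of Python. The grid layers below are synthetic stand-ins for the geostatistically estimated surfaces, and the plain K-Means loop is illustrative only, not the project's actual implementation:

```python
import numpy as np

# Synthetic stand-ins for surface layers on a common grid, e.g. salinity,
# temperature, dissolved oxygen, grain size, bathymetry (assumed values,
# not Mar_GIS data).
rng = np.random.default_rng(1)
cells = rng.normal(size=(500, 5))          # 500 grid cells x 5 layers

# Standardise the layers so no single parameter dominates the Euclidean
# distance used by the clustering.
z = (cells - cells.mean(axis=0)) / cells.std(axis=0)

def kmeans(data, k=4, iters=50, seed=0):
    """Plain K-Means: assign every grid cell to one sea floor province."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Recompute centres; keep the old centre if a cluster runs empty.
        centers = np.array([data[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

provinces = kmeans(z)                      # one province label per grid cell
```

In practice the hierarchical Ward algorithm would be run analogously on the same standardized layers; only the aggregation criterion differs.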
By intersecting the resulting clusters with the geostatistically extrapolated measurement data, it is examined whether these provinces differ significantly in terms of salinity, temperature, dissolved oxygen, bathymetry and grain size. By furthermore intersecting the clusters with biotic data on benthic communities, it can be checked whether the clusters exhibit characteristic biological properties. In the second alternative, the CART method is used to classify data sets of mixed scale. Here it does not matter whether the data are of nominal, ordinal or metric scale. CART studies the relationship between a dependent variable and a series of predictor variables (Breiman et al. 1984). It applies decision trees to display class memberships by recursively partitioning a heterogeneous data set into more homogeneous subsets by means of a series of binary splits. To derive ecological sea floor regions, benthic data (benthic communities or abundances of benthic organisms) are used as dependent variables. Metric data on salinity, temperature, bathymetry and dissolved oxygen as well as nominal data like sediment types are taken as independent variables.

Literature
BÁRDOSSY, A.; HABERLANDT, U. & GRIMM-STRELE, J. (1998): Interpolation of Groundwater Quality Parameters Using Additional Information. In: geoENV I – Geostatistics for Environmental Applications (ed. A. Soares), Kluwer Academic Publishers, Dordrecht, pp. 189-200
BREIMAN, L.; FRIEDMAN, J.H.; OLSHEN, R.A. & STONE, C.J. (1984): Classification and Regression Trees. Pacific Grove, California: Wadsworth and Brooks.
SCHMIDT, G. (2002): Eine multivariat-statistische Raumgliederung Deutschlands. Dissertation, Hochschule Vechta. dissertations.de – Berlin
SCHRÖDER, W.; SCHMIDT, G.; PESCH, R.; MATEJKA, H. & ECKSTEIN, TH. (2001): Konkretisierung des Umweltbeobachtungsprogrammes im Rahmen eines Stufenkonzeptes der Umweltbeobachtung des Bundes und der Länder. Teilvorhaben 3. Vechta (Umweltforschungsplan des Bundesministers für Umwelt, Naturschutz und Reaktorsicherheit. Abschlussbericht FuE-Vorhaben 299 82 212 / 02, im Auftrag des Umweltbundesamtes)
SCHRÖDER, W.; SCHMIDT, G. & PESCH, R. (2003): Harmonization of Environmental Monitoring. Instruments for the Examination of Methodical Comparability and Spatial Representativity. Gate to EHS / Environmental and Health Science / Environmental Science / GIS and Remote Sensing, 24th July 2003 [DOI: http://dx.doi.org/10.1065/ehs2003.07.010]
USLÄNDER, T. (2003): Benutzerhandbuch SIMIK+ ArcView-Erweiterung Version 1.0 zur flächenhaften Darstellung der Grundwasserbeschaffenheit. Fraunhofer Institut für Informations- und Datenverarbeitung, Karlsruhe

72


Advancement of Geoservices – Design and Prototype Implementation of Mobile Components and Interfaces for Geoservices
Plan O., Reinhardt W., Kandawasvika A.
AGIS - GIS lab, University of the Bundeswehr Munich, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany, E-Mail: {oliver.plan | wolfgang.reinhardt | admire.kandawasvika}@unibw-muenchen.de

Abstract
This paper describes the conceptual and functional aspects of a GIS client for mobile data acquisition. This includes research and prototype implementations on concepts of online access to spatial databases, manipulation of features and schemas, and quality assurance. The project is part of the joint research project »Advancement of Geoservices«, which is funded by the German Ministry of Education and Research (BMBF).

1 Introduction
The aim of the project »Advancement of Geoservices« is to show how the geosciences and other user communities can benefit from recent advancements in the fields of wireless communication technologies, the Internet, client/server computing and information technology in general. The use of mobile Internet technology in combination with geospatial services makes it possible to shorten workflows in measuring campaigns and provides the user with real-time geospatial information in the field (see also [4]). The main advantages of this approach are:
- on-site access to geospatial information in real-time
- mobile data acquisition with immediate update of remote databases
- in-situ quality management
Starting from the user scenario given in [2], the following user requirements are derived. Chapter 3 describes the functionality of the

system which is needed to fulfill the user requirements. Chapter 4 covers some results of the market survey that has been carried out in the project. The conceptual architecture of the system is outlined in Chapter 5.

2 User requirements
As shown in Figure 1, the user wants to access geospatial information sources in the field without prior downloading of the data. This offers the possibility to access information in real-time without the necessity to synchronize mobile and stationary data (i.e. data residing on a geospatial server). This allows for great flexibility in the user's work, provided that wireless LAN hotspots, UMTS, GSM/GPRS or other communication technologies are available. This architecture also enables access to any information source provided by a geodata infrastructure that might be of interest for the current application. In this project, one of the main issues is the acquisition of spatial and non-spatial data by means of sensors and other measuring means. Therefore an interface is needed which allows the integration of position data from geodetic instruments like GPS receivers or total stations. Often proprietary protocols are used in this area, which means an interoperable communication layer has to be established. Note also that analogue measurements and readings have to be integrated by means of a human interface. For this reason, forms are needed to add attribute information to features.



This feature is closely associated with the quality assurance issue: every transaction has to be checked against well-known quality parameters like consistency, completeness, correctness and accuracy. In addition to the above-mentioned functionality, visualization and analysis of the data should also be possible for the user in the field (refer to [6]).
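Such schema-driven quality checks can be sketched as follows; the attribute names, constraint structure and accuracy thresholds are invented for illustration and are not taken from the project:

```python
# Hypothetical attribute constraints as they might be derived from an
# application schema; names and thresholds are illustrative only.
CONSTRAINTS = {
    "tree_height": {"mandatory": True, "max_stddev": 0.5},  # metres
    "species":     {"mandatory": True},
    "remarks":     {"mandatory": False},
}

def check_feature(attrs, stddevs):
    """Return a list of quality violations for one acquired feature."""
    problems = []
    for name, rule in CONSTRAINTS.items():
        if rule.get("mandatory") and name not in attrs:
            problems.append(f"missing mandatory attribute: {name}")
        limit = rule.get("max_stddev")
        if limit is not None and stddevs.get(name, 0.0) > limit:
            problems.append(f"accuracy of {name} worse than {limit}")
    return problems

# A feature missing its height, with a height measurement that is also
# too imprecise, yields two violations.
issues = check_feature({"species": "oak"}, {"tree_height": 0.8})
```

In the actual client such rules would be generated from the downloaded application schema rather than hard-coded.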

Figure 1: User requirements.

3 Components of the system
The client being developed is connected to the server by means of standardized protocols. In this case, WFS/HTTP [3] is used to access and update vector data sources. Beyond that, additional services providing multidimensional data are being developed in [1]. In both cases, GML will be used as the encoding standard for spatial information. Regarding ISO 19109 – »Rules for application schema« in conjunction with ISO 19107 – »Spatial schema«, the application schema should be open for different spatial domains. This means the system should be able to use different application schemas described by any XML Schema. For that reason, the client must be capable of downloading an application schema at runtime. Another mandatory functionality is the acquisition of new measurements in the field. These measurements include position and several attributes of features specified by the established application schema. To assure that the collected data conforms to the given application schema, forms have to be generated by means of the application schema. These forms (see e.g. [5]) stipulate which kinds of attributes are mandatory or optional and which accuracy has to be achieved. Further information on the editor is given in [6].

4 Market survey
In this part of the project, a survey identified the current state of technology in computer hardware, sensor hardware (i.e. PDAs, GPS receivers, total stations) and communication


fields. The main issues looked at were, among others, interfaces (e.g. cable, wireless) and protocols (standard or proprietary) implemented by this equipment. In short, the survey has shown a remarkable trend towards wireless applications. This trend fits very well with the needs of mobile or field workers, who may need to create their own personal networks (e.g. via Bluetooth) in which, for example, a PDA, laptop, GPS receiver and total station (i.e. data acquisition equipment) are connected to one another wirelessly while the user can still roam or move freely during data acquisition. It is also interesting to note that many laptops, tablet devices and digital assistants support current communication interfaces such as WLAN 802.11x, Bluetooth or both. Also, many public hotspots have been and are still being installed in cities. At the time of writing, UMTS is in the testing phase in Germany. However, despite the technological advancement towards wireless, cable technology (i.e. serial interfacing) will continue to exist in order to complement wireless technology.
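To make the standardized client-server communication described in section 3 concrete: a WFS 1.0.0 GetFeature request can be expressed as a simple key-value-pair URL. The endpoint and feature type name below are hypothetical, not the project's actual service:

```python
from urllib.parse import urlencode

# Hypothetical WFS endpoint and feature type; real services differ.
BASE = "http://example.org/wfs"

def getfeature_url(typename, bbox, srs="EPSG:4326", max_features=100):
    """Build a WFS 1.0.0 GetFeature request URL (KVP encoding)."""
    params = {
        "SERVICE": "WFS",
        "VERSION": "1.0.0",
        "REQUEST": "GetFeature",
        "TYPENAME": typename,
        "BBOX": ",".join(f"{c:g}" for c in bbox),
        "SRSNAME": srs,
        "MAXFEATURES": str(max_features),
    }
    return BASE + "?" + urlencode(params)

# Request survey points within a bounding box around the German Bight.
url = getfeature_url("app:SurveyPoint", (7.0, 53.0, 9.0, 55.0))
```

The server answers such a request with a GML feature collection, which the mobile client parses against the downloaded application schema.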

5 Architecture
The architecture of the overall system is shown in Figure 2. It shows the main components of the server developed by the University of Vechta. Currently, two clients are planned to be connected to these services. The first one is the virtual reality client of the University of Karlsruhe. The second is the data acquisition client developed jointly by AGIS (University of the Bundeswehr Munich) and EML (European Media Laboratory). The core functionality, like establishing data connections to services, transfer of data schemas and data manipulation, is carried out by AGIS. In addition, an interoperable communication layer based on XML is being investigated to communicate with sensor instruments. The data acquisition process is supported by methods of quality assurance. The graphical user interface is being developed by EML. This includes functions for presentation and user interaction. Besides that, support for other user interface components is planned. This means that the user should be able to control a measurement instrument (e.g. trigger it to fire measurements) from the user interface and should be able to enter

Figure 2: Architecture of the system.



attribute information in dynamically generated user forms.

6 Conclusion
In this paper, some use cases for the usage of a mobile client have been outlined. According to the scenario given in [2], some concepts of the architecture for the mobile client have been described. Furthermore, the functionality of the mobile client realized by the project partners EML and AGIS has been outlined. At this time, the feasibility of the concepts is being verified by means of prototype implementations.

7 References
[1] Breunig, Martin; Bär, Wolfgang; Thomsen, Andreas: Services for geoscientific applications based on a 3D geodatabase kernel. In: GEOTECHNOLOGIEN »Science Report« No. 4. Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2004
[2] Wiesel, Joachim; Staub, Guido; Brand, Stephanie; Hering Coelho, Alexandre: Augmented Reality GIS Client. In: GEOTECHNOLOGIEN »Science Report« No. 4. Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2004
[3] Open GIS Consortium: Web Feature Service Implementation Specification. URL: http://www.opengis.org/specs/?page=specs (2004). Open GIS Consortium, 2002
[4] Open GIS Consortium: OpenGIS Location Services (OpenLS): Core Services [Parts 1-5]. URL: http://www.opengis.org/specs/?page=specs (2004). Open GIS Consortium, 2004
[5] W3C: XForms 1.0 W3C Recommendation. URL: http://www.w3.org/TR/xforms/ (2004). W3C Consortium, 2004
[6] Häußler, Jochen; Merdes, Matthias; Zipf, Alexander: A Graphical Editor for Geodata in Mobile Environments. In: GEOTECHNOLOGIEN »Science Report« No. 4. Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2004



Marine Geo-Information-System for Visualisation and Typology of Marine Geodata (Mar_GIS)
Schlüter M. (1), Schröder W. (2), Vetter L. (3)
(1) Alfred-Wegener-Institut fuer Polar- und Meeresforschung, Am Handelshafen, P.O. Box 120161, 27515 Bremerhaven, Germany, E-Mail: mschlueter@awi-bremerhaven.de
(2) Institut fuer Umweltwissenschaften (IUW) und Forschungszentrum für Geoinformatik und Fernerkundung, Hochschule Vechta, Postfach 1553, 49364 Vechta, Germany, E-Mail: wschroeder@iuw.uni-vechta.de
(3) Fachbereiche Geoinformatik sowie Umweltplanung, Fachhochschule Neubrandenburg, Postfach 11 01 21, 17041 Neubrandenburg, Germany, E-Mail: vetter@fh-nb.de

Extended Abstract
Environmental, economic, and scientific interests in marine coastal environments and ocean margins have increased considerably in recent years. Key words in this context are: benthic habitats, fishery, wind energy, offshore oil and gas reservoirs, distribution pipelines, or slope stability. From this perspective, there is a demand for geoinformation describing the marine environment of the seafloor and water column to suit scientific needs as well as planning strategies. The compilation of such marine data sets for the North and Baltic Sea and for parts of the Norwegian Sea belongs to the tasks of the project Mar_GIS. Furthermore, multivariate geostatistical techniques are applied to characterise and identify distinct provinces at the seafloor. Spatial budgets of such geological, biological and biogeochemical entities will be derived. These geodata as well as the typological approach to identify environmental entities will be distributed via the Internet.

Within the first phase of the project we focused on the North Sea and – to a lesser extent – on the Baltic Sea for data compilation. An extensive data set was integrated into the geodatabase of ArcGIS 8.3 (ESRI). Data compilation was supported by co-operation with several scientists and national and international institutions, such as the BSH (Federal Maritime and Hydrographic Office), BfN (Federal Nature Conservation Agency), DOD (German Oceanographic Data Centre), UBA (Federal Environmental Agency), BODC (British Oceanographic Data Centre), GEUS (Geological Survey of Denmark and Greenland), ICES (International Council for the Exploration of the Sea), BFA-Fi (Federal Research Centre for Fisheries), IfÖ (Institute for Baltic Sea Fisheries), ISH (Institute for Sea Fisheries), the School of Biological Sciences, University of Wales (Swansea and Bangor, UK), and TNO-NITG (Netherlands Institute of Applied Geoscience). An overview of some of the marine geodata incorporated into the GIS will be given. The geodata comprise raw data as well as thematic maps, calculated by geostatistical techniques or (in most cases) scanned, georeferenced, and digitized within the project. This includes data and maps on bathymetry, sedimentology, water chemistry, benthic biology, bottom water currents, and presently commercially used areas as well as nature protection areas. For the management of such data, a data model for a Relational Database Management System (RDBMS) was developed and will be refined during the progress of the project. The classification (typology) of the seafloor, an approach well established in terrestrial geoscience, is one of our objectives. Examples of this typological approach, combining multivariate statistics and geostatistical means for the assignment of areas of the seafloor to types, are presented.



Advances in marine technology will significantly increase the availability of marine geodata, at least for selected target areas of the coastal zone and ocean margins. Such technologies include ROVs and AUVs (Remotely Operated Vehicles and Autonomous Underwater Vehicles). These devices are able to operate for several hours or a few days in water depths of more than 3000 m and are equipped with high-resolution video systems, physical-chemical sensors, and acoustic multibeam sensors for micro-bathymetric and shallow seismic surveys. The RDBMS applied in Mar_GIS will be refined to cope with the different types and volumes of these ROV- and AUV-based data. One example of such data are the video mosaics derived during a cruise with the ROV Victor (IFREMER). In total, more than 3500 georeferenced mosaics (applying the MATISSE software of IFREMER) were obtained for biogeochemical »habitat« mapping at the Haakon Mosby Mud Volcano (ocean margin of the Barents Sea). Visualisation of this amount of Geotiffs within the GIS was virtually impossible. In cooperation with the Center for Computing Technology (TZI) in Bremen, an efficient algorithm was applied to identify the contours of the Geotiffs. Combined with visual inspection to identify specific topics at the seafloor and subject indexing, this provided an efficient means of incorporating this information into the GIS and the RDBMS. The closely related Mar_GIS tasks of data compilation (combining raw data, thematic maps and visual observations by video), integration into a geodatabase linked to the GIS, and multivariate statistics aim to identify distinct provinces at the seafloor of coastal regions and ocean margins.

Figure 1: A) Tracks of video surveys obtained by several ROV dives with VICTOR (IFREMER) at the Haakon Mosby Mud Volcano.

B) Contours of Geotiffs are generated automatically; after subject indexing, this information is incorporated into the geodatabase of the GIS.


Using Geostatistical Methods to Estimate Surface Maps for the Sea Floor of the North Sea
Schröder W., Pesch R.
Institute for Environmental Science, University of Vechta, Oldenburger Str. 97, 49377 Vechta, Germany, E-Mail: {winfried.schroeder | rpesch}@ispa.uni-vechta.de

Background and objectives
The key objective of the project Mar_GIS is to classify the sea floor of the North Sea into characteristic sea floor types. These provinces can be a useful working instrument for planning issues like the installation of offshore wind power plants or the declaration of protection zones. The typology approach to be used in Mar_GIS is based on advanced statistical methods and GIS-analytical instruments to aggregate surface grid data layers to characteristic sea floor types. Since the beginning of the project, a large amount of data has been acquired from national and international institutes and authorities. Along with thematic surface data (e.g. sediment data maps), most of these data consist of biotic (e.g. data on benthos organisms) and abiotic (e.g. data on chemical and physical parameters like salinity, temperature or nitrate) measurement data. One possibility to use the point data sets for creating a sea floor typology is to extrapolate the measurement values to contour plots or grid data maps. This can, among other ways, be done by applying geostatistical methods.

Geostatistics – basics and application fields
To create continuous surfaces from measurement data, different mathematical-statistical approaches exist. Johnston et al. (2001) distinguish deterministic and geostatistical methods. Whereas deterministic procedures create surfaces from measured points by using defined mathematical equations (e.g. IDW –

Inverse Distance Weighted, Spline method), geostatistical methods use the statistical properties of the measurement points, i.e. the values that are measured there. Originally coming from applied geological research to estimate mineral resources and reserves, geostatistics is nowadays being applied in different terrestrial and marine scientific research fields. Geostatistics is based on the assumption that measurements lying closer together tend to be more alike than those farther apart. This fundamental geographic principle is called spatial autocorrelation and can be examined by means of variogram analysis. Only if measurement data are spatially autocorrelated can surface estimation methods like Kriging be regarded as statistically meaningful. Along with the spatial data configuration as well as the values of the measured sample points around the prediction location, Kriging uses a fitted semivariogram model to make a prediction for an unknown value at a specific location. Kriging is therefore divided into two distinct working steps: quantifying the spatial structure of the data and producing a prediction. A large number of Kriging options exists (e.g. Ordinary Kriging, Universal Kriging, Probability Kriging, CoKriging), depending on the stochastic assumptions the user makes and the statistical properties of the measurement data. All of these algorithms have in common that they minimise the estimation variance. Very interesting in terms of the objectives of MAR_GIS are Kriging techniques that allow the use of additional



information for the estimation process (e.g. Fuzzy Kriging, Simple Updating Kriging (SUK); Bardossy et al. 1989, Bardossy et al. 1998, Usländer 2003). In terrestrial geoscience, geostatistics has proven very useful for environmental monitoring. Pesch (2003) used geostatistical and multivariate statistical methods to aggregate the results of the nationwide biomonitoring project »Moss-Monitoring« to spatial indicators for atmospheric metal bioaccumulation in terrestrial ecosystems. Schröder et al. (2001) used geostatistical methods as statistical instruments to validate and optimise environmental monitoring networks. Geostatistics has also been applied in different marine science research fields. One example is the estimation of fish abundances in fisheries research (e.g. Petitgas 1997, Rivoirard et al. 2000, Rivoirard et al. 2001).

Application example
In MAR_GIS, geostatistical methods have been applied to produce surface maps for different chemical and physical parameters (e.g. nitrate, salinity, temperature) for the sea floor of the German Bight. In Figure 1, the results of variogram analysis are presented for the measured concentrations of dissolved ammonium in bottom water in the winter of 1999/2000. As holds true for all investigated parameters, the results of variogram analysis display a strong spatial autocorrelation of the measurement values. The variogram map furthermore reveals distinct north-east directed anisotropies, indicating that ammonium concentrations change continuously parallel to the coastline. To produce surface maps, two Kriging methods were applied: Ordinary Kriging and Indicator Kriging. With Indicator Kriging, a threshold value can be used to create a statistical probability map showing the probabilities of exceeding a defined threshold (like the median of all measured values). With the help of Ordinary Kriging, predictions of unknown values can be made for specific locations.
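The two working steps named above can be sketched numerically: fit (here: assume) a semivariogram model, then solve the Ordinary Kriging system for the weights. The station data and spherical variogram parameters below are synthetic assumptions, not the Mar_GIS measurements:

```python
import numpy as np

# Synthetic stations standing in for scattered bottom-water measurements.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(30, 2))              # coordinates (km)
vals = 5.0 + 0.03 * pts[:, 0] + rng.normal(0.0, 0.2, 30)  # measured values

def spherical(h, nugget=0.05, sill=1.0, rang=60.0):
    """Spherical semivariogram model gamma(h) with assumed parameters."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h >= rang, sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(pts, vals, target):
    """Predict the value at an unsampled location from all stations."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(pts - target, axis=1))
    w = np.linalg.solve(A, b)          # weights sum to one (unbiasedness)
    return float(w[:n] @ vals)

est = ordinary_kriging(pts, vals, np.array([50.0, 50.0]))
```

Repeating the prediction over a regular grid of target locations yields a gap-free surface map such as the one discussed next.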
Figure 2 shows the result of an Ordinary Kriging grid map for ammonium in bottom water in the winter of 1999/2000. Because of


the distinct anisotropies that were detected in the variogram map and implemented in the semivariogram model, the map reveals a decrease of ammonium in bottom water parallel to the coastline.

Literature
BÁRDOSSY, A.; HABERLANDT, U. & GRIMM-STRELE, J. (1998): Interpolation of Groundwater Quality Parameters Using Additional Information. In: geoENV I – Geostatistics for Environmental Applications (ed. A. Soares), Kluwer Academic Publishers, Dordrecht, pp. 189-200
BARDOSSY, A.; BOGARDI, I. & KELLY, W.E. (1989): Geostatistics utilizing imprecise (fuzzy) information. Fuzzy Sets and Systems, 31, pp. 311-327
JOHNSTON, K.; VER HOEF, J.M.; KRIVORUCHKO, K. & LUCAS, N. (2001): Using ArcGIS Geostatistical Analyst. Redlands.
PESCH, R. (2003): Geostatistische und multivariat-statistische Analyse des Moos-Monitorings 1990, 1995 und 2000 zur Ableitung von Indikatoren für die Bioakkumulation atmosphärischer Metalleinträge in Deutschland. Dissertation, Hochschule Vechta. Elektronische Ressource.
SCHRÖDER, W.; SCHMIDT, G.; PESCH, R. & ECKSTEIN, TH. (2001): Konkretisierung des Umweltbeobachtungsprogramms im Rahmen eines Stufenkonzepts der Umweltbeobachtung des Bundes und der Länder. Teilvorhaben 3. Bonn (Umweltforschungsplan des Bundesministers für Umwelt, Naturschutz und Reaktorsicherheit. FuE-Vorhaben 299 82 212)
PETITGAS, P. (1997): Sole egg distribution in space and time characterised by a geostatistical model and its estimation variance. ICES J. Mar. Sci. 54, pp. 213-225
RIVOIRARD, J.; SIMMONDS, E.J.; FOOTE, K.; FERNANDES, P. & BEZ, N. (2000): Geostatistics for estimating fish abundance. Oxford: Blackwell Science.


RIVOIRARD, J. & WIELAND, K. (2001): Correcting for the effect of daylight in abundance estimation of juvenile haddock (Melanogrammus aeglefinus) in the North Sea: an application of kriging with external drift. ICES J. Mar. Sci. 58, pp. 1272-1285
USLÄNDER, T. (2003): Benutzerhandbuch SIMIK+ ArcView-Erweiterung Version 1.0 zur flächenhaften Darstellung der Grundwasserbeschaffenheit. Fraunhofer Institut für Informations- und Datenverarbeitung, Karlsruhe

Figure 1: Variogram analysis including variogram map of NH4 (bottom water) in winter 1999/2000.

Figure 2: Ordinary Kriging map of NH4 (bottom water) in winter 1999/2000.



Advancement of Geoservices – Augmented Reality GIS Client
Staub G., Wiesel J., Brand S., Coelho A.H.
Institute of Photogrammetry and Remote Sensing (IPF), University of Karlsruhe, Englerstr. 7, 76128 Karlsruhe, Germany, E-Mail: {staub | wiesel | brand | coelho}@ipf.uni-karlsruhe.de

Same abstract as Wiesel, J. et al. – Advancement of Geoservices - Augmented Reality GIS Client; this volume.



Design of a Database System for Linking Geoscientific Data
Tiedge M., Lipeck U., Mantel D.
Institut fuer Informationssysteme - FG Datenbanksysteme, Universitaet Hannover, Welfengarten 1, 30167 Hannover, Germany, E-Mail: {mti | ul | dma}@dbs.uni-hannover.de

1. Introduction
There are many ways to represent the Earth's surface in geo-information systems (GIS), reflecting different interests and applications. But multiple representations of corresponding or similar real-world objects often result in discrepancies and inconsistencies. To give improved access to such different data sets and to enhance data quality, data integration is applied. In addition to identifying and linking corresponding geometric objects, it becomes possible to propagate geometric changes between the different geoscientific databases automatically. In the »Geotechnologien« project »New Methods for Semantic and Geometric Integration of Geoscientific Data Sets with ATKIS« [Sester et al. 2003], carried out at the University of Hanover, data integration is applied to different geoscientific data sets. This project is divided into three sub-projects, and the main task of the database group is to develop and prototypically realize an integrated access to the given heterogeneous data sets according to the paradigm of federated databases. The federated database architecture is adjusted to handle the geometric data sets specifically used within this project, but it shall be transferable to similar data sets. Therefore general methods for federated databases are utilized and specialized to identify corresponding spatial objects. The federation paradigm for database integration [Conrad 1997] was chosen, since it gives a close coupling while at the same time keeping the databases autonomous. Hereby global applications are given an integrated view of the different databases via a global database schema; nevertheless, local applications remain unchanged as they still access the databases locally. The federation service requires an »integration database« of its own to keep imports from the component databases and integration data to link them. It is structured according to a global schema derived by schema integration and the addition of integration concepts like links between objects. In order to fill this integration database, data integration methods are needed to merge identical objects and to match or relate corresponding objects. For standard data, integration methods are compiled and classified in the database literature like [Conrad 1997]. Spatial data, however, needs extra treatment to be matched or related. The integration database system shall provide a choice of integration rules and methods developed by specialists from the geoscientific application fields. Our first goal is to support matching of spatial objects of comparable types by our database. The following three sections discuss the schema design for the imports from the component databases (i.e. the spatial objects to be matched), for the results from matching (i.e. the links between the objects), and for the meta data controlling the matching process. The final two sections give an overview of the system architecture and future work.

2. Spatial Objects
The geoscientific data sets provided for this project arrived in heterogeneous forms, i.e. different formats and data schemas. At first, all those data were imported into an object-relational database system (Oracle9i with Spatial Cartridge), after being converted into a processable format for both geometric and thematic data. These include the following data sets, partly in different scales:
- ATKIS maps
- soil science maps (BK)
- geological maps (GK)
- field boundary and wind obstacle datasets, aerial images
The imported data was normalized according to the rules of database theory to remove redundancies and to recognize and resolve inconsistencies. Next, export schemas are designed to give unified access to the heterogeneous spatial and thematic attributes in the data sets – this is not yet a full schema integration according to the paradigm of federated databases, but focused on spatial matching. The authors have already gathered experience with geoscientific data, especially with ATKIS datasets (»Amtliches Topographisch-Kartographisches Informationssystem«) [Kleiner et al. 2000], with respect to object-relational database systems in modelling, importing and processing. A simplified version of the developed object-relational database schema for ATKIS data serves as a pattern for the export views on the heterogeneous geoscientific databases due to its flexibility, i.e. its support of complex spatial objects and a general method to manage thematic attributes. Figure 1 shows the (simplified) import and export schemas for the soil science map (BK). In the export schema the objects are coarsely classified through object types, and a more precise characterization is possible via object attributes.

Figure 1: Import and export schema for the soil science map.

These export schemas are realized by views on the imported data and serve as a basis for matching. The import schemas of the other geoscientific data sets are transformed to analogous export schemas.

3. Linking Schema
Linking geoscientific objects not only involves simple one-to-one (1:1) relationships but also more complex correspondences ([Gösseln, Sester 2003], [Mantel, Lipeck 2004]), as real-world objects are represented differently on different maps, often decomposed into various segments. Figure 2 (b) shows an exemplary real-world object that is represented differently in two geoscientific databases. The designed link model copes with such general relationships between corresponding sets of geoscientific objects – to represent general many-to-many (n:m) relationships, the links between identified object sets are interpreted as links between aggregated objects.


Figure 2 (a) shows a specification of the link model, where ATKIS objects are linked with both objects from the soil science map (BK) and the geological map (GK). Furthermore, this link model holds the topological relationships (e.g. overlaps, contains) between the particular segments of identified objects, as they will be useful in the subsequent process of propagating updates between different identified objects.
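As a sketch of this link model (all class and field names are invented for illustration, not the project's actual schema), corresponding object sets and their segment-level topological relations could be represented as:

```python
# n:m links connect aggregated sets of objects rather than single
# objects; per-segment topological relations are kept alongside the
# link for later update propagation.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GeoObject:
    source: str   # e.g. "ATKIS", "BK", "GK"
    oid: int      # object id within the source database

@dataclass
class Link:
    left: frozenset    # aggregated object set from one database
    right: frozenset   # corresponding set from another database
    # topological relation per segment pair, e.g. {(a, b): "overlaps"}
    topology: dict = field(default_factory=dict)

# one ATKIS water body corresponds to two BK segments (a 1:2 link)
a = GeoObject("ATKIS", 4711)
b1, b2 = GeoObject("BK", 1), GeoObject("BK", 2)
link = Link(frozenset({a}), frozenset({b1, b2}),
            {(a, b1): "overlaps", (a, b2): "contains"})
```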


Figure 2: (a) Schema for links (b) Instance of links (n:m relationship).

4. Meta-Data for Controlling the Matching Process
In order to find correspondences between geometric objects from heterogeneous data sets with geometric matching (i.e. performing the object-related integration), only objects of similar type should be compared – for instance it is not meaningful to identify waters with rocks. In one partner's subproject [Gösseln, Sester 2003] matching water objects seems very promising, but the attributes that identify water objects vary in the different geoscientific databases. For this purpose the thematic attributes of the different databases are semantically integrated by classifying the different kinds of objects into object classes which are independent of the heterogeneous databases. The classification takes place in the integration database system, which stores the classification criteria. After semantic integration, the geometric matching processes the candidates chosen upon this classification, without needing to know how the thematic attributes are represented originally. The integration database stores methods and parameters for the matching process. Thus, another design aspect of the integration database is rule management for supporting the process of classification, matching and conflict resolution. Later it shall additionally store rules to specify update propagation along linked

spatial objects, e.g. when unifying geometries of water objects and adjacent regions. This requires a third database schema, namely for meta-data, which is omitted here for space reasons.

5. System Architecture
Figure 3 shows the proposed architecture of the prototypical system. The different component databases (soil science map (BK), geological map (GK), topographic map (ATKIS), boundaries) are linked via corresponding export schemas. The integration database manages exports, links and meta-data as described above. These can be accessed by stored procedures implemented in PL/SQL, reached through standard database connectors (e.g. JDBC and ODBC) or through the middleware ArcSDE, part of the widely used ArcGIS tools. Apart from registering and matching exported data, updating applications such as adaptations of geometries along links shall be supported later. Oracle is used as the underlying object-relational database management system – it natively offers spatial data types (Spatial Cartridge) conforming to the OpenGIS Consortium specification for the representation of vector data. Furthermore it supports efficient methods to query spatial objects, e.g. topological relationships.
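The classification-driven candidate selection described in Section 4 might be sketched as follows; all attribute names, codes and rules here are hypothetical, not the stored criteria of the integration database:

```python
# Hedged sketch: semantic pre-classification before geometric matching.
# Objects from heterogeneous sources are mapped to database-independent
# classes via rules; only same-class objects become matching candidates.
def classify(obj):
    """Map source-specific thematic attributes to a common class."""
    rules = [
        # ATKIS object-type codes (values purely illustrative)
        (lambda o: o.get("objart") in {"5101", "5112"}, "water"),
        # BK legend text (illustrative)
        (lambda o: o.get("legende", "").startswith("Fluss"), "water"),
        (lambda o: o.get("petro") == "rock", "rock"),
    ]
    for predicate, cls in rules:
        if predicate(obj):
            return cls
    return "unknown"

def candidate_pairs(objs_a, objs_b):
    """Yield only pairs with matching semantic class -- it is not
    meaningful to geometrically match waters with rocks."""
    for a in objs_a:
        for b in objs_b:
            if classify(a) == classify(b) != "unknown":
                yield a, b

atkis = [{"id": 1, "objart": "5101"}]
bk = [{"id": 9, "legende": "Flusslauf"}, {"id": 10, "petro": "rock"}]
pairs = list(candidate_pairs(atkis, bk))  # only the two water objects pair up
```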



Figure 3: System architecture.

6. Future Work
Next, a model for control rules, methods and parameters will be designed to support the matching process and enable the propagation of updates. Furthermore, methods will be developed to involve and store boundary data extracted from aerial images [Butenuth 2003]. More general relationships, like shared boundaries, will be analyzed, as they promise further knowledge about corresponding objects. The authors have proposed to extend spatial databases by topological data models that consider shared geometries and topological relationships between spatial objects ([Tiedge 2003]). In particular, such models will make it possible to keep the spatial integrity between objects when updating them.

7. Acknowledgements
This work is part of the GEOTECHNOLOGIEN project funded by the Federal Ministry for Education and Research (BMBF) and the German Research Council (DFG) under contract no. 30F0374A.

8. Literature
Butenuth, M., Heipke, C.: 2003, Modelling the integration of heterogeneous vector data and aerial imagery, in: Schiewe, J., Hahn, M., Madden, M., Sester, M. (Eds.), ISPRS Commission IV Joint Workshop »Challenges in geospatial analysis, integration and visualisation«, Fachhochschule Stuttgart, pp. 55-60, on CD-ROM.
Conrad, S.: 1997, Föderierte Datenbanksysteme, Springer-Verlag, Berlin.
Gösseln, G. v.; Sester, M.: 2003, Semantic and geometric Integration of geoscientific Data Sets with ATKIS - Applied to Geo-Objects from Geology and Soil Science, ISPRS Commission IV Joint Workshop »Challenges in Geospatial Analysis, Integration and Visualization II«, Proceedings, September 8-9, Stuttgart, pp. 111-116.
Kleiner, C.; Lipeck, U.; Falke, S.: 2000, Objekt-Relationale Datenbanken zur Verwaltung von ATKIS-Daten. In: Bill, R.; Schmidt, F.: ATKIS - Stand und Fortführung, Verlag Konrad Wittwer, Stuttgart, pp. 169-177.
Mantel, D.; Lipeck, U.: 2004, Datenbankgestütztes Matching von Kartenobjekten. To appear in: Mitteilungen des Bundesamtes für Kartographie und Geodäsie, Bundesamt für Kartographie und Geodäsie, Frankfurt am Main.


Mantel, D.: 2002, Konzeption eines Föderierungsdienstes für geographische Datenbanken, Hannover, Institut für Informationssysteme, Universität Hannover.
OpenGIS Consortium, Inc.: 1999, OpenGIS Simple Features Specification for SQL, Revision 1.1, Open GIS Consortium.
Sester, M.; Butenuth, M.; Gösseln, G. v.; Heipke, C.; Klopp, S.; Lipeck, U.; Mantel, D.: 2003, New Methods for Semantic and Geometric Integration of Geoscientific Data Sets with ATKIS – Applied to Geo-objects from Geology and Soil Science. In: GEOTECHNOLOGIEN Science Report, Part 2, Koordinierungsbüro GEOTECHNOLOGIEN, Potsdam, pp. 51-62.
Tiedge, M.: 2003, Entwicklung und Implementierung einer topologischen Erweiterung für objektbasierte räumliche Datenbanken, Hannover, Institut für Informationssysteme, Universität Hannover.



Web-Based Marine Geoinformation Services in MarGIS Vetter L. (1), Köberle A. (2) (1) Departments of a) Geoinformatics and b) Landscape Architecture and Environmental Planning, FH Neubrandenburg – University of Applied Sciences, P. O. Box 11 01 21, 17041 Neubrandenburg, Germany, E-Mail: vetter@fh-nb.de (2) Department of Landscape Architecture and Environmental Planning, FH Neubrandenburg – University of Applied Sciences, P. O. Box 11 01 21, 17041 Neubrandenburg, Germany, E-Mail: koeberle@fh-nb.de

1. Introduction
The advantages of web-based information are not new and have been widely discussed. The internet has evolved very rapidly from delivering essentially static information into a very dynamic information resource. In recent years the demand for spatial web-based information has increased strongly, which means delivering maps interactively, on the fly, to users on the internet. To meet this requirement the large GIS manufacturers – and meanwhile the open-source community as well – are developing special software tools, called map server technology, for managing such dynamic geospatial information. Nowadays it is state of the art in the GIS community to work in or to have a map server environment. Up to now a lot of web-based geodatabases have been created in the land-derived field, but in the marine context the development is just at the beginning. One main goal of the MarGIS project is to design and install a pertinent web-based system for the dissemination of marine geoinformation.

2. Metadata
Due to the large number of different formats in the IT world it is advisable to work with a worldwide accepted standard. Comprehensive sources exist, such as the Dublin Core Metadata Element Set (a basic standard for resource description), the FGDC standard of the U.S. (Federal Geographic Data Committee) and ISO. In this project we decided on ISO 19115 (Metadata), because the various software products in use have started to incorporate ISO 19115 into their system software and support it. The documentation of metadata for geographic information is described in ISO 19115 (Metadata), providing information about the identification, the extent, the quality, the spatial and temporal schema, the spatial reference, and the distribution of digital geographic data (Kresse & Fadaie, 2004). Each data set is described by one ISO 19115 metadata document. The ISO standard organizes the structure of the metadata into core metadata (cf. fig. 1) and comprehensive metadata elements. The metadata are stored as XML documents in the MSSQL database. The dialogue between the user at the client and the metadata in the database is realised via ArcIMS.

3. MarGIS System Design
Figure 2 gives an overview of how the different software components, the spatial elements and the data model work together. In this case a combination of an MSSQL database and the ESRI map server technology is responsible for the web-based spatial analyses. The dynamic presentation of the maps on the internet and the availability of the metadata are realized by ESRI software tools such as ArcIMS (cf. fig. 2). This design allows the results of a spatial data query to be viewed on the internet with the most current web browsers. For the client the basic functions pan, zoom, query and find, and the possibility of printing, are implemented. The Internet Map


Figure 1: UML package-diagram from ISO 19115 (Metadata) (source: ISO, 2001, p. 10).
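Writing one core-metadata record per data set as an XML document, as described above, might look like the following sketch; the element names are simplified stand-ins for the ISO 19115 structure, not the normative schema:

```python
# Build a minimal, ISO-19115-flavoured metadata record (title, date,
# geographic bounding box) as an XML string. Element names are
# illustrative placeholders, not the official ISO tag names.
import xml.etree.ElementTree as ET

def core_metadata(title, date, west, east, south, north):
    md = ET.Element("MD_Metadata")
    ident = ET.SubElement(md, "identificationInfo")
    ET.SubElement(ident, "title").text = title
    ET.SubElement(ident, "date").text = date
    box = ET.SubElement(ident, "geographicBoundingBox")
    for name, val in (("west", west), ("east", east),
                      ("south", south), ("north", north)):
        ET.SubElement(box, name).text = str(val)
    return ET.tostring(md, encoding="unicode")

# hypothetical data set; the XML string would then be stored in MSSQL
xml_doc = core_metadata("North Sea sediment grid", "2004-03-23",
                        2.0, 9.0, 53.0, 58.0)
```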

Figure 2: Visualisation of the MarGIS System Design.
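The zoom/pan round trip through the image service described in Section 3 can be sketched as follows; the class name, method name and URL are hypothetical stand-ins for the ArcIMS image service:

```python
# Hedged sketch of the image-service cycle: the client sends the new
# map extent after a zoom or pan, the server renders an image in one
# of the supported formats and returns its URL to the client.
class ImageService:
    def __init__(self):
        self.counter = 0

    def render_map(self, xmin, ymin, xmax, ymax, fmt="png"):
        """Pretend to render the requested extent; return the image URL."""
        assert fmt in {"jpg", "png", "gif"}  # formats named in the text
        assert xmin < xmax and ymin < ymax
        self.counter += 1
        return f"http://mapserver.example/output/map{self.counter}.{fmt}"

service = ImageService()
# user zooms: the client sends the coordinates of the new map window
url = service.render_map(11.0, 54.0, 12.0, 55.0)
```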



Server ArcIMS is based on an Apache server with Tomcat as servlet engine. The various geodata (shapes, coverages, grids, raster images) and the metadata are stored in an MSSQL database, and the ArcSDE software works as a gateway. The processing chain for the user is: web browser as client, via ArcIMS and ArcSDE, to the data. The ArcIMS Servlet Connector is responsible for the communication between the web server and the application server. The application server organizes the incoming queries and transfers them to the spatial server. Different services such as Image, Feature, Metadata, Query, Geocode or Extract services are carried out on the application server. MarGIS uses Image, Query and Metadata services. The image service creates raster images in the following data formats: jpg, png and gif. Whenever a user zooms or pans, a query is sent to the image service with the coordinates of the new map window. The image service generates the new image with the new coordinates and sends the URL of the image back to the client. The communication between client and application server is realized by ArcXML – a derivation of XML guaranteeing a structured information flow. The client produces and analyzes the requests and responses of the server with the help of JavaScript.

4. References
Czegka, Braune, Palm, Ritschel, Klump, Lochter (2003): Beispiele ISO 19115 DIS konformer Metadaten in Katalogservices. Zwei Anwendungen aus dem Bereich umwelt- und geowissenschaftlicher Geofachdaten im Rahmen der Metadatencommunity der »GIB«, www.unigis.ac.at/club/u2/2003/UP_Beitrag_Czegka_Braune.pdf
ESRI (2002): Using ArcIMS, Redlands
Green, D. R., King, S.D. (2003): Coastal and Marine Geo-Information Systems, Kluwer Academic Publishers, Dordrecht
Green, D. R., King, S.D. (2003): Internet-Based Information Systems: The Forth Estuary Forum (FEF) System, in: Green, D. R., King, S.D. (2003): Coastal and Marine Geo-Information Systems, Kluwer Academic Publishers, Dordrecht, pp. 451-465
Green, D. R., King, S.D. (2003): Access to Marine Data on the Internet for Coastal Zone Management: The New Millennium, in: Green, D. R., King, S.D. (2003): Coastal and Marine Geo-Information Systems, Kluwer Academic Publishers, Dordrecht, pp. 555-578
ISO (2001): Draft International Standard ISO/DIS 19115 (ISO/TC211) Geographic information - Metadata (Version 2001-02-20). ISO, Genève
Kresse, W., Fadaie, K. (2004): ISO Standards for Geographic Information, Springer, Berlin Heidelberg


The Implementation of an Analytical Model for Deep Percolation Waldow H. von Institute of Hydrology, Centre for Agricultural Landscape and Land Use Research (ZALF), Eberswalder Str. 84, 15374 Müncheberg, Germany, E-Mail: waldow@zalf.de

The European Union Water Framework Directive (European Parliament and Council, 2000) calls for the assessment of vulnerable zones with respect to contaminant inputs and the forecasting of effects of proposed measures to reduce the contaminant load of Europe's waters. A recent report of the European Commission indicates that over 20% of groundwater in the EU and at least 30 to 40% of lakes and rivers show excessive nitrate concentrations, and that nitrogen from agricultural sources accounts for between 50 and 80% of the nitrates entering Europe's waters. The Commission points out the need for a better assessment of nutrient losses to waters, and for the development of models which correlate environmental impacts and causative factors to enable the forecasting of the impacts of proposed measures (Commission of the European Communities, 2002). To this end, the Centre for Agricultural Landscape and Land Use Research (ZALF) is developing a software tool which will be able to simulate the transport and fate of nitrate from agricultural non-point sources over long periods at the meso-scale. The sub-model »UZmodule« is responsible for the one-dimensional simulation of flow and nitrate transport between the root zone and the phreatic zone. Existing large-scale distributed models involving unsaturated zone flow usually conceptualise the soil column as a series of reservoirs through which the percolating water is routed (e.g. SWAT, SWRRB, SWIM). This reservoir-routing approach uses the empirical concept of field capacity and an

empirical rate constant. Furthermore, the model performance depends on the correct choice of the layer thickness and the time step (Emerman, 1995). The optimal choice of these parameters is a function of the hydraulic properties of the medium (Gabrielle and Bories, 1999). Therefore, besides the calibration of its empirical parameters, the performance of this approach depends on a proper, material-specific parameterization of its scale parameters (Gabrielle and Bories, 1999). Given the large temporal and spatial scales of the targeted application, a reliable parameterization of this model type is unlikely to be achieved. The correct physical description of unsaturated flow is given by the Richards equation (RE), a highly non-linear parabolic partial differential equation which has to be solved numerically. The implementation of RE solvers in large-scale distributed hydrological models is problematic. The iterative algorithm tends not to converge under some conditions, and the solution is computationally expensive. Even though a large body of literature deals with numerical schemes to solve the RE, a robust, mass-conserving algorithm applicable to heterogeneous and variably saturated conditions which are not known to the modeller is not yet available (Miller et al., 1998). In this work, the RE is simplified in a way that enables an analytical solution. In the region under consideration (North Central-European Lowlands) and at the depths considered (below -2 m), the hydraulic gradient can be assumed to equal unity (Eulenstein et al., 2003). In other



words, the driving force for water flow results solely from the gravitational potential. The RE then simplifies to a quasi-linear first-order PDE, which can be solved analytically using the method of characteristics (Charbeneau, 1984). The model takes a step function of inflow as upper boundary condition. If the inflow increases, a shock wave starts travelling downwards. Its speed depends on the inflow and the moisture content of the underlying material. If the inflow decreases, a centred simple wave originates at the surface. At each point in time, the moisture status of the profile can be described as a sequence of regions with a specific constant soil moisture, or a moisture content resulting from a centred simple wave. Shock waves, which represent discontinuities of the moisture profile, cannot be treated by the method of characteristics and are traced by applying a mass balance across the wetting front (Charbeneau, 1984). As an example, Figure 1 shows the moisture conditions in the base characteristic plane which result from two wetting events. Between time t0 and time t1, an infiltrating front resulting from an influx q1 at the upper boundary invades the profile with an initial moisture content θ0. The region above that wetting front has a constant moisture content θ1 which corresponds to q1. At time t1, a trailing wave originates at the surface. The moisture content in the region (0,t1)ACB(0,t2) is given by the method-of-characteristics solution. A second wetting from time t2 to time t3 results in a similar pattern. This analytical treatment of deep percolation has the advantage of being universally applicable, robust and physically based. Nitrate transport is modelled by solving the advection-dispersion equation (ADE) with seepage velocities from the analytical solution of the flow equation as input.
Because the analytical solution can deliver an arbitrarily fine spatial and temporal resolution of the input to the ADE, the grid used for solving the transport equation can be freely chosen to deal with numerical dispersion and/or artificial oscillation.
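Under the unit-gradient assumption, the flux equals the hydraulic conductivity, q = K(θ), so the speed of the wetting front follows from the mass balance across the shock: c = (q1 − q0)/(θ1 − θ0) (Charbeneau, 1984). A small sketch with an invented power-law conductivity and purely illustrative parameter values:

```python
# Kinematic-wave shock celerity under a unit hydraulic gradient.
# K(theta) = K_s * (theta/theta_s)**eps is a stand-in conductivity
# function; K_s, theta_s and eps below are invented, not site values.
def theta_of_q(q, k_s=0.5, theta_s=0.40, eps=4.0):
    """Invert q = K_s * (theta/theta_s)**eps for the moisture content."""
    return theta_s * (q / k_s) ** (1.0 / eps)

def shock_speed(q0, q1, k_s=0.5, theta_s=0.40, eps=4.0):
    """Celerity of the wetting front: mass balance across the shock."""
    th0 = theta_of_q(q0, k_s, theta_s, eps)
    th1 = theta_of_q(q1, k_s, theta_s, eps)
    return (q1 - q0) / (th1 - th0)

# front created by raising the inflow from 1 mm/d to 5 mm/d (in m/d)
c = shock_speed(0.001, 0.005)
days_to_2m = 2.0 / c   # travel time of the front down to -2 m
```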


The model is implemented in the C++ programming language to enable easy maintenance and extensibility of the code. Since the flow model does not allow for upward flow, it was augmented by an algorithm which extracts water from the profile, while keeping the moisture status compatible with the model's requirements, to account for a possible evaporative demand. Heterogeneities of the substrate are dealt with by dividing the profile into homogeneous layers. The flow of water leaving one layer is the input of the next layer. Preliminary comparisons with the full model validate the correctness of this new approach. This approach takes the middle ground between conceptual models and models based on the Richards equation in terms of computational cost and the physically correct description of the process.

References:
Charbeneau, R. J., 1984. Kinematic Models for Moisture and Solute Transport. Water Resources Research, 20(6), 699-706.
Commission of the European Communities, 2002. Implementation of Council Directive 91/676/EEC concerning the protection of waters against pollution caused by nitrates from agricultural sources: Synthesis from year 2000 Member States reports (COM(2002) 407 final). European Commission, Brussels.
Emerman, S. H., 1995. The tipping bucket equations as a model for macropore flow. Journal of Hydrology 171, 23-47.
Eulenstein, F., U. Schindler and L. Müller, 2003. Feldskalige Variabilität von Frühjahrsfeuchte und mineralischem Stickstoff in der ungesättigten Zone sandiger Standorte Nordostdeutschlands. Archives of Agronomy and Soil Science 49(2).
European Parliament and Council, 2000. Directive 2000/60/EC. Official Journal of the European Communities, L327/2000: 1-73.


Gabrielle, B. and S. Bories, 1999. Theoretical Appraisal of Field-Capacity Based Infiltration Models and their Scale Parameters. Transport in Porous Media 35, 129-147.
Miller, C. T., G. A. Williams, C. T. Kelley and M. D. Tocci, 1998. Robust solution of Richards' equation for nonuniform porous media. Water Resources Research, 34(10), 2599-2610.

Figure 1 (adapted from Charbeneau (1984)): z-t diagram of soil moisture development resulting from two infiltration events with flux q1 at t0 and flux q2 at t2.



Advancement of Geoservices – Augmented Reality GIS Client Wiesel J., Staub G., Brand S., Coelho A.H. Institute of Photogrammetry and Remote Sensing (IPF), University of Karlsruhe, Englerstr. 7, 76128 Karlsruhe, Germany, E-Mail: {wiesel | staub | brand | coelho}@ipf.uni-karlsruhe.de

Objectives
On the basis of Augmented Reality (AR) techniques, a mobile GIS client for the collection and update of 3D databases shall be developed. By superimposing feature data on the real world, these techniques can significantly improve the quality and productivity of GIS client components. A first prototype has been developed in co-operation with Project C6 of the Collaborative Research Centre 461 »Strong Earthquakes«, financed by the DFG, which performed first experiments using AR hardware and software (Bähr, Leebmann 2001) in disaster management scenarios. The aim is to develop a system where all hardware components are mounted on a backpack, as a tool to collect and update 3D data for applications in the geosciences. A possible setup is shown in figure 1.

Current setup
At this stage all the hardware is mounted on a tripod, except the notebook computer. This mock-up ensures a stable environment for calibration and functional tests of the system and reduces errors caused by a moving user. Presently a video camera is used instead of a Head Mounted Display (HMD). This makes it easier to develop and test the system. Figure 2 shows the prototype development system.

Sensors
To develop an AR client, sensors and display systems have to be integrated into a portable hardware and software system. Ready-to-use software is not available and therefore had to be created.


Orientation
First of all, an AR client has to be oriented in its environment. To determine the three orientation angles an Inertial Navigation System (INS) is used. It continuously computes all required navigation information, which is read out in real time. The rotation is described using three angles, according to Euler's rotation theorem.

Positioning
When using AR for a mobile client in the real world, it is important to know the user's position in space. To achieve this we use the Global Positioning System (GPS). Real-time kinematic GPS (RTK) is the most accurate solution that can be implemented. The idea behind RTK is that one GPS receiver is placed on a control point (reference station) with known coordinates, while the second one (rover) delivers the X-, Y- and Z-coordinates of its position. A radio connection between the reference station and the rover is used to transmit correction data from the reference station to the rover. NMEA 0183 format (NMEA0183) data records containing the position of the rover in geographical coordinates (WGS84 reference system) are transmitted in real time to the AR client, yielding cm-level precision.

Data for Testing the System
To set up the system, we use an existing database of very precise DEM data of the Karlsruhe University campus. These data have been created by means of airborne laser scanning. The height accuracy is nowadays ±10 cm (Baltsavias, 1999). The DEM has been cut into tiles of 100m width


Figure 1: Backpack AR Setup.

Figure 2: System test mock-up.
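Decoding the NMEA 0183 position records mentioned in the Positioning section takes only a few lines; the GGA sentence below is a fabricated example, not recorded data:

```python
# Minimal parser for an NMEA 0183 GGA sentence as delivered by the
# rover: convert the ddmm.mmmm latitude and dddmm.mmmm longitude
# fields into decimal degrees (WGS84) and read the antenna altitude.
def parse_gga(sentence):
    f = sentence.split(",")
    assert f[0].endswith("GGA")
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0
    if f[3] == "S":
        lat = -lat
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[5] == "W":
        lon = -lon
    height = float(f[9])   # antenna altitude above geoid, in metres
    return lat, lon, height

# fabricated example sentence (fix quality 4 = RTK fixed)
lat, lon, h = parse_gga(
    "$GPGGA,120000.00,4900.6000,N,00824.4000,E,4,09,1.0,115.2,M,48.0,M,1.0,0000*XX")
```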



and 100m height and converted to VRML files. Caching strategies are used to minimize data transfer from the database to the AR client. During outdoor operation the visualisation system receives positioning coordinates from the GPS and orientation angles from the INS to merge the database VRML data (light grey, cf. Fig. 3) with the real-world buildings. Figure 3 shows a screen shot where a building is superimposed by the DEM.
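The tiling and caching strategy just described might be sketched as follows; the 100 m tile size comes from the text, while the class name and the fetch callback are hypothetical:

```python
# Sketch of a tile cache: the DEM is split into 100 m x 100 m VRML
# tiles, and only tiles around the current position are fetched from
# the database; previously fetched tiles are served from memory.
TILE = 100.0  # tile edge length in metres

class TileCache:
    def __init__(self, fetch):
        self.fetch = fetch   # callback loading one VRML tile from the DB
        self.cache = {}

    def tiles_around(self, x, y, radius=1):
        """Return the tiles in a (2*radius+1)^2 window around (x, y)."""
        ix, iy = int(x // TILE), int(y // TILE)
        out = []
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                key = (ix + dx, iy + dy)
                if key not in self.cache:      # database access only on a miss
                    self.cache[key] = self.fetch(key)
                out.append(self.cache[key])
        return out

loads = []
cache = TileCache(lambda key: loads.append(key) or f"tile{key}.wrl")
cache.tiles_around(3456789.0, 5428000.0)
cache.tiles_around(3456795.0, 5428004.0)   # same window: no new DB access
```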

Figure 3: Building superimposed by DEM.

Application Scenario
A use case was chosen in the field of geosciences. The AR system allows for the visualisation and measurement of underground geo-objects in a mobile real-time environment, which makes it possible to analyse geological and geomorphological processes directly in the terrain. To show the utility and feasibility of such a system, data of a project investigating landslide processes will be used. Data are made available by the »Landesamt für Geologie, Rohstoffe und Bergbau« (Land Office for Geology, Resources and Mining) in Freiburg i.Br., Germany. The investigation site is an area of mass movement on the Alb which has to be routinely monitored due to a road crossing the risk area. Data are available as points, lines, three-dimensional landslide bodies, DEM and attribute data. The AR system can help in data acquisition (e.g. mapping the micro-morphology, measuring the width of the clefts between the landslide bodies) and in assessing the current state of the movement by visualising additional data like geological structures, soil parameters, borderlines and textual information. The application scenario can also provide data in 4D (several epochs of data have been gathered in the past), which will be stored in the 4D database system being developed by the Vechta group, and therefore allows the user to »see« how the landslide has moved in the past. This application scenario demonstrates the integrated use of the joint developments based on a common architecture (Heidelberg, München), 2D real-time data display and collection (München), and AR-based display and data collection based on a common 4D geo-database (Vechta).

Outlook
Some problems are created by the limited computing power of the mobile computers, as orientation elements and data display have to be computed in near real time to achieve smooth operation of the system. One possibility to speed up parts of the positioning process is to use quaternions instead of Euler angles, because quaternions do not require any trigonometric functions. The current Java trigonometric operations (JRE 1.4.2) are up to sixteen times slower than in the C++ language (Cowell-Shah 2004). Other problems result from sensor inaccuracies. There are various reasons for these errors, which have to be eliminated. Noise and especially typical GPS problems like shadowing effects and loss of satellites remain to be solved. Further testing of the video-based system and implementation of the HMD has to be done. This will raise other questions and problems, like the limited field of view or the lack of display colour of the HMD.

Literature
Baltsavias, E. P. (1999): Airborne laser scanning: existing systems and firms and other resources. ISPRS Journal of Photogrammetry and Remote Sensing, No. 54 (2-3), pp. 164-198.
Bähr, H.P.; Leebmann, J. (2001): Wissensrepräsentation für Katastrophenmanagement in einem technischen Informationssystem (TIS). Report of SFB 461, pp. 597-638.
Cowell-Shah, C.W. (2004): Nine Language Performance Round-up: Benchmarking Math & File I/O: http://www.osnews.com/story.php?news_id=5602&page=1
NMEA0183: NMEA data: http://www.gpsinformation.org/dale/nmea.htm
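The quaternion alternative mentioned in the outlook can be illustrated: composing two orientation increments needs only multiplications and additions, with trigonometry required just once, when an increment is created. Function names here are illustrative:

```python
# Quaternion-based orientation update: the Hamilton product composes
# rotations without any per-frame trigonometric calls.
import math

def from_axis_angle(ax, ay, az, angle):
    """Unit quaternion for a rotation about a unit axis (one sin/cos)."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), ax * s, ay * s, az * s)

def q_mul(a, b):
    """Hamilton product -- multiplications and additions only."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# two successive 45-degree yaw increments equal one 90-degree yaw
q45 = from_axis_angle(0.0, 0.0, 1.0, math.pi / 4)
q90 = q_mul(q45, q45)
```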



Author’s Index

A Arndt O. . . . . . . . . . . . . . . . . . . . . . . 6 Azzam R. . . . . . . . . . . . . . . . . . . . . . 10

B Bär W . . . . . . . . . . . . . . . . . . . . . . . 35 Bernard L. . . . . . . . . . . . . . . . . . . . . 15 Betteray F.v. . . . . . . . . . . . . . . . . . . . 67 Bogena H. . . . . . . . . . . . . . . . . . . . . 30 Brand S.. . . . . . . . . . . . . . . . . . . 82, 94 Breunig M.. . . . . . . . . . . . . . . . . . . . 35 Butenuth M. . . . . . . . . . . . . . . . . . . 40

C Coelho A.H . . . . . . . . . . . . . . . . 82, 94

D Dannowski R.. . . . . . . . . . . . . . . . . . 45

E Egenhofer M.J. . . . . . . . . . . . . . . . . 49 Einspanier U. . . . . . . . . . . . . . . . . . . 15

G Gösseln G.v.. . . . . . . . . . . . . . . . . . . 50


H Häußler J. . . . . . . . . . . . . . . . . . . . . 55 Haubrock S. . . . . . . . . . . . . . . . . . . . 15 Hecker J.-M. . . . . . . . . . . . . . . . . . . 59 Heier Ch. . . . . . . . . . . . . . . . . . . . . . 62 Heipke Ch.. . . . . . . . . . . . . . . . . . . . 40 Hübner S. . . . . . . . . . . . . . . . . . . . . 15

J Jerosch K. . . . . . . . . . . . . . . . . . . . . 63

K Kandawasvika A. . . . . . . . . . . . . . . . 73 Kappler W.. . . . . . . . . . . . . . . . . 10, 67 Kersebaum K.C.. . . . . . . . . . . . . . . . 59 Klien E. . . . . . . . . . . . . . . . . . . . . . . 15 Kiehle Ch. . . . . . . . . . . . . . . . . . 10, 67 Köberle A. . . . . . . . . . . . . . . . . . . . . 88 Kuhn W. . . . . . . . . . . . . . . . . . . . . . 15 Kunkel R.. . . . . . . . . . . . . . . 10, 30, 67

L Leppig B. . . . . . . . . . . . . . . . . . . . . . 30 Lessing R.. . . . . . . . . . . . . . . . . . . . . 15 Lipeck U. . . . . . . . . . . . . . . . . . . . . . 83 Lutz M. . . . . . . . . . . . . . . . . . . . . . . 15



M Mantel D. . . . . . . . . . . . . . . . . . . . . 83 Meiners H.-G. . . . . . . . . . . . . . . . . . 67 Merdes M. . . . . . . . . . . . . . . . . . . . . 55 Michels I. . . . . . . . . . . . . . . . . . . . . . 45 Mirschel W. . . . . . . . . . . . . . . . . . . . 59 Müller F.. . . . . . . . . . . . . . . . . . . . . . 30

P Pesch R. . . . . . . . . . . . . . . . . . . . 70, 79 Plan O. . . . . . . . . . . . . . . . . . . . . . . . 73

R Reinhardt W.. . . . . . . . . . . . . . . . . . . 73

S Schäfer A. . . . . . . . . . . . . . . . . . . . . . 63 Schlüter M. . . . . . . . . . . . . . . . . . 63, 77 Schröder W. . . . . . . . . . . . . 70, 77, 79 Sester M. . . . . . . . . . . . . . . . . . . . . . 50 Staub G. . . . . . . . . . . . . . . . . . . . 82, 94 Steidl J. . . . . . . . . . . . . . . . . . . . . . . . 45

T Thomsen A. . . . . . . . . . . . . . . . . . . . 35 Tiedge M. . . . . . . . . . . . . . . . . . . . . 83

V Vetter L. . . . . . . . . . . . . . . . . . . . 77, 88 Visser U. . . . . . . . . . . . . . . . . . . . . . . 15

W Waldow H.v. . . . . . . . . . . . . . . . . . . . 91 Wegehenkel M. . . . . . . . . . . . . . . . . 59 Wendland F. . . . . . . . . . . . . . . . . . . . 30 Wieland R. . . . . . . . . . . . . . . . . . 45, 59 Wiesel J. . . . . . . . . . . . . . . . . . . 82, 94

Z Zipf A. . . . . . . . . . . . . . . . . . . . . . . . 55



GEOTECHNOLOGIEN Science Reports – Already published

No. 1 Gas Hydrates in the Geosystem – Status Seminar, GEOMAR Research Centre Kiel, 6-7 May 2002, Programme & Abstracts, 151 pages.

No. 2 Information Systems in Earth Management – Kick-Off-Meeting, University of Hannover, 19 February 2003, Projects, 65 pages.

No. 3 Observation of the System Earth from Space – Status Seminar, BLVA Munich, 12-13 June 2003, Programme & Abstracts, 199 pages.






GEOTECHNOLOGIEN Science Report

Information Systems in Earth Management Status Seminar RWTH Aachen University 23-24 March 2004

Programme & Abstracts

The GEOTECHNOLOGIEN programme is funded by the Federal Ministry for Education and Research (BMBF) and the German Research Council (DFG)

No. 4

ISSN: 1619-7399


