
Information Systems in Earth Management

Geoinformation and geoinformation systems (GIS) – as tools to deal with this type of information – play an important role at all levels of public life. Every day a huge amount of geoinformation is created and used in land registration offices, utility companies, environmental and planning offices and elsewhere. For a national economy, geoinformation is a type of infrastructure comparable to the traffic network. Various scientific disciplines investigate spatial patterns and relations, while other disciplines develop concepts and tools for these types of investigations. In Germany a national programme »Information Systems in Earth Management« (Informationssysteme im Erdmanagement) was launched in late 2002 as part of the R&D Programme GEOTECHNOLOGIEN. The goal of the programme is to improve basic knowledge, to develop general tools and methods that improve interoperability, and to foster the application of spatial information systems at different levels. The currently funded projects focus on the following key themes: (i) semantical interoperability and schematic mapping, (ii) semantical and geometrical integration of topographical, soil, and geological data, (iii) rule based derivation of geoinformation, (iv) typologisation of marine and geoscientifical information, (v) investigations and development of mobile geo-services, and (vi) coupling information systems and simulation systems for the evaluation of transport processes. This abstract volume contains the descriptions of the funded projects that have started so far. The internal kick-off meeting was held at the University of Hannover on 19 February 2003 as a get-together of all participants. Upcoming meetings will be open to a broader spectrum of interested visitors.

GEOTECHNOLOGIEN Science Report

Information Systems in Earth Management Kick-Off-Meeting University of Hannover 19 February 2003

Projects

The GEOTECHNOLOGIEN programme is funded by the Federal Ministry for Education and Research (BMBF) and the German Research Council (DFG)

ISSN: 1619-7399

No. 2



Impressum

Editors
Dr. Alexander Rudloff
Dr. Ludwig Stroink

© Koordinierungsbüro GEOTECHNOLOGIEN, Potsdam 2003
ISSN 1619-7399

The editors and the publisher cannot be held responsible for the opinions expressed and the statements made in the articles published; such responsibility rests with the authors.

Die Deutsche Bibliothek – CIP Cataloguing-in-Publication Data
GEOTECHNOLOGIEN; Information Systems in Earth Management, Kick-Off-Meeting University of Hannover 19 February 2003, Projects
Potsdam: Koordinierungsbüro GEOTECHNOLOGIEN, 2003
(GEOTECHNOLOGIEN Science Report No. 2)
ISSN 1619-7399

Distribution
Koordinierungsbüro GEOTECHNOLOGIEN
Telegrafenberg A6
14471 Potsdam, Germany
Phone +49 (0) 331-288 10 71
Fax +49 (0) 331-288 10 77
www.geotechnologien.de
geotech@gfz-potsdam.de

Cover picture: Bundesamt für Kartographie und Geodäsie, 2003


Preface

Geoinformation and geoinformation systems (GIS) – as tools to deal with this type of information – play an important role at all levels of public life. Every day a huge amount of geoinformation is created and used in land registration offices, utility companies, environmental and planning offices and elsewhere. For a national economy, geoinformation is a type of infrastructure comparable to the traffic network. Various scientific disciplines investigate spatial patterns and relations, while other disciplines develop concepts and tools for these types of investigations. In Germany a national programme »Information Systems in Earth Management« (Informationssysteme im Erdmanagement) was launched in late 2002 as part of the R&D Programme GEOTECHNOLOGIEN. The goal of the programme is to improve basic knowledge, to develop general tools and methods that improve interoperability, and to foster the application of spatial information systems at different levels. In an initial phase (2002-2005) a total of nearly € 4 million will be invested by the Federal Ministry of Education and Research (BMBF).

The currently funded projects focus on the following key themes:
(i) »Semantical interoperability and schematic mapping«
(ii) »Semantical and geometrical integration of topographical, soil, and geological data«
(iii) »Rule based derivation of geoinformation«
(iv) »Typologisation of marine and geoscientifical information«
(v) »Investigations and development of mobile geo-services«
(vi) »Coupling information systems and simulation systems for the evaluation of transport processes«

The main objective of the kick-off meeting »Information Systems in Earth Management« was to bring together the scientists and investigators of the funded projects so that they could present their ideas and proposed work plans to each other; several projects are interlinked and can therefore benefit from synergies. At all upcoming meetings, further visitors from Germany, Europe and overseas are welcome to share their interests and results.

Ralf Bill
Alexander Rudloff



Table of Contents

Projects

Semantic Interoperability by Means of Geoservices – Semantic Problems in Three Use Cases and Approaches for Potential Solutions .......... 1 - 16

MAR_GIS – Marine Geo-Information-System for Visualisation and Typology of Marine Geodata .......... 17 - 22

Implementation of the European Water Framework Directive: ISSNEW – Developing an Information and Simulation System to Evaluate Non-Point Nutrient Entry into Water Bodies .......... 23 - 30

Geoservice Groundwater Vulnerability – Development of an Information Infrastructure for the Rule-based Derivation of Geoinformation from Distributed, Heterogeneous Geodata Inventories on Different Scales, with an Example Regarding Groundwater Vulnerability Assessment .......... 31 - 36

Advancement of Geoservices .......... 37 - 50

New Methods for Semantic and Geometric Integration of Geoscientific Data Sets with ATKIS – Applied to Geo-objects from Geology and Soil Science .......... 51 - 62

List of Participants .......... 63

Authors' Index .......... 64


Semantic Interoperability by Means of Geoservices
Semantic Problems in Three Use Cases and Approaches for Potential Solutions

Bernard, Lars (1); Haubrock, Sören (2); Hübner, Sebastian (3); Kuhn, Werner (1); Lessing, Rolf (2); Lutz, Michael (1); Visser, Ubbo (3)

(1) Institute for Geoinformatics (IfGI), Münster, E-Mail: bernard@ / kuhn@ / lutzm@ifgi.uni-muenster.de (2) Delphi InformationsMusterManagement (DELPHI IMM), Potsdam, E-Mail: soeren.haubrock@ / rolf.lessing@delphi-imm.de (3) Center for Computing Technologies (TZI), Bremen, E-Mail: huebner@ / visser@tzi.de

1. Introduction

This paper describes the main goals and the first results of the research project Semantic Interoperability by Means of Geoservices (meanInGS). The project started in October 2002 as part of the research and development programme GEOTECHNOLOGIEN. The three partners working on this project are Delphi InformationsMusterManagement (DELPHI IMM) Potsdam, the Center for Computing Technologies (TZI) Bremen and the Institute for Geoinformatics (IfGI) Münster.

The Problem – Schematic and Semantic Heterogeneity

Geoscientific research projects deliver large, valuable datasets and powerful tools for the accomplishment of existing research tasks. A continued use of these results, especially by institutions that were not involved in the original projects, often proves to be difficult. In order to overcome these problems, syntactic, schematic and semantic interoperability have to be achieved. The problem of syntactic heterogeneity emerged as a result of mostly native data formats and the development of monolithic or proprietary systems. The World Wide Web (WWW) supplies the basic infrastructure for the distributed use and multiple exploitation of data and systems (systems interoperability), while approved geoinformation technology standards developed by the OpenGIS Consortium (OGC) and the International Organisation for Standardisation (ISO) provide the essential basis for syntactic interoperability and for the cataloguing of geoservices and geodata. Emerging geodata infrastructures such as GDI-NRW (Kuhn et al. 2001) give examples of what can be accomplished by this approach and point out which challenging interoperability issues (e.g. GI service chaining) remain. Although the basis for syntactic interoperability exists in many cases, the usability of information resulting from geoscientific research projects for institutions from different information communities (ICs)1 will remain limited, because they are confronted with schematic and semantic heterogeneity. These kinds of heterogeneity are a basic characteristic of all information sources. The use of individual parameters during data collection, modifications of and additions to the nomenclature, different or improved data collection methods and, last but not least, a different sense (a different viewpoint) of the 'world' are causes of this heterogeneity.

1 »An Information Community is a collection of people (a government agency or group of agencies, a profession, a group of researchers in the same discipline, corporate partners cooperating on a project, etc.) who, at least part of the time, share a common digital geographic information language and share common spatial feature definitions. This implies a common world view as well as common abstractions, feature representations, and metadata.« (OGC 1999)


In order to overcome the problems resulting from schematic and semantic heterogeneities in typical geodata infrastructures, the meanInGS project will develop and implement a concept that uses existing and newly created technologies. Building on an existing data infrastructure, intelligent geoservices will be designed and developed to achieve schematic and semantic interoperability in geodata processing. Great importance is attached to the practical usability of the results. The project aims at the following advantages for geoscientific work:
- exchange of results between different geoscientific projects,
- utilisation of legacy geoscientific databases,
- utilisation of the databases for projects outside of the geoscientific context,
- avoidance of secondary data repositories that cannot be updated.
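To make these kinds of heterogeneity concrete, the following sketch (all attribute names and values are invented for illustration, not taken from any of the projects) shows two records that are already syntactically interoperable – both are plain key-value structures – yet schematically and semantically heterogeneous:

```python
# Two hypothetical ICs describe the same gauge observation.
# Schematic heterogeneity: different attribute structure and names.
# Semantic heterogeneity: different units and nomenclature for the same concept.

record_ic_a = {"station": "MD-01", "waterlevel_cm": 512, "ref": "gauge zero"}
record_ic_b = {"site": {"id": "MD-01"}, "stage_m": 5.12, "datum": "Pegelnullpunkt"}

def to_common_view(a, b):
    """Map both records onto one shared schema (a mediator's view)."""
    return (
        {"station_id": a["station"], "water_level_m": a["waterlevel_cm"] / 100.0},
        {"station_id": b["site"]["id"], "water_level_m": b["stage_m"]},
    )

va, vb = to_common_view(record_ic_a, record_ic_b)
assert va == vb  # after mediation the two views agree
```

Syntactic interoperability alone (both records parse) is clearly not enough; the mediation step encodes exactly the schematic and semantic knowledge that the project's geoservices are meant to provide.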

Starting Situation

The basis for the developments within the project is a model for geodata infrastructures that is currently emerging in the national and international context within the framework of OpenGIS and ISO. First implementations exist in the form of so-called »geoportals« and catalogue services. In the following, the state-of-the-art model is described briefly. A number of ICs have gathered sets of basic and domain-specific data for their own use. Furthermore, each IC has implemented a system to offer its data to other ICs using multiple components (see Figure 1). Apart from the data itself, a metadata catalogue is necessary, as well as a thesaurus and a gazetteer. In order to enable users to access the data from »outside« the system, the functionalities of selecting, transforming and map serving have to be realised. One major problem is the fact that not all ICs have all of these components. Even if they do, the entries in the metadata catalogue will often not be matchable among different ICs.

Figure 1: Information flow in current implementations of a geoportal.
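Why catalogue entries from different ICs are often not directly matchable, and what a mediating mapping might look like, can be sketched as follows. The catalogue fields and mapping tables are hypothetical; in the project, such mapping knowledge would come from thesauri and ontologies rather than hard-coded tables:

```python
# Hypothetical catalogue entries from two information communities.
catalogue_ic1 = [{"theme": "hydrography", "param": "W", "unit": "cm"}]
catalogue_ic2 = [{"topic": "inland waters", "observable": "water stage", "uom": "m"}]

# Cross-catalogue mapping a mediating geoportal would need: one table for
# attribute (schema) names, one for the attribute values (nomenclature).
FIELD_MAP = {"topic": "theme", "observable": "param", "uom": "unit"}
VALUE_MAP = {"inland waters": "hydrography", "water stage": "W"}

def harmonise(entry):
    """Rewrite one IC's catalogue entry into the other IC's vocabulary."""
    return {FIELD_MAP.get(k, k): VALUE_MAP.get(v, v) for k, v in entry.items()}

merged = catalogue_ic1 + [harmonise(e) for e in catalogue_ic2]
themes = {e["theme"] for e in merged}
# after harmonisation both entries fall under the same theme
```

Without the two mapping tables, a query phrased in one IC's vocabulary simply returns nothing from the other IC's catalogue, even though the data exist.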



Offering metadata and data as described in Figure 1 is already possible. However, in order to link multiple implementations to each other, intensive coordination efforts are necessary. In order to make it possible to compare and evaluate results from different ICs, some major semantic problems have to be solved.

Structure

In the following section the aims of the three project partners, Delphi InformationsMusterManagement (DELPHI IMM) Potsdam, the Institute for Geoinformatics (IfGI) Münster and the Center for Computing Technologies (TZI) Bremen, will be described. The main part of the paper will then outline three use cases that have been developed. These will serve as a source for identifying and classifying practical problems caused by schematic and semantic heterogeneity and will provide the framework for the development and implementation of methods and technologies to overcome these problems.

2. Aims of the Project Partners

Delphi IMM

It is the aim of Delphi IMM to extend its technology MSPIN (software tools for the mediation of spatial information) with semantic functionality to facilitate searching in catalogues and the rendering of geodata. IMM focuses on the procedures of mapping geodata as well as the definition of user-specific viewpoints. The latter is a special goal of IMM: by implementing this, a customer could phrase his queries for geodata in such a way that data providers can carry out a direct mapping of their data to that query. IMM emphasises the integration of remotely sensed data into a service chain. The classification of this data has to be implemented as an automatic service. Through an iterative approach in the following use cases, issues of semantic interoperability are addressed step by step. The use cases described below serve as a guideline with respect to the investigation of the available data sources and the query formulations. An intensive information exchange with the two project partners is intended.

IfGI

The goal of the Institute for Geoinformatics in the meanInGS project is to specify services for semantic translation and to test them within geodata infrastructures. The methods and techniques will be developed and tested in the context of well-defined use cases from the domain of geosciences. This will ensure that the application is pragmatic and that the results are useful for the domain of geosciences. The first aim of the IfGI project work is to identify, analyse and classify problems in the use cases caused by semantic heterogeneity. These problems will form the basis for the development of services for semantic translation between catalogues, and between user requirements (or questions) and the services registered in these catalogues. This will require the extension of existing metadata models and catalogue services to include information on services (rather than only on data). This extension will provide the information that is necessary for an algebraic model of the semantics of user data. Finally, for testing the functionality, existing Web Map Services (WMS), Web Feature Services (WFS) and Web Catalogue Services (WCS) might have to be extended.

TZI

The goal of the Center for Computing Technologies (TZI) is to apply methods and technologies developed in a number of recent research and PhD projects to a practical use case that is rooted in a functional geodata infrastructure. Thematically, these methods and technologies are focussed on intelligent information retrieval and semantic data integration of geospatial data. Technically, they are centred around logic reasoning based on qualitative conceptual, spatial and temporal models.

Within the meanInGS project, the TZI will analyse the chosen use case regarding the need for semantic data integration and intelligent information retrieval. The methods and tools mentioned above will then be customised to fit within the chosen geodata infrastructure and to fulfil the relevant tasks identified in the use case. This includes the implementation of conceptual, spatial and temporal ontologies specific to the use case. All components will be integrated in the geodata infrastructure, tested and, if necessary, modified and improved to meet the requirements of the use case.

3. Use Case Descriptions

Use Case I – Detecting Hazard Areas during Flooding Events

This use case is about visualising and simulating the water levels in a river catchment as well as detecting and visualising potential hazard areas due to flooding. Information systems in the context of civil protection have to use comprehensive and up-to-date datasets covering the whole catchment area. Current applications in this field, however, are mostly very specific and static approaches. Furthermore, such systems are usually established only after an incident has happened, due to the problems in quickly gathering the appropriate data. Remote sensing components have not yet been integrated in such systems in an operational manner. A fundamental difference between this approach and established ways is the implementation of the following aspects:
- integration of heterogeneous datasets from different information communities,
- use of nearly real-time data, derived from quasi-continuous measurements and remotely sensed data,
- integration of services processing remotely sensed data,
- dynamic selection and coupling of multiple models for simulation and evaluation.

The aim of this use case is to evaluate how a service can be used as an interface for gathering spatially distributed and heterogeneous data and providing this data to a user-selected model. It is, however, not the goal of the following scenarios to create a complex model that could produce realistic forecasts in its simulations.

In the context of this project it is rather important to work on the integration of heterogeneous datasets, to implement new automated services, to establish a generic service chain and to overcome further semantic problems, e.g. the issue of mapping between different nomenclatures. In this way, the results of other research projects can be integrated. An intensive information exchange between the partners is intended.

In the following scenarios, an infrastructure is proposed for the catchment of the Elbe river. This stream covers multiple regions, resulting in different responsibilities for the data. Currently, great effort is being put into several research projects targeting an integrated information system on the Elbe river (Bundesanstalt für Gewässerkunde 2000). The severe flooding events in summer 2002 made the benefit of an effective and fast-working information system clear (Ministerium des Innern des Landes Brandenburg 2001).

Depending on the stage in the implementation process, two different user groups exist. In the first and second scenario, a basic description of the current situation in the river catchment area has to be visualised. Apart from the general public using the Internet, the potential users include experts in the field of river management. These experts have a particular interest in rapidly gathering the most up-to-date datasets to get an overview of the current situation in the catchment.



In the third and fourth scenario, generic components are used in the service chains. This enables scientists to apply their own model with the most up-to-date and realistic datasets. Furthermore, different models can be compared to each other by dynamically choosing the appropriate model at runtime. In the following, the individual components and interactions of a dynamic approach are sketched in four scenarios. Starting with a rather static approach, the generic aspect increases in the second and especially in the third and fourth scenarios, leading to new semantic difficulties.

Scenario 1: Visualising the Hot Spots

In this first step, the objective is to visualise the water levels and potential hazard areas due to flooding in the catchment. The necessary geodata comprise the river network itself (as a GML feature collection (OGC 2002b)) and the current water level measurements (as spatially anchored datasets or features). The major task in this approach is to collect the distributed datasets and assign them to certain segments in the river network (spatial join). The hydrological model in this scenario is static. Thus, only the spatial extent selected by the user and the current time are dynamic parameters in the process chain. To receive the user requirements, an input form is provided by the client service (step 1). The user can select a certain area of interest and choose between different periods of time (e.g. the current situation or several past situations). With this approach the syntactic frame of the request can be standardised. Since the user can only choose between given options, no semantic problems occur when the query builder compiles the OGC-conformant query files (step 4). Only one specific hydrological model is used, so the datasets needed for its simulations can be statically implemented in the system.

Figure 2: Flow of information in the first scenario.
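The spatial join mentioned above – assigning distributed water level measurements to segments of the river network – might, in a strongly simplified planar sketch, look like the following. The straight-line segments, gauge positions and levels are invented, and real river geometries would of course be polylines in a projected coordinate system:

```python
import math

# Hypothetical river segments as (id, start point, end point) in planar coordinates.
segments = [("seg-1", (0.0, 0.0), (10.0, 0.0)),
            ("seg-2", (10.0, 0.0), (10.0, 10.0))]

# Hypothetical gauge measurements: (gauge id, position, water level in m).
gauges = [("g-1", (4.0, 1.0), 3.2), ("g-2", (11.0, 7.0), 2.8)]

def dist_point_segment(p, a, b):
    """Distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def spatial_join(gauges, segments):
    """Assign each gauge measurement to its nearest river segment."""
    joined = {}
    for gid, pos, level in gauges:
        seg_id = min(segments, key=lambda s: dist_point_segment(pos, s[1], s[2]))[0]
        joined.setdefault(seg_id, []).append((gid, level))
    return joined

joined = spatial_join(gauges, segments)
# → {"seg-1": [("g-1", 3.2)], "seg-2": [("g-2", 2.8)]}
```

The semantic difficulties discussed below all surface before this purely geometric step: the gauge records first have to be collected, harmonised and deduplicated across the participating ICs.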


In step 5 the query is passed to the geoportal, which serves as a hub to the individual information communities providing their metadata and data repositories. Because several institutions are responsible for the datasets of a large river network, the data must be compiled from several databases and synchronised with each other. In this scenario the query for up-to-date water levels must be sent to the public authorities of at least four federal states and at the national level. Only the union of these datasets provides an optimum coverage of the whole area. It cannot be expected that the datasets are compatible with each other: in two catalogues some attributes can describe the same content while being labelled differently, and vice versa. Therefore, the syntactically conformant query has to be mapped to the metadata of the catalogues. This mapping can take place on the client side (i.e. the geoportal) or on each of the server sides (metadata catalogue system). The technology to perform this mapping has to be developed within the scope of this project. In the next step, the different databases can be queried using the specific metadata descriptions that have been mapped to the base query in the step before. Once the data have been collected in the geoportal, another service (the semantic data consolidator) will have to compile them into a single homogeneous dataset. Especially the mapping of relevant attributes to each other (in this case the measurements) is an issue to be solved. Problems can occur if the metric units differ or the temporal context of the measurements is not specified equally. Some institutions might hold data about the same gauge, so duplicates have to be filtered out. In the next step, the modelling service has to inter- and extrapolate the discrete water level measurements over the whole stream section that has to be evaluated. Finally, the resulting estimation will be composed into a map indicating to the user which areas are potentially at high risk. The assessment whether or not a river segment is at high risk will be based on very simple reasoning provided by the model in this scenario.

The scenario described above is rather static, but already faces some semantic problems. In the next scenario, this concept is extended by another service, leading to new aspects of semantic interoperability to be solved.

Scenario 2: Integration of Remotely Sensed Data

For a more realistic estimate of the situation in the river catchment, the datasets received from some tens of discrete gauge measurements unevenly distributed along the river are insufficient. In order to get the »overall picture«, remote sensing data can make a crucial contribution. The methods used in remote sensing provide a very fast and objective assessment of the situation in a large area of interest (in this case the whole river catchment). Thus, the integration of remotely sensed data can also show the flooded areas in the river catchment. On the other hand, the measurement data are still necessary, since remotely sensed data only provide a view in two dimensions (in the case of optical sensors), i.e. the areas covered by water can be extracted, but not the water level. For this reason, a new service has to be established in the chain. The raw image data cannot help the normal user to find the regions that have been flooded. To interpret the data correctly, classification experience is required. Therefore a classification service has to fill the gap between data collection and interpretation of the data by the user. This classification is intended to be fully automatic in order to provide a service at runtime. As a result, the classified flooded areas can be visualised together with the water levels in a map.
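A fully automatic classification service could, in the crudest conceivable approximation, proceed along the following lines. This is a single-band threshold learned from training pixels; all image values are invented, and the threshold rule stands in for the much richer classification rule network and knowledge base used in the actual service chain:

```python
# Hypothetical single-band image: reflectance values per pixel [row][col].
image = [[0.9, 0.8, 0.2, 0.1],
         [0.9, 0.3, 0.2, 0.1],
         [0.7, 0.6, 0.5, 0.2]]

# Training pixels: positions known to overlap the extracted river segments,
# i.e. pixels that are certainly water.
water_training = [(0, 3), (1, 3), (0, 2)]

def calibrate(image, training):
    """Learn a reflectance threshold from pixels known to be water."""
    return max(image[r][c] for r, c in training)  # water is darker in this toy band

def classify(image, threshold):
    """Return the set of pixel positions classified as flooded."""
    return {(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row) if v <= threshold}

threshold = calibrate(image, water_training)
flooded = classify(image, threshold)
```

The key idea carried over from the scenario is that the training data are not supplied by a human interpreter but derived automatically from the river segments of the first scenario, which is what makes a runtime service possible.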



Figure 3: Integrating the classification service.

After steps 1 to 11 from the first scenario have been passed through, the extracted river segments can be used as input for the classification service. In step 13 the image data are requested for the area of interest. Those parts of the image that overlap the river segments can then be used as training data for calibrating the classification rule network. Additional information on how to construct the classification rules can be derived from a knowledge base. In a last step, the classification itself takes place and creates a feature collection of the flooded areas (see Figure 3).

Scenario 3: Dynamic Modelling and Prediction

In this scenario the hydrological model is exchangeable: the user selects the model of choice from a list. More sophisticated models need many data inputs, especially if they aim to make forecasts. The information about the datasets needed for the simulations therefore depends on the model that will be used and has to be acquired at runtime. The hydrological model needs to compose formalised requirements in terms of the data needed for the simulation. The syntax of these compilations can be standardised using a certain XML schema, but the meaning of the required attributes remains vague. They have to be mapped to the available datasets in the geoportal in the next step. In this scenario a simple model will be built which uses additional datasets, e.g. soil types with different transmissibility rates for estimating the runoff in an area. These additional datasets lead to more semantic problems: either the classification of derived attributes already exists in the different datasets and has to be mapped, or it has to be classified by another model using empirical data from a knowledge repository. By choosing the model at runtime, the service chain has to be established dynamically. The appropriate workflow is similar to the one described in the road blockage estimation use case (»Ad-hoc Service Chaining«, see below).
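The runtime matching between a model's formalised data requirements and the datasets registered in the geoportal might be sketched as follows. The requirement vocabulary, dataset names and synonym table are all invented; in the project this synonym knowledge would come from ontologies rather than a static dictionary:

```python
# A hypothetical model declares the data it needs for a simulation run.
model_requirements = [
    {"parameter": "water level", "unit": "m"},
    {"parameter": "soil transmissibility", "unit": "m/s"},
]

# Datasets advertised by the geoportal, using a different nomenclature.
portal_datasets = [
    {"name": "pegel_2002", "parameter": "gauge height", "unit": "cm"},
    {"name": "boden_kf", "parameter": "hydraulic conductivity", "unit": "m/s"},
]

# Minimal synonym knowledge bridging the two vocabularies.
SYNONYMS = {"gauge height": "water level",
            "hydraulic conductivity": "soil transmissibility"}

def match(requirements, datasets):
    """Map each requirement to a portal dataset, flagging needed unit conversions."""
    plan = []
    for req in requirements:
        for ds in datasets:
            if SYNONYMS.get(ds["parameter"], ds["parameter"]) == req["parameter"]:
                plan.append((req["parameter"], ds["name"],
                             ds["unit"] != req["unit"]))  # conversion needed?
    return plan

plan = match(model_requirements, portal_datasets)
# → [("water level", "pegel_2002", True),
#    ("soil transmissibility", "boden_kf", False)]
```

The syntactic side (the request format) is easy to standardise; the hard part the scenario points to is exactly the content of the synonym table, which cannot be fixed in advance once the model is chosen at runtime.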


Scenario 4: Coupling Multiple Models

The previous scenarios comprise the hydrological modelling of the river network. In the next step the advantages of a standardised generic service chain structure are exploited by adding a further modelling component, e.g. a system for the estimation of chemical exposure rates in the aquatic environment. For this scenario an existing model will be used and adapted to the required interface format. One option is to use and adapt the Geography Referenced Regional Exposure Assessment Tool for European Rivers (GREAT-ER) (Matthies et al. 2001).

Problems Caused by Semantic Heterogeneity

In order to establish a flexible, generic system for time-critical disaster management, several semantic problems have to be solved. The data to be processed in this use case originate from multiple information communities, resulting in heterogeneous formats, different spatial and temporal resolutions as well as other types of semantic heterogeneity (e.g. naming conflicts). Additional services are required and have to be implemented; one new service is the automatic extraction of the flooded areas from the raw image data. Thus, the major problems in terms of semantic interoperability refer to semantic metadata interpretation (e.g. do the datasets representing water level measurements belong to the same variable?), semantic data interpretation (mapping between different nomenclatures, e.g. porosity of soil or vegetation classes) and data fusion (pre-processing the data for further utilisation). Furthermore, the integration of remotely sensed data by implementing a new service for automatic classification is associated with tremendous semantic problems.

Figure 4: Workflow with pluggable proceeding models.

Use Case II – Estimating Road Blockage after Storms

This use case is set in the area of disaster management and mitigation. Storm events may cause severe road blockage by windfall timber, especially on roads that run through forest areas. In December 1999 the winter storm »Lothar« caused an accumulation of about 30 million solid cubic metres of windfall timber in Baden-Wuerttemberg. For a few days some villages were cut off from the rest of the world because of road blockage, and it remained extremely dangerous to stay in the affected areas because of the devastated state of the forest (Gemeindetag Baden-Württemberg 2000). The main actor in the use case is a person in



the agency responsible for ensuring road safety. In Baden-Wuerttemberg this is the Forestry Directorate. After a heavy storm it coordinates the assignment of the Governmental Disaster Relief Organisation (Technisches Hilfswerk, THW) and of the Federal Armed Forces in the disaster area. It has to keep track of where and how much help is needed by the local authorities to clear the road blockages as quickly as possible. In order to coordinate the clearing operations effectively, a rough estimation of the roads most likely to be affected by fallen timber is required.

GIS Support

The estimation of roads likely to be affected by the storm can be supported through GIS analysis. The analysis is based on the assumption that forest stands with overaged trees are most strongly affected by storms. The age at which a tree can be called overaged depends on its species. First, a selection of forest areas that are potentially at risk from windthrow is made, based on the information about species and age. Secondly, the result is intersected with road data in order to identify the roads that run through stands with overaged trees. In order to perform this analysis the user needs access to data on the local street network as well as forest data containing detailed information about tree species and age. If this information is not available, the analysis cannot be accomplished.

GI Web Service Support

The application of GI web services providing and displaying the information could instead help to achieve a more flexible flow of information. GI web services can provide access to up-to-date data as well as the flexibility to extract the required information from several data resources by combining them. In the next section the state-of-the-art scenario is described, in which generic Web Feature Services (WFS) and Web Map Services supporting Styled Layer Descriptor (WMS/SLD) are statically chained to form a complex service for estimating road blockage after storms. This scenario was realised in the recently finished GDI NRW Testbed II, the second testbed of the Geodata Infrastructure Initiative of North Rhine-Westphalia (Bernard 2002). Starting from this scenario, two future scenarios are developed describing how (1) remote sensing data could be incorporated to gain additional information and (2) a more flexible way of service chaining can be achieved by approaching the problem of semantic interoperability.

Scenario 1: Chaining Distributed Web Feature and Web Map Services

In the context of the GDI NRW Testbed II a »road blockage« service for the use case described above was implemented. This section describes the existing implementation and sketches how including remote sensing data could extend it.

Scenario 1.1: Current Implementation

The »road blockage« service (http://xtra.interactive-instruments.de/demo/demo-wfs.html) lets the user select a lower age limit for trees, a certain tree species on which to constrain the query and a maximum number of forest features to be returned. The service returns a map of the road network against a topographic map, with those roads highlighted for which a blockage is most likely. According to (ISO/TC-211 & OGC 2002) the »road blockage« service represents an aggregate service (opaque chaining). In a black-box manner it accesses a number of OGC-conformant WFS (OGC 2002b) and WMS/SLD (OGC 2002c; 2002a) provided by different members of the GDI NRW:
- a WFS hosted by the Institute for Geoinformatics in Münster serving forest features from a database provided by the North Rhine-Westphalian Department for Forestry,
- a WFS hosted by interactive instruments


(http://www.interactive-instruments.de/) serving road features from a database containing the North Rhine-Westphalian road network,
- a WMS hosted by the North Rhine-Westphalian agency for data processing and statistics (LDS, http://www.lds.nrw.de/) providing topographic maps, and
- a WMS hosted by interactive instruments for displaying the above-mentioned features in a map.
The data and services are annotated according to the current International Standards for Geographic Information – Metadata (ISO/TC-211 2001) and for Geographic Information – Services (ISO/TC-211 & OGC 2002), respectively. The implemented data model conforms to the XML schemas of the OpenGIS Geography Markup Language Implementation Specification (GML 2.1.1).

Scenario 1.2: Including Remotely Sensed Data
The first query in the chain described in the previous section looks for forest areas that are

potentially at risk from windthrow, i.e. that contain trees of a specified species and age. The database on forests of the Forestry Department offers this information, but only for forests that are state property. It does not, however, contain any information on privately owned forest areas. This is a fundamental problem with geographic data, because the content (i.e. the attributes) is intimately tied to the application context or discipline for which the data was collected. In order to get a fully satisfactory response to a request (i.e. one covering privately and publicly owned forest stands) it is desirable to have access to additional data sources that contain information about forest areas. Thus, it will eventually be possible to dynamically choose the most suitable source of information or to combine several sources to answer the given question. In the »road blockage« use case the dilemma that precise information is only available for publicly owned forests could be solved with the help of remote sensing data. In this extended scenario at least three additional services are required:
- a service providing the remote sensing data

Figure 5: Flow of information in the state-of-the-art scenario.



for the given area,
- a service for identifying forest areas in remote sensing data, and
- a service for estimating areas with overaged trees from remote sensing data.
The information on age and tree species that is available for the forest areas that are state property (provided through the Forestry Department’s WFS) could be used as training data for the classification and estimation services. However, even with this training data an automatic classification for identifying areas with overaged trees from remote sensing data is a very difficult task. Therefore the estimation service might require a »human in the loop« for controlling the classification.

Limitations
As the services used in the chain were not developed specifically for the service chain described, this scenario illustrates the benefits of interoperability standards; at the same time it represents the state of the art in the area of GI service chaining. However, the scenario also has a number of limitations. In order to be able to implement the client

application, the provider had to have knowledge of the services’ existence, their capabilities and their application schemas. The client application only works with these specific services and application schemas. Furthermore, it cannot be reused for the execution of different chains, as its workflow management is tied to this specific chain. These limitations are addressed in a second, more flexible scenario, which is presented in the following section.
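The forest-feature query at the start of the chain can be sketched as an OGC Filter-encoded WFS GetFeature request. The endpoint, feature type name (`forest:ForestStand`) and property names (`treeSpecies`, `standAge`) below are hypothetical — the real application schemas of the Testbed II services are not reproduced here:

```python
# Sketch of the kind of WFS 1.0.0 GetFeature request the »road blockage«
# aggregate service could send to the forest WFS. Feature type and
# property names are invented for illustration.

def build_getfeature_request(species: str, min_age: int, max_features: int) -> str:
    """Build a GetFeature request constraining forest features by
    tree species and a lower age limit (OGC Filter encoding)."""
    return f"""<wfs:GetFeature service="WFS" version="1.0.0" maxFeatures="{max_features}"
  xmlns:wfs="http://www.opengis.net/wfs"
  xmlns:ogc="http://www.opengis.net/ogc">
  <wfs:Query typeName="forest:ForestStand">
    <ogc:Filter>
      <ogc:And>
        <ogc:PropertyIsEqualTo>
          <ogc:PropertyName>treeSpecies</ogc:PropertyName>
          <ogc:Literal>{species}</ogc:Literal>
        </ogc:PropertyIsEqualTo>
        <ogc:PropertyIsGreaterThanOrEqualTo>
          <ogc:PropertyName>standAge</ogc:PropertyName>
          <ogc:Literal>{min_age}</ogc:Literal>
        </ogc:PropertyIsGreaterThanOrEqualTo>
      </ogc:And>
    </ogc:Filter>
  </wfs:Query>
</wfs:GetFeature>"""

request = build_getfeature_request("spruce", 80, 100)
print(request)
```

In the opaque chain, the aggregate service would POST such a document to the forest WFS and intersect the returned GML features with the road network before rendering the result via the WMS/SLD.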

Figure 6: Flow of information in the »remote sensing« extension to the state-of-the-art scenario.

Scenario 2 – Ad-hoc Service Chaining
The second scenario describes a workflow-managed (translucent) service chain (ISO/TC-211 & OGC 2002). While the ISO/TC 211 standard only states that in such a scenario a predefined chain is selected by the user, we further assume that the chain definition does not exist at the time the user poses his question, but that it is assembled in an ad-hoc fashion by a workflow (composition) service. Accordingly, the user should be able to specify his question in a generic user interface, provided by appropriate human interaction services. Registry services should help the user to further specify his question by providing (semantically rich) information on available services. Workflow services will then be responsible for composing and executing a chain that answers the question such that the semantics of the answer match the semantics specified by the user. This includes the subtasks of translating the user’s question into formal requirements, generating a solution strategy to solve a complex problem with several smaller tasks, matching the requirements to the capabilities of the available services, and offering some quality measurements to evaluate the fitness for use of the service chain’s results. The flow of information is depicted in Figure 7. First, the user has to enter his query. In order to assist the user in this task, this will have to be a highly interactive and iterative rather than a linear process (1a-d). To this end, capability descriptions of available services are matched against the user requirements. That will require information on the available services and data from the registries as well as formalized domain knowledge. The query entered by

the user is then translated into a workflow describing a service chain (2). Queries against the components of the chain also have to be formulated in appropriate query languages (3). The execution of the actual service chain (as described in the first scenario) is controlled by the workflow management service.

Problems Caused by Semantic Heterogeneity
The forestry data as well as the road data that is to be processed in this use case can originate in multiple information communities, resulting in heterogeneous data models. The main focus in this scenario will be to integrate information from different sources (e.g. Forestry Directorate and remote sensing data) to derive new information (e.g. the likelihood of storm damage for roads) and to automate this process. Thus, the focus is on semantic metadata interpretation (e.g. do the datasets representing forests refer to the same kinds of forests?), semantic data interpretation (mapping between different nomenclatures, e.g. forest vs.

Figure 7: Flow of information in the »remote sensing« extension to the state-of-the-art scenario.



woodland), and matching of service capabilities to human requirements (for service discovery) and ultimately to service requirements (for automatic service composition).

Use Case III: European Water Framework Directive
This use case deals with the semantic problems that will arise from the implementation of the Water Framework Directive (WFD) of the European Union (European Commission 2000) of December 22nd, 2000. Within three years this directive will have to be implemented into national law. The WFD aims at bringing all surface water bodies and the ground water to at least a »good state« by the year 2015. The definition of contiguous River Basin Districts (RBD) is intended to prevent administrative and political borders from obstructing water protection, and encourages an integrated view of the rivers and their catchment areas. The foundation for all decisions to be made and tasks to be carried out in the scope of the WFD is an extensive river basin management (Vogel 2002), which is based on the repeated monitoring of a variety of biotic and abiotic parameters. A combination of these criteria provides a possible rating between »bad« and »very good« for each river basin. However, the exact specifications of these parameters are still in progress. The survey of the required data is not a one-off task but implies permanent monitoring of the river state and results in the creation of regular reports for the European Commission. Additionally, the WFD attaches great importance to providing the collected information to the interested citizens of the countries involved. The implementation of the WFD results in the necessity for governmental organisations at all levels (from local authorities, local environmental agencies and water federations, to federal


environmental ministries, national environmental agencies, and the responsible institutions in the EU) to exchange and aggregate substantial amounts of data. Furthermore, it is reasonable to provide the collected data for scientific purposes and third parties. It is also important to consider the demands of informing the public, since transparency can only be guaranteed by ensuring simple access to the information. Thus, the citizens of the countries involved form a user group with its own specific characteristics. Due to the necessity for agencies and organisations to co-operate across all administrative and political borders, manifold semantic problems occur on several levels. On the one hand, the data are of a very heterogeneous nature, because they have been gathered by diverse institutions with varying methods and therefore exist in different formats, temporal resolutions and with different meanings. These data have been collected over years and should be made available for future needs. However, no exact specifications of the methods for collecting the data have been formulated to date, making further use of the data difficult. Solving these problems by providing innovative approaches is the goal of this project. On the other hand, there are considerable demands on the interoperability of metadata. Up to now, all participating institutions have used their own classifications and ratings to describe water bodies, their basins and protected areas, and to indicate the measured water quality. In the same way, the consequences of human influences (settlements, sources of pollution, etc.) have to be categorised into object classification schemes. On this level, too, extensive datasets have been built up over the years, which should also be made available for further use. Although national standards for metadata specifications will be established in the future, the need for a semantic translation


of the describing data between the European countries will continue to exist. In the current stage of the implementation of the WFD, the responsible local and regional authorities are creating an inventory of the available natural resources. This phase is due to last until 2004. The activities related to the creation of an inventory include the cataloguing of legacy data as well as intensive acquisition of new data. As the standardisation of a number of methods and criteria for key parameters is not yet completed, these activities create a need for data integration in the future. One example is the classification of river types, for which several competing classification schemes (using fish populations, using geomorphologic parameters, etc.) exist. Another example is the evaluation of water quality using biological parameters. Here, some indicator organisms are classified, but the respective analysis methods are not. The following use case scenarios will explore the specific needs of translating between different classification schemes, providing these tasks as services, aggregating several services into service chains, and using remote sensing methods to support the monitoring process.

Scenario 1: Semantic Translation of the River Unit Classification Criteria
In the framework of the WFD each river basin has to be subdivided into several smaller river basin units, which will be classified independently of the other parts. This subdivision is oriented towards meaningful biological or geomorphologic factors, including the prevalent fish population (Böcker 2002) or the sediment type of the river. This is done mainly because it is not useful to compare a region near the source of a river to a region near the estuary. Since several classification schemes are already in use and will continue to be used in the participating countries, translation between these schemes has to be carried out. Therefore this scenario will lead to a semantic translation service that is based on ontologies describing the classification schemes used. With the help

of the service, two tasks can be carried out:
- A given river unit can be (re-)classified using a different classification scheme.
- A given classification can be translated into a different classification scheme.
Thereby the interoperability of organisations using various schemes will be ensured.
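The two translation tasks above can be sketched as an ontology-mediated mapping. The scheme names, class labels and the shared concepts below are invented for illustration; the actual service would reason over formal ontologies rather than hand-written dictionaries:

```python
# Minimal sketch of ontology-mediated translation between two river-unit
# classification schemes. Each scheme maps its own class labels to shared
# ontology concepts; translation goes via the shared concept.
# All labels and concepts are hypothetical.

SCHEME_A = {  # e.g. a fish-population based scheme
    "trout_region": "upper_course",
    "grayling_region": "middle_course",
    "bream_region": "lower_course",
}
SCHEME_B = {  # e.g. a geomorphologic scheme
    "coarse_gravel_bed": "upper_course",
    "fine_gravel_bed": "middle_course",
    "sand_mud_bed": "lower_course",
}

def translate(label: str, source: dict, target: dict) -> list:
    """Translate a class label from the source scheme into all
    target-scheme labels that denote the same shared concept."""
    concept = source[label]
    return [t for t, c in target.items() if c == concept]

print(translate("trout_region", SCHEME_A, SCHEME_B))  # -> ['coarse_gravel_bed']
```

A river unit classified in scheme A can thus be re-expressed in scheme B (and vice versa), which is exactly the interoperability guarantee the scenario aims at.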

Scenario 2: Search for the Reference River Unit
Based on the service developed in the first scenario, a very important task can be carried out more easily: the search for the reference river unit. The rating between »bad« and »very good« is not an absolute but a relative assessment, which is based on a comparison to another river or river unit. This river with a given rating has to be as similar as possible, i.e. it should share the same biological and geomorphologic characteristics. It can run, however – maybe partly – in a country that uses a different classification scheme. This leads to the need to translate a particular classification from one scheme to another, as described in the scenario above. Some additional information is needed to decide whether a selected river unit is suitable as a reference, including parameters like altitude and climate. The result of this scenario is intended to be a map of Europe with all eligible rivers or river units marked as potential candidates. Using the technology of OGC-conformant services, a service chain has to be implemented and extended by the service of searching for the reference unit. Two different sub-scenarios are conceivable. In the first one, a classification is configured and arranged based on the whole range of known schemes and parameters. In the second, an already classified river or river unit acts as the starting point for searching for all equivalents. In both sub-scenarios the query can be modified in several steps



including the required degree of accordance, in order to control the number of resulting hits.

Scenario 3: Semantic Translation of the Water Quality Criteria
In contrast to the factors that are used to divide a river into several river units, the parameters that have to be measured to determine the water quality are already specified in the WFD. Although the parameters themselves are fixed, the methods to perform these measurements are not. For example, when counting elements of plankton, organisms from different levels of the biological taxonomy can be selected as indicators. Since the institutes performing the examinations have used and will use different organisms, a translation service is needed. This scenario is similar to Scenario 1 and will produce, in a first step, a single service. In the next step, this service will be inserted into the existing service chain to allow a transparent mapping between the quality criteria schemes used.

Scenario 4: Remotely Sensed Data for Monitoring Tasks
Another very important task prescribed by the WFD is the permanent monitoring of the water quality and the anthropogenic changes of the water bodies. Especially sources of pollution (point sources or diffuse sources) need to be controlled intensively in order to improve the basic conditions of the rivers. This is an expensive task, both in terms of time and money. The usage of remotely sensed data for monitoring campaigns will be evaluated in this scenario. In a first step, the applicability of normal optical data will be examined, which includes the detection of pollutants like oil and chemical substances. In a later step, infrared and hyperspectral data could be used to detect pollution that is not visible in the normal spectrum. As with the other scenarios above, these tasks will be provided as services, with the opportunity to include them into static or ad-hoc service chains.

Problems Caused by Semantic Heterogeneity
The Water Framework Directive of the European Union, which has to be implemented within three years by all member states, causes a variety of semantic problems. In order to establish interoperability between both governmental and non-governmental organisations across all administrative and political borders and on several levels, the syntactic and semantic heterogeneity arising from the different classification schemes used by the participants has to be overcome. Therefore the WFD is not only an ideal basis for further research in semantic translation and for meaningful use cases, but also a task for which a concrete solution is required within a restricted period of time.
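The reference-unit search of Scenario 2 can be sketched as a similarity ranking over candidate river units. All unit names, parameter values and scoring weights below are invented for illustration; a real service would first translate classifications between schemes before comparing:

```python
# Sketch of the reference-river-unit search: rank candidate units by how
# well they match a target unit's classification plus the additional
# parameters (altitude, climate) mentioned in the text.
# All data and weights are hypothetical.

TARGET = {"class": "upper_course", "altitude_m": 450, "climate": "temperate"}

CANDIDATES = [
    {"name": "unit_A", "class": "upper_course", "altitude_m": 480, "climate": "temperate"},
    {"name": "unit_B", "class": "lower_course", "altitude_m": 30,  "climate": "temperate"},
    {"name": "unit_C", "class": "upper_course", "altitude_m": 900, "climate": "alpine"},
]

def similarity(target: dict, candidate: dict) -> float:
    """Score a candidate: classification match dominates; altitude and
    climate refine the ranking."""
    score = 0.0
    if candidate["class"] == target["class"]:
        score += 1.0
    if candidate["climate"] == target["climate"]:
        score += 0.5
    # penalise altitude difference, scaled to stay within (0, 0.5]
    score += 0.5 / (1.0 + abs(candidate["altitude_m"] - target["altitude_m"]) / 100.0)
    return score

ranked = sorted(CANDIDATES, key=lambda c: similarity(TARGET, c), reverse=True)
print([c["name"] for c in ranked])  # -> ['unit_A', 'unit_C', 'unit_B']
```

The top-ranked units would be the candidates marked on the European map; relaxing the required degree of accordance corresponds to accepting lower similarity scores.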

4. Conclusions
Several semantic problems that arise in the framework of geodata infrastructures have been pointed out by way of example. In order to solve these problems, the next step for the project partners is to refine the structure of the use cases and to work out the semantic problems more precisely. The services have to be specified and matched against the current specifications of the OGC and ISO. In order to apply their concepts, the project partners are going to establish an internal geodata infrastructure that serves as an environment for testing the scenarios and implementations as well as for providing their services.

5. References
Bernard, L. (2002): Experiences from an implementation Testbed to set up a national SDI, in: Ruiz, M., M. Gould & J. Ramon (Ed.): 5th AGILE Conference on Geographic Information Science 2002, pp 315-321.
Böcker, K. et al. (2002): Zwischenergebnisse der Erarbeitung des Flussgebietsplans des Wupperverbandes. In: 5. Symposium Flussgebietsmanagement beim Wupperverband - Regionales Wasserwirtschaftsforum, pp 27-32. Wupperverband, Wuppertal.
Bundesanstalt für Gewässerkunde (2000): Internationaler Workshop im Rahmen der Machbarkeitsstudie für ein computergestütztes Entscheidungsunterstützungssystem (Decision Support System, DSS) für die Elbe, URL: http://www.bafg.de/html/aktuell/workshop_dss.htm
European Commission (2000): Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for Community action in the field of water policy, URL: http://europa.eu.int/comm/environment/water/water-framework/index_en.html
Gemeindetag Baden-Württemberg (2000): Zum zweiten Mal innerhalb eines Jahrzehnts: Verheerende Stürme in Baden-Württemberg.
ISO/TC-211 (2001): Text for DIS 19115 Geographic information – Metadata, International Organization for Standardization.
ISO/TC-211 & OGC (2002): Geographic information – Services, Draft ISO/DIS 19119. OpenGIS Service Architecture Vs. 4.3. Draft Version, International Organization for Standardization & OpenGIS Consortium.
Kuhn, W. et al. (2001): Referenzmodell 3.0 GDI Geodaten-Infrastruktur Nordrhein-Westfalen. Land NRW (Ed.), Media NRW, Band 26.
Matthies, M. et al. (2001): Georeferenced simulation and aquatic exposure assessment, in: Wilderer (Ed.): Water Science & Technology 43(7), pp 231-238.
Ministerium des Innern des Landes Brandenburg (Ed.) (2001): Brandenburg kommunal, 34, pp 3-9.
OGC (1999): The OpenGIS Abstract Specification. Topic 14: Semantics and Information Communities, Version 4, OpenGIS Project, URL: http://www.opengis.org
OGC (2002a): Styled Layer Descriptor Implementation Specification, Version 1.0.0, OpenGIS Project, URL: http://www.opengis.org
OGC (2002b): Web Feature Server Interface Implementation Specification, Version 1.0.0, OpenGIS Project, URL: http://www.opengis.org
OGC (2002c): Web Map Server Interface Implementation Specification, Version 1.1.1, OpenGIS Project, URL: http://www.opengis.org
Vogel, W. R. (2002): The EU Water Framework Directive – A Challenge for Information Management. In: Pillmann, W. & Tochtermann, K. (Ed.): Environmental Communication in the Information Society, EnviroInfo Vienna 2002, 16th Int. Conference: Informatics for Environmental Protection, part 2, pp 556-560. GI Gesellschaft für Informatik, Bonn.



MAR_GIS Marine Geo-Information-System for Visualisation and Typology of Marine Geodata Schlüter, Michael (1); Schröder, Wilfried (2); Vetter, Lutz (3)

(1) Alfred-Wegener-Institut für Polar und Meeresforschung, 27515 Bremerhaven, Am Handelshafen, PO-Box 120161, Germany, Tel.: (0471) 4831 1840, Fax: (0471) 4831 1425, E-Mail: mschlueter@awi-bremerhaven.de (2) Institut für Umweltwissenschaften (IUW) und Forschungszentrum für Geoinformatik und Fernerkundung, Hochschule Vechta, Postfach 1553, 49364 Vechta, Germany, Tel.: (04441) 15 - 559, Fax: - 464, E-Mail: wschroeder@iuw.uni-vechta.de (3) Fachbereiche Geoinformatik sowie Umweltplanung, Fachhochschule Neubrandenburg, Postfach 11 01 21, 17041 Neubrandenburg, Germany, Tel.: (0395) 5693 - 222, Fax: - 299, E-Mail: vetter@fh-nb.de

Extended Abstract
The general availability of basic information, here especially marine geodata, is one foundation for a pertinent and fruitful discussion within social communities. Subjects like environmental protection or the sustainable development and use of marine and terrestrial natural resources require access to a multitude of detailed information as well as scientific expertise and management tools. Besides environmental topics well established in ongoing discussions, like species protection or water quality issues, it is foreseeable that the economic use of the seabed will gain considerably more public attention in the very near future. Examples of economic use are: offshore wind power plants, offshore platforms, sand and gravel dredging, and waste dumping. For environmental management and to reconcile the intentions of different stakeholders, marine habitat mapping is applied in the US and Canada to identify different sediments and provinces at the seafloor. By such means the interests of fishery and nature protection might be combined. Currently, the multitude of information about marine geological, geochemical, and biological data and spatial patterns as well as economic


use in coastal areas and along continental margins is increasing tremendously. Consequently, several national and international projects aspire to build up database management systems combining meta-data and measured values for scientific needs and management purposes. One example of such an attempt is the information system PANGAEA of the Alfred-Wegener-Institut (AWI, Bremerhaven) and the Center for Marine Environmental Sciences (MARUM, Bremen), since 2001 part of the World Data Centre. Delivering project data to the general public and the scientific community is also supported by the funding guidelines of EU projects. Compared to the increasing amount of data and information from marine research (Fig. 1), only very few concepts and techniques are applied for efficient visualisation and optimal utilisation of present and upcoming data sets. There is, for example, a considerable need for a generalised analysis and synthesis of seafloor data, clustering the multitude of detailed information. This includes spatial budgets of geological and biogeochemical cycles and the characterisation of provinces at the seafloor based on the combination of several information layers. Especially the classification (typology) of the seafloor, an approach well established in terrestrial geoscience and documented in the form of geological maps, soil maps, and other thematic maps, is often a prerequisite for management needs and is the main objective of MAR_GIS. The typological approach, which combines several information layers by multivariate statistics and geostatistical means for the assignment of areas of the seafloor to types, allows comparisons of geographically different regions. Therefore, this approach is relevant for the assessment and modelling of temporal and spatial changes of the marine environment. This is one of the key issues of the BMBF research programme »Information Systems in Earth Management«. Besides scientific needs, the typological approach supports management decisions related to upcoming economic uses of the seafloor such as offshore wind power plants, offshore platforms and pipelines, seafloor cable deployments, sand and gravel dredging, and the declaration of protection zones. The superordinate objective of the MAR_GIS project is the development of a general concept for the analysis of spatial data, including a typological approach suitable to identify different provinces at the seafloor. This objective will be verified by regional case studies. For these purposes, coastal regions of the Baltic Sea and the North Sea and the continental margin of the Norwegian Sea were selected.

Figure 1: Examples of different data types derived on very different scales by marine research: (A) large-scale bathymetric data, (B) sediment bottom profiling and high-resolution bathymetry, (C) multibeam, side-scan sonar images and still photography obtained by AUV and ROV (see Fig. 3), and (D) point measurements such as sediment and pore water profiles and the spatial distribution of benthic communities.



Figure 2: Typology concept to derive benthic provinces in coastal areas, by combination of different information layers in GIS and multivariate statistics.

Objectives and Approach of MAR_GIS
The MAR_GIS project has three closely related tasks:
(1) coupling of the Geo-Information-System (GIS) to local database management systems (DBMS) and geodatabase servers,
(2) integration of point and polygon information, raster data, and meta-data about marine-geological, marine-biological, and bathymetric data for the coastal areas of the Baltic Sea and North Sea and the continental margin of the Norwegian Sea into the GIS,
(3) spatial subdivision of the seafloor into distinct provinces, based on measured data, GIS technology, multivariate statistics, and geostatistics.


By means of Geo-Information-Systems such as ArcGIS, the following objectives will be pursued:
(1) the integration of marine geoscience and life science data within the GIS environment,
(2) the compilation of specific regions, such as protected areas and regions of economic use, and their integration in the GIS,
(3) the combination of different information layers by GIS techniques (e.g., overlaying bathymetry with maps of sediment facies or carbon content) to derive an overview of spatial interrelations,
(4) the comparison of different multivariate statistical techniques (which are applied, e.g., in terrestrial geoscience for typological purposes) with respect to their applicability to marine geoscience, and
(5) the application of selected multivariate procedures and multicriteria decision analysis to decipher the typology and to assign provinces at the seafloor of our target areas (Fig. 2).
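The typology concept of Fig. 2 — combining information layers per grid cell and grouping cells into provinces — can be sketched minimally as follows. The layers (water depth, organic carbon content), the cell values and the simple two-centroid clustering are invented for illustration; the project would compare several multivariate techniques rather than rely on plain k-means:

```python
# Toy sketch of the typological approach: each seafloor grid cell carries
# several information layers; cells are grouped into provinces by a
# simple k-means clustering. All values are hypothetical.

CELLS = {                    # cell id -> (depth_m, organic_carbon_pct)
    "c1": (20.0, 2.1), "c2": (25.0, 1.9), "c3": (22.0, 2.4),     # shallow, carbon-rich
    "c4": (310.0, 0.4), "c5": (295.0, 0.5), "c6": (330.0, 0.3),  # deep, carbon-poor
}

def kmeans(points, centroids, iterations=10):
    """Plain k-means: assign each point to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iterations):
        groups = {i: [] for i in range(len(centroids))}
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            groups[i].append(p)
        centroids = [tuple(sum(v) / len(v) for v in zip(*groups[i])) if groups[i]
                     else centroids[i] for i in range(len(centroids))]
    return centroids

centroids = kmeans(list(CELLS.values()), centroids=[(0.0, 0.0), (400.0, 0.0)])

def province(cell_id):
    """Assign a cell to the province of its nearest centroid."""
    p = CELLS[cell_id]
    return min(range(len(centroids)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))

print({cid: province(cid) for cid in CELLS})
```

On this toy data the shallow, carbon-rich cells and the deep, carbon-poor cells end up in two distinct provinces, which is the kind of spatial subdivision Target 3 aims at.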


The generalised consideration of large data sets by a typological approach aims (1) to identify different provinces at the seafloor, (2) to support the scientific synthesis of various marine geoscience information, (3) to provide a kind of reference frame for multi-disciplinary research, (4) to enhance the consideration and modelling of temporal and spatial relationships, and (5) to provide a first step towards improved coastal zone management of the seafloor. The realisation of our objectives includes the application and modification of commercial GIS software as well as the coupling of multivariate statistics and advanced geostatistical procedures with GIS. The applied methods are closely linked to the different objectives and targets of the project:

Target 1: Coupling of Geodata-DBMS to MAR_GIS: The commercial and widely used GIS software ArcInfo and ArcView as well as the freeware program ArcExplorer (ESRI) will be applied as the basic framework of MAR_GIS. Interfaces to geodatabase servers and local DBMS like MS Access or dBase will be provided by object-oriented (OOP) source code written in C++ and Delphi. One example of the integration of database management systems and GIS functionality is a front-end program, written by one of the MAR_GIS partners as part of the EU project Sub-GATE, which allows the user to select profile data by SQL queries and GIS functionality, to visualise sediment and pore water profile data, and to export data for numerical modelling. The development of specific import interfaces connecting DBMS and GIS provides flexible data access for end users of geodata. Complex routines and procedures associated with the data import into the GIS will be encapsulated and simplified by software modules.

Target 2: Integration of measured data and meta-data: Especially for surface sediments of the Baltic Sea, the North Sea and the continental margin of the Norwegian Sea, published and unpublished marine geodata will be integrated in the GIS. For example, the following parameters will be considered: grain size distribution, clay mineral composition, accumulation rates, organic carbon content, opal and CaCO3 content. For the Norwegian Sea, a considerable part of this data compilation was gathered within the EU project ADEPD. Additional data mining will be done by Geo-DBMS queries (e.g., PANGAEA, EU projects, CZM departments) and by the incorporation of georeferenced thematic maps. These will be considered as classified raster data sets within the GIS. Disadvantages of commercial GIS software with respect to the specific visualisation of marine data, like concentration-depth diagrams or triangle diagrams, are compensated for by providing flexible program routines. Object-oriented programming and the application of COM (Component Object Model) provide high modularity and portability of the source code. An integral part of MAR_GIS is a meta-database management system, which was developed within a previous research programme. This system provides a direct relation between meta-data and real-world items (points, geographic objects, facts) and gives MAR_GIS enhanced information capabilities. This means that, for specific research tasks, MAR_GIS provides information about all available data (including maps, charts, photos, and data tables), the applied methods of data acquisition, and the date of survey or measurement. The system allows distributed acquisition of meta-data and, being Windows- and network-based, is available to users of MAR_GIS via the internet.

Target 3: Classification of geodata by multivariate statistics and geostatistical techniques: In marine geoscience, data acquisition includes measurements of metric and non-metric scale types. An example of non-metric data sets is raster data classified arbitrarily (with respect to physical-chemical SI units), such as sediment types and facies.
Metric data sets are items such as organic carbon content or grain size, which can be derived by measurements at distinct sites



Figure 3: Innovative marine technology such as the Remotely Operated Vehicles (ROV) Victor 6000 (IFREMER, France) and Cherokee (MARUM, University of Bremen) and Autonomous Underwater Vehicles (AUV) such as Odyssey III (AWI in 2003) carries a multitude of sensors and provides detailed information on the seafloor. These devices are used for marine research as well as by the offshore industry. Coupled data management systems and GIS allow efficient access to and analysis of these geodata.

(point data sets). By geostatistical methods such as variogram analysis and kriging, contour plots can be derived. The combination of different information layers requires statistical procedures capable of handling data of metric and non-metric scale, of deriving a spatial classification scheme, and of identifying provinces at the seafloor. Therefore, for the analysis and aggregation of data within MAR_GIS, the following techniques (dependent on scale type) will be applied: association and correlation analysis, variogram analysis and kriging, cluster analysis, classification and regression trees (CART), and Chi-square Automatic Interaction Detection (CHAID). From our perspective, the time- and cost-efficient management of geodata and the provision of geoservices is of specific importance considering the upcoming availability of innovative marine technology such as Remotely Operated Vehicles (ROV) and Autonomous Underwater Vehicles (AUV). These devices (Fig. 3) are equipped with a multitude of sensors, such as microbathymetry, high-performance video, and chemical sensors, providing access to very detailed information about the seafloor in coastal areas and the open ocean. Besides marine research, including habitat mapping, such devices are applied by the offshore industry for surveys, underwater constructions, pipeline inspection,


search and rescue operations, and repair. As part of the specific research programme »Information Systems in Earth Management: From Geodata to Geoservices« of BMBF and DFG, the identified targets and the typological approach of MAR_GIS are considered a prerequisite for management issues related to the coastal seafloor. It provides a framework for improved application of large environmental data sets, allows enhanced visualisation of multiple information layers, and supports modelling of temporal and spatial interrelations of coastal and ocean regions.
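As a minimal sketch of the variogram-and-kriging workflow named in Target 3: the coordinates, measured values, and variogram parameters below are invented for illustration, and the spherical model stands in for whatever model MAR_GIS would actually fit.

```python
import numpy as np

# Hypothetical point measurements (x, y) with, say, organic carbon content.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([1.2, 1.8, 1.1, 2.0, 1.5])

def spherical(h, nugget=0.0, sill=1.0, rng=2.0):
    """Spherical variogram model; parameters here are illustrative only."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, nugget + sill)

def ordinary_kriging(pts, vals, target, model=spherical):
    """Estimate the value at `target` by ordinary kriging of the point data."""
    n = len(pts)
    # Kriging system: pairwise semivariances plus a Lagrange-multiplier
    # row/column that enforces weights summing to one.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = model(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = model(np.linalg.norm(pts - target, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ vals)  # estimated value at the target location
```

Evaluating `ordinary_kriging` on a grid of targets yields exactly the contour plots mentioned in the text; in practice the variogram model would first be fitted to an empirical variogram of the data.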




Implementation of the European Water Framework Directive: ISSNEW – Developing an Information and Simulation System to Evaluate Non-Point Nutrient Entry into Water Bodies
Dannowski, Ralf (1); Michels, Ingo (2); Steidl, Jörg (1); Wieland, Ralf (1); Gründler, Rainer (2); Kersebaum, Kurt-Christian (1); Waldow, Harald von (1); Hecker, J. Martin (1); Arndt, Olaf (2)

(1) Zentrum für Agrarlandschafts- und Landnutzungsforschung (ZALF) e. V., Müncheberg, Institut für Landschaftswasserhaushalt, E-Mail: rdannowski@zalf.de (2) WASY Gesellschaft für wasserwirtschaftliche Planung und Systemforschung mbH, Berlin

Introduction
On 22 December 2000 the »Directive of the European Parliament and of the Council 2000/60/EC Establishing a Framework for Community Action in the Field of Water Policy« – the European Water Framework Directive (WFD) – came into effect as the basic framework for European procedure in the area of water management. It is now being integrated into national law. Implementing the legal and obligatory guidelines of the WFD has led to a number of new requirements to be met by water management administrations. Not only does additional information about the state of water management resources need to be collected and systematically prepared; extending on this data, all relevant water bodies in Germany also need to be documented and assessed. If a water body does not meet the required ecological and physical/chemical parameters, then appropriate steps are to be set up and undertaken to meet the WFD requirements. All of these activities are to be carried out within a governmentally controlled plan undergoing a multi-step process of open participation by the public. With regard to time, the EC has stipulated that all such measures must be set up by 2009, with the fulfilment of the WFD requirements to take


place no later than 2015. If one considers, however, the time scales of the processes in the water cycle, it appears questionable whether it is possible to meet the requirements in every case within six years, and whether the currently available technology allows the most sound, optimal measures to be taken in meeting the WFD requirements. Apart from the earliest possible undertaking of appropriate initial measures, this leads to two consequences: (1) measures should be introduced based on efficiency and differentiated according to specific place and time, so that within the given time frame at least initial effects (e. g. a reverse trend in water pollution levels) can be demonstrated, and (2) in determining the appropriate measures, tools must be available that explicitly support the location- and time-specific allocation of the courses of action. The input of plant nutrients into water bodies over the last decades has contributed to a long-term worsening of their chemical and ecological state: nitrogen (in the form of water-soluble nitrates with extensive effects on groundwater composition) and phosphorus (usually bound to soil particles entering lotic waters through water erosion) have led to eutrophication, thereby damaging the habitat function of much of the surface water up to the coastal areas of the Baltic Sea and the North


Figure 1: Illustration of possible paths of non-point source nitrogen flow from agriculture, modified according to Dannowski et al. (2002).

Sea. The goal of the countries bordering the Baltic Sea (HELCOM) to reduce nutrient input into the Baltic Sea by 50 % between 1985 and 1995 was reached in Germany only for phosphorus. For nitrogen – which, despite a gradual reduction in the total load, stems more and more from non-point agricultural sources – efforts came up short. Halving the amount of nitrogen within a ten-year period is illusory because of decades of delay in the subsurface transport alone. Long-term changes in excess nitrogen, nitrate transport, and nitrate decomposition in the groundwater combine into a complicated and location-specific »space-time behaviour« (Fig. 1). Predicting the development of the nitrogen load can thus only be based on time-consuming and expensive so-called »emission assessments«. These input analyses, which track the subsurface transport processes of nitrogen from the root zone to surface waters, require high-resolution, complex model studies, if possible supported by GIS. The necessary location- and time-specific evaluation of measures to reduce non-point nitrogen input into water bodies requires scenario analyses based on process-oriented, distributed hydrogeological models connected with capacious geo-databases.

Furthermore, the WFD is the cause and basis of a rethinking of environmental procedure. A new aspect consists of projects organized along river basins, something which the WFD unconditionally stipulates. As a rule, multiple organizations and countries will need to work together. From the perspective of IT, this entails such topics as multi-user access, client/server architecture, and data storage, among many others. Yet there are also large hurdles concerning non-coordinated data collection, non-standard data structures, unknown access paths to the data, or the problematic further application of the data, to name just a few. If one examines the current practice of using GIS in water management, one will find primarily individual workstations and file-based work, including file sharing over network data servers, heterogeneous and inconsistent data records, missing links between individual topics, and much more that prevents efficient and high-quality work. Some GIS and databases are linked, where a specific application or a specific person is usually responsible for maintaining the data. This list could go on and on. All of these practices eventually lead to problems with the upkeep and use of spatial reference information, thus ultimately preventing the implementation of this technology as a strategic instrument for obtaining information.



Objectives
ISSNEW aims at the preparation of tools that meet the essential parameters of the WFD, in the form of a market-ready product family supplying efficient solutions for many of the current WFD obligations. Therefore, the following components must be included:
1. A GIS- and database-based information system for the gathering, structuring and visualization of geodata and simulation results on non-point nutrient input into water bodies (ground and surface waters).
2. A simulation system for evaluating the effects on water quality of measures against non-point nutrient flow from agricultural lands and input into water bodies.
3. Bi-directional intelligent interfaces between the information and simulation systems.
Consequently, through the components themselves and especially through their effective collaboration, the following goals are to be reached:
· Extending on a standardized information structure that takes into account the importance of simulation models, a platform-independent, completely component-based software system for data storage, analysis, and presentation is to be made available.
· The already existing large geo-databases, as well as the ones still to be collected as planned for river catchment areas, are to be supplied performantly by the implemented information system, improved, and optimally processed further.
· Against the background of the WFD implementation, improved knowledge management is to be made possible through the integration of space-time-based, scenario-capable simulation and modelling software for non-point nutrient input into ground and surface waters.


· For the further development of geoservices, internet-oriented solutions for location-independent general access to the planned databases are to be made available.

Method and Conception

Information system
The basis for the ISSNEW information system is to be the ArcGIS 8 product family in connection with the new, integrated geodatabase format from ESRI. Graphics, attributes, and meta-information will be consistently administered through the basis GIS without additional applications or even a person to maintain them. If the information is stored in a Database Management System (DBMS), then all the possibilities it offers are per se at one's disposal: client/server architecture, backup and recovery functionality, access protection at the user (login) level, and allocation of access rights in the server, to name only a few. With the use of a DBMS, a further component necessarily moves into the foreground: data modelling (DM). Intensive and well-thought-out DM is crucial for the performance of a system according to various criteria (content, consistency, performance). In the classic approach (ER modelling), information groups are displayed with their attributes (entities) and their relationships to each other. This is the first essential requirement for a standardization of the data to be collected for general use or for simple exchangeability. For the development of applications and the inclusion of process or behavioural models (e. g. for nitrogen transport), moreover, it is effective to work object-oriented already in the data modelling phase. The Unified Modeling Language (UML) has established itself as the description language. Along with the use of relationships (associations), there also arises the possibility of using so-called generalisations, compositions, and aggregations. Class diagrams, which represent the identified business classes necessary for the system with their entities and methods


as well as their relationships to each other, form the basis for using these features. They are created in the framework of ISSNEW with a CASE tool (Microsoft Visio 2000). In this way all constructs of object-oriented programming are available, leading to a model-based system that can also supply guidelines for the development of applications and the integration of models. The consequence is a strong parallelism of the processes. While on the one hand the person modelling the data can still fine-tune the model using specialization relationships, the basis model remains unmodified and stays ready for the application developer. There is no longer any waiting for a finished data model. Through the use of ESRI products, along with the standardized, object-oriented, software-independent documentation of the future system, functioning components can be generated automatically from the UML documentation. The geodatabase (GeoDB) guarantees the persistent storage of geo-objects (topology and entities) in a relational database, and also the integration of relationships, rules, and so-called value lists. The advantage of this approach is that the complete system can be used directly after the data modelling through the ArcInfo 8 or ArcView 8 component ArcMap, even when specific user interfaces are not yet available. One can thus expect a fast »return on investment«, something which is of essential importance in view of the extremely narrow time frame of the WFD. Along with the data modelling and the implementation of data models for WFD-relevant information, specific user interfaces will be realised in the course of ISSNEW for administering, evaluating, analysing, and aggregating the integrated data within the information system. It is also planned to develop methods that allow entry masks to be generated dynamically at runtime on the basis of the data models and to make these available to the user over

the internet for data processing. This paves the way for a three-step user system. The first step uses exclusively the implemented data model through ESRI products without specific interface programming (work in tables with alphanumeric information). In the second step, mask generators are used to make generic interfaces available at runtime from the existing meta-information in the geodatabase (relationships, value lists, rules). In the third step, specific user interfaces are programmed via component technology.

Simulation system
On the basis of the previously illustrated process characteristics of non-point nutrient input, a separation of the transport paths into independent or almost free-standing compartments is possible. As a result, every individual process can be described with separate yet linked simulation models:
· SOCRATES for the process-oriented modelling of plant growth, of the surface water balance, and of nutrient conversion processes in the plant root zone (Wieland et al. 2002, Mirschel et al. 2002),
· FEFLOW for the deterministic-numeric simulation of flow and transport processes in the groundwater (Diersch 2002),
· MODEST (Modelling of non-point nitrogen input via subsurface pathways) for the GIS-based distributed quantification of non-point subsurface nitrogen input from agricultural areas into the lotic waters of unconsolidated rock regions (Dannowski et al. 2002).
The modelling of nitrate transport in the groundwater (long-term components) can optionally be carried out with the FEFLOW procedures or the MODEST approach. Denitrification can be modelled through Michaelis-Menten kinetics (first-order at low concentrations). Additional reaction relationships can be integrated into the groundwater simulation systems via chemical sub-models. As a supplement to both groundwater simulators, it is planned to



modularly add further model components for non-point nitrogen input into lotic waters that have been developed and tested in previous work of the ISSNEW team. For instance, the nitrate influx from the unsaturated zone requires a further source specification with the accompanying field data. In order to determine the load and time course of nitrate entering the groundwater, it is necessary to integrate a flux calculation for groundwater recharge and for nitrogen, which can be realised through the SOCRATES model or an additional vertical component of MODEST still to be developed. In this way the model components represent the current state of research on formation and input processes in agricultural areas. This state of knowledge is to be incorporated into the project for the implementation of the WFD in the form of software products. The interrelations between the ZALF model components (groundwater recharge, nitrogen input into the groundwater) and the WASY software FEFLOW, the ways in which they can complement the analysis and evaluation of non-point nutrient input for the WFD, as well as the references to the information system in general – all of these are to be worked out and used in ISSNEW and will lead to a noticeable enhancement and increased applicability of the project results. For the development, verification, and validation of the models to be integrated, there is a pilot area in northeast Germany (catchment area of the Uecker river) that likewise serves as a basis for ZALF research. An extensive data basis also exists for this pilot area, which is continually supplemented with further results from experimental systems analyses and from the monitoring of ground and surface waters. From there, data can be called up directly by the simulation system.

Figure 2: Illustration of the interactive graphical mapping of input data and the corresponding model parameters using a dialog of the simulation system FEFLOW.
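The Michaelis-Menten denitrification kinetics mentioned for the groundwater simulators can be sketched as a simple forward-Euler integration; the rate constants below are illustrative placeholders, not calibrated ISSNEW parameters.

```python
import numpy as np

# Michaelis-Menten denitrification: dN/dt = -v_max * N / (K_m + N).
# For N << K_m this reduces to first-order decay, as noted in the text.
# v_max and K_m are illustrative placeholders, not calibrated values.
def denitrify(n0, v_max=0.5, k_m=10.0, dt=0.1, t_end=50.0):
    """Integrate the nitrate concentration with a forward Euler scheme."""
    steps = int(t_end / dt)
    n = np.empty(steps + 1)
    n[0] = n0
    for i in range(steps):
        rate = v_max * n[i] / (k_m + n[i])       # current removal rate
        n[i + 1] = max(n[i] - rate * dt, 0.0)    # never go below zero
    return n

trace = denitrify(40.0)  # start at a hypothetical 40 mg/l nitrate
```

In a real coupling, such a reaction term would be evaluated inside the transport simulator's time stepping rather than as a standalone loop.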


Intelligent Interfaces
For connecting the information system with the simulation system, there are presently two alternative standard approaches. In the first approach, the data are exported via routines of the information system in such a way that they can be read directly into the simulation system with its own model input routines. In the second approach, the simulation system has an interface that is able to read the data stream in the respective format of the GIS. These approaches are in principle always tied to multiple explicit activities, such as export/import of the data and work with at least two different programs. This procedure does not, however, correspond to the requirements made of an integrated system. Modern software architecture offers possibilities to adjust the interface processes to the demands of such systems. For ISSNEW, therefore, an intelligent data exchange between models and/or external applications is planned. The basis for this exchange is provided by the GIS, which makes available standard component-based access modules, and by the information system, which makes available meta-information about its structure. Thus it is possible to query the structure of the system dynamically with the help of the access modules of the information system, which are integrated as components in the simulation system. The simulation system must be able to display the structure of the information system within its program interface as clearly and efficiently as possible on the one hand, and likewise efficiently display its own model parameter structure on the other. The task of the user consists in defining a so-called mapping. This means assigning individual information levels of the information system to their corresponding model levels. This takes place with the help of interactive graphics.
If, for example, the nitrate input concentration into the ground water, as output of the SOCRATES or MODEST modules, exists as a polygon distribution with attributes in the information system, then the user must assign

this data to the model parameter »initial concentration«. The interface employs the regionalisation methods available in the simulation system. Parameterisation (selection of GIS data and execution of methods) will only be started when the parameter mapping has been completely defined by the user. The goal, finally, is to arrive at a completely automatic generation and parameterisation of a model via these mapping methods (Fig. 2). The generation of the model takes place in the same manner: all relevant input information likewise needs to be mapped in this case. This includes, among other things, the state of the lotic and lentic waters, parameter distribution borders, (subsurface) catchment borders, relevant water abstractions and discharges, etc. This information is used to automatically generate an optimal FEM mesh that then undergoes parameterisation. Before beginning the concrete generation and parameterisation of a new model, the outer boundary of the modelling domain must be set, which then serves as a spatial selection criterion for the transfer of data from the information system. The spatial selection is in turn realised with the help of the components of the information system integrated into the simulation system. All in all, this method contributes to a reduction of the time necessary to construct and implement scenario models, and with that also decision support systems.
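The mapping step described above can be sketched as follows, with hypothetical layer and parameter names; the point illustrated is only that parameterisation refuses to start until every model parameter has been assigned an information-system layer.

```python
# Hypothetical sketch of the parameter mapping between information system and
# simulation system. Parameter and layer names are invented for illustration
# and are not ISSNEW's actual API.
MODEL_PARAMETERS = {"initial_concentration", "recharge", "conductivity"}

def check_mapping(mapping):
    """Return the model parameters still unmapped; empty set means complete."""
    return MODEL_PARAMETERS - set(mapping)

def parameterise(mapping, layers):
    """Apply the mapping only once it is complete, as the concept requires."""
    missing = check_mapping(mapping)
    if missing:
        raise ValueError(f"mapping incomplete: {sorted(missing)}")
    # Trivial stand-in for the regionalisation step: copy layer values.
    return {param: layers[layer] for param, layer in mapping.items()}

layers = {"no3_output_socrates": 45.0, "gw_recharge_mm": 120.0, "kf_field": 1e-4}
mapping = {"initial_concentration": "no3_output_socrates",
           "recharge": "gw_recharge_mm",
           "conductivity": "kf_field"}
params = parameterise(mapping, layers)
```

In the real system the "copy" step would instead invoke the simulator's regionalisation methods on the selected GIS geometries.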

Conclusion
From the conception presented for the ISSNEW project it is concluded that a new standard of GIS-based tools for scenario computations related to water and nutrients at the landscape scale can be provided. The preservation of water bodies resulting from the WFD implementation is part of the



ZALF context as a partial goal of long-term land development. Model components that form the basis of a decision support system for long-term land use management will be used and further developed in a research and model association of ZALF. This programme is complemented by experimental systems analysis and field monitoring in the Uecker pilot area. WASY GmbH contributes to ISSNEW in particular its world-leading 3D simulation system for groundwater flow and transport processes, FEFLOW, and an extensive knowledge of ESRI products (ArcSDE, ArcInfo, ArcView, ArcObjects, ArcIMS). Co-operation between a research centre and an SME, both of them qualified in hydrological modelling, water management, as well as GIS and geodatabase management, appears a very powerful way of achieving technological progress by merging their experiences for the requirements of the WFD implementation.

References
Dannowski, R., J. Steidl, W. Mioduszewski & I. Kajewski (2002): Modelling Subsurface Nonpoint Source Nitrogen Emissions into the Odra River. Proc. Int. Conf. »Agricultural Effects on Ground and Surface Waters«, 1.10.-4.10.2000, Wageningen, The Netherlands. In: IAHS Publ. 273 »Agricultural Effects on Ground and Surface Waters«, Wallingford, 219-225.
Wieland, R., W. Mirschel, H. Jochheim, K.C. Kersebaum, M. Wegehenkel & K.-O. Wenkel (2002): Objektorientierte Modellentwicklung am Beispiel des Modellsystems SOCRATES. In: Gnauk, A. (ed.): Theorie und Modellierung von Ökosystemen – Workshop Kölpinsee 2000. Shaker Verlag, Aachen.
Mirschel, W., R. Wieland, H. Jochheim, K.C. Kersebaum, M. Wegehenkel & K.-O. Wenkel (2002): Einheitliches Pflanzenwachstumsmodell für Ackerkulturen im Modellsystem SOCRATES. In: Gnauk, A. (ed.): Theorie und Modellierung von Ökosystemen – Workshop Kölpinsee 2000. Shaker Verlag, Aachen.
Diersch, H.J.G. (2002): FEFLOW Finite Element Subsurface Flow and Transport Simulation System – User's Manual/Reference Manual/White Papers. Release 5.0. WASY Ltd, Berlin.




GEOSERVICE GROUNDWATER VULNERABILITY
Development of an Information Infrastructure for the Rule-based Derivation of Geoinformation from Distributed, Heterogeneous Geodata Inventories on Different Scales, with an Example Regarding Groundwater Vulnerability Assessment
Azzam, Rafig (1); Bauer, Christian (1); Bogena, Heye (2); Kappler, Wolfgang (3); Kiehle, Christian (1)*; Kunkel, Ralf (2); Leppig, Björn (1); Meiners, Hans-Georg (3); Müller, Frank (3); Wendland, Frank (2); Wimmer, Guido (1)

(1) RWTH Aachen, Lehrstuhl für Ingenieurgeologie und Hydrogeologie (LIH), Lochnerstr. 4-20, 52064 Aachen, Germany. * Contact: kiehle@lih.rwth-aachen.de, www.geodienst-schutzfunktion.de (2) Forschungszentrum Jülich GmbH (FZJ), Systemforschung und Technologische Entwicklung (STE), 52425 Jülich, Germany (3) ahu AG, Wasser - Boden - Geomatik, Kirberichshof 6, 52066 Aachen, Germany

1. Introduction
The increasing demand for geodata, both on the part of planning authorities and from the general public, as well as the federal soil and groundwater protection legislation, requires an integrated procedure to an extent that has not yet been achieved. This is reflected in the Water Framework Directive adopted by the European Parliament, which proclaims cross-media, sustainable water protection and commits the participating countries to manage resources by river basin. Here, various disciplines work together on the same planning object, thus overcoming administrative boundaries. Difficulties arise in the field of spatial data gathered on different scales by various institutions and therefore frequently only available with different degrees of accuracy or in different data structures. The present project intends to eliminate some of these obstacles to utilizing geodata inventories, for example by systematically interlinking internet technology and geoinformatics. The project benefits from the different perspectives of the institutions involved from the Helmholtz


Association of National Research Centres, universities and the private sector.

2. Objectives
The overall goal of the project is to develop an information infrastructure in order to process distributed, heterogeneous geodata inventories into geoinformation in a rule-based manner and independently of scale. A geoservice will be made available that goes beyond the actual provision of geodata by developing approaches for interlinking distributed geodata irrespective of scale, which can then be transferred to various issues. The geoservice to be developed should provide all future users with an integrated system that informs them about all available data and enables criteria to be selected on the basis of which suitable data can be defined, analysed and linked. The newly developed infrastructure will be demonstrated by an example regarding groundwater vulnerability. Rules will be compiled and implemented permitting consistent spatial information to be derived and displayed from geodata recorded on different scales and in different formats. Three scale levels will be


considered: the microscale (~1:5,000), the mesoscale (~1:25,000) and the macroscale (<1:50,000). The results on the individual scale levels are conditioned by different input data, different geometrical accuracies and different algorithms. In this work, priority is given to the realization of a flexible geodata network constructed with the involvement of the data providers and potential users. In order to identify their wide range of requirements, the potential end users are to be intensively involved in the project by means of several workshops. These include, for example, the Geological Survey of North Rhine-Westphalia, the North Rhine-Westphalian State Environmental Agency, the State Surveyor's Office of North Rhine-Westphalia, the government environmental agencies, as well as local authorities and private institutions (e.g. water utilities).

3. Groundwater vulnerability assessment
The »concept for determining groundwater vulnerability« (after Hölting et al. 1995), applied in two catchment areas of North Rhine-Westphalia, serves as a geoscientific case study. Heterogeneous geological conditions (areas of solid and unconsolidated rock) occur in the catchment areas of the Rur and Erft, as well as regions characterized by intensive anthropogenic activities (lignite and hard coal mining, agriculture and forestry, urban areas). Furthermore, the study area crosses federal and national boundaries. Three scale levels are considered (see Fig. 1):
- Saubach catchment area, 16 km2 (microscale, scale considered 1:5,000),
- Inde catchment area, 353 km2 (mesoscale, scale considered 1:25,000),
- Rur and Erft catchment area, 4125 km2 (macroscale, scale considered <1:50,000).
The input data are incorporated in a staggered resolution corresponding to the individual scale ranges.

The Hölting method considers certain soil-physical, hydrological and geological parameters with respect to their influence on the residence time of the leachate and the degradation potential in the soil. The assessment is performed as a standardized, parametric assessment system. The subareas of »soil«, »rate«, »rock type of the individual strata« and their thickness, as well as »suspended groundwater layers« and »artesian pressure conditions« are considered separately and given a dimensionless point value. The sub-point values are combined by an assessment algorithm and then reclassified into five intervals (vulnerability function classes). The determination of groundwater vulnerability represents a suitable framework for developing a prototype system architecture in two respects:
1. Groundwater vulnerability is of current interest for different disciplines and at different levels of consideration. Groundwater vulnerability maps are increasingly being used as a basis for environmental compatibility studies, for planning and licensing procedures, and for the estimation of diffuse pollutant inputs into the groundwater.

2. Basic data from different scale levels also enable the input data, parameters and algorithm to be varied in a meaningful manner. It is to be expected that the assessment of groundwater vulnerability on different scale levels will supply different results for one and the same area. These differences are determined by the parameter variation, which in spatial terms essentially depends on two factors. On the one hand, the content-related differentiation of the input data concerning soil properties and land use diverges on the different scale levels (accuracy); on the other hand, the data differ with respect to spatial precision. Both factors jointly determine the significance and validity of the information derived.
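The Hölting-style point scheme described above can be sketched as follows; the point values and class limits are illustrative only and do not reproduce the published Hölting et al. (1995) tables.

```python
# Hypothetical sketch of a Hölting-style parametric assessment: each subarea
# contributes a dimensionless point value, and the combined total is
# reclassified into five vulnerability function classes. All numbers below
# are invented for illustration, not the published assessment tables.
def vulnerability_class(points, limits=(500, 1000, 2000, 4000)):
    """Map a total point value to one of five intervals (classes 1-5)."""
    for cls, limit in enumerate(limits, start=1):
        if points < limit:
            return cls
    return 5

subpoints = {"soil": 250, "rate": 400, "rock_strata": 900,
             "suspended_layers": 500, "artesian_pressure": 1100}
total = sum(subpoints.values())   # 3150 under these illustrative values
cls = vulnerability_class(total)
```

Running the same classification with input layers of differing accuracy and spatial precision is exactly how the scale effects discussed above would show up as differing class maps for one area.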



Figure 1: Location of the study areas.

In order to investigate the influence of the parameters on groundwater vulnerability at different scale levels, one area parameter regarded as particularly important is studied on each scale level. ahu AG will thus undertake a detailed study of anthropogenic influences, e.g. due to urbanization and abandoned industrial sites. LIH will make a more detailed study of the influence of the geological structure, whereas in a subproject STE at FZJ will place special emphasis on the regionalization of climate data and on determining the


leachate rate required for calculating the groundwater vulnerability according to Hölting et al. (1995). Ultimately, a method in the sense of linking rules will be determined by processing the individual scale levels and studying selected scale effects. Requirements for the individual system components will be derived from these rules and integrated into the information infrastructure.


4. Development of an Information Infrastructure
The information infrastructure to be developed will be implemented according to the three-tier model covering the available data, the analysis and the presentation of geoinformation. The three individual tiers are the presentation tier, the so-called »business logic tier« and the data tier (see Fig. 2). Establishing this information infrastructure starts by defining the tools and standards to be used. Considerable space is occupied here by the specification of interfaces for communication between the system components. The overall system will access distributed data and consist of various services. On the user side, access to the assessment maps generated on the fly from current basic data will be provided via a web interface.

4.1. Data tier: use of distributed heterogeneous geodata inventories
In order to determine the protective function, distributed data are used which have their own internal structure and specific content (subject-related and/or spatial). The system will be able to process data from various providers.

In order to simulate distributed data storage, the geodata are stored at the different sites of the three project partners. The interfaces to be applied must be defined in order to use distributed data. This refers both to the management of the data inventories and to their description (metadata). Interoperability is ensured by primarily using standards from the OpenGIS Consortium, the Object Management Group and the World Wide Web Consortium for access to the different data pools. Data are made available to the geo-application server by introducing them into the system via a standardized metadata format, which is evaluated in the business logic tier by a catalogue service (see Section 4.2).

4.2. Business logic tier: processing geodata into geoinformation
The functional performance of the business logic tier involves linking the required base data from distributed data providers, depending on the level of resolution, to obtain the groundwater vulnerability. It thus forms the core of the information infrastructure; it is activated via the user interface and accesses the data tier. Since previous forms of information

Figure 2: System diagram.



supply, which were generally restricted exclusively to the provision of base data, are not sufficient for interlinking data on different scales, an intelligent information infrastructure is created here that generates useful information from base data. In this way, the user does not obtain yet another data set but rather specific information, which is made more concrete by text definitions, diagrams and tables. In all processes within the business logic tier, priority is given to the aspects of interoperability, vendor independence and future prospects. In the final phase of implementation, the business logic tier will provide three web services:
1. Catalogue services, which process and store registration information for the other services and the available data. Apart from the features of the services, information on the available data is also stored here; both services and data are managed.
2. Control services, which ensure the scalability of the information infrastructure, session management, web site generation and data mining.

3. Geoservices, which process distributed data into homogeneous information; they are the core of the information infrastructure. Geoservices are divided into:
a. Base services: combining heterogeneous data formats into uniform, open and vendor-independent data formats. This includes, for example, the derivation of a parameter from a certain data inventory (e.g. nFK from the BK 50). It also includes services for geodata processing (neighbourhood analyses, union, etc.).
b. Integrative services: compilation of base services for the uniform derivation of information from geodata (e.g. the derivation of the groundwater vulnerability from various input parameters) as well as generalization and validation.
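The division of labour between base and integrative services might be sketched as follows. All parameter names, lookup values and the simple additive scoring are invented placeholders for illustration; they are not the actual derivation method of Hölting et al. (1995).

```python
# Hypothetical sketch of service composition: base services derive single
# parameters, an integrative service combines them into a groundwater
# vulnerability rating. Numbers and thresholds are purely illustrative.

def base_service_nfk(soil_unit):
    """Base service: usable field capacity (nFK) from a BK 50 soil unit
    (lookup values invented for the example)."""
    table = {"loam": 180, "sand": 90}
    return table[soil_unit]

def base_service_depth_to_water(site):
    """Base service: depth to the groundwater table in metres (stub)."""
    return site["depth_m"]

def integrative_vulnerability(site):
    """Integrative service: combine base-service outputs into a rating
    (an invented additive score, not the real evaluation scheme)."""
    score = base_service_nfk(site["soil"]) * 0.5 \
          + base_service_depth_to_water(site) * 20
    return "low vulnerability" if score > 150 else "high vulnerability"

print(integrative_vulnerability({"soil": "loam", "depth_m": 5}))
```

The point of the pattern is that the integrative service never touches raw data formats itself; it only composes the uniform outputs of base services.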

4.3. Presentation-tier: interface between user and geoservice
The presentation-tier enables the user to obtain information on the basis of a specific problem definition; the widest possible range of forms of representation should be available here in a freely selectable mode (e.g. diagrams, PDF documents, geoinformation etc.). Communication with the geoservice »groundwater vulnerability« will be performed via a graphical user interface which communicates with the business logic-tier via the internet. In this way, the presentation-tier is platform-independent.
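A presentation-tier client might compose its request to the business logic-tier along these lines. The endpoint URL and the parameter names are hypothetical placeholders, not the project's actual interface.

```python
# Minimal sketch of how a GUI client could build a query URL for the
# (hypothetical) groundwater-vulnerability geoservice endpoint.

from urllib.parse import urlencode

def build_request(bbox, output_format="pdf"):
    """Compose the query URL sent from the presentation-tier to the
    business logic-tier. bbox is (minx, miny, maxx, maxy)."""
    params = {
        "service": "groundwater-vulnerability",
        "bbox": ",".join(str(v) for v in bbox),
        "format": output_format,   # e.g. diagram, pdf, gml
    }
    return "https://geoserver.example/query?" + urlencode(params)

print(build_request((32500000, 5650000, 32510000, 5660000)))
```

Because the client only speaks HTTP and receives standard formats back, any platform with a browser-like runtime can act as the presentation-tier.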

5. Pilot Operation
During the development of model applications, the information infrastructure will be coordinated and evaluated with users from local government, universities and the private sector. Pilot operation after the development phase will serve to validate the information provided for defined applications, both with respect to its geoscientific content and technically. Ultimately, this will enable an assessment of the system also with respect to its transferability and practicability.

6. References Hölting, B., Haertlé, T., Hohberger, K.-H., Nachtigall, K.-H., Villinger, E., Weinzierl, W., Wrobel, J.P. (1995): Konzept zur Ermittlung der Schutzfunktion der Grundwasserüberdeckung. Geologisches Jahrbuch Series C, No. 63. Schweizerbartsche Verlagsbuchhandlung, Stuttgart.




Advancement of Geoservices Breunig, Martin (1); Malaka, Rainer (2); Reinhardt, Wolfgang (3); Wiesel, Joachim (4)

(1) Research Centre for Geoinformatics and Remote Sensing, University of Vechta, P.O. Box 1553, 49364 Vechta, Germany, E-mail: Mbreunig@fzg.uni-vechta.de (2) European Media Laboratory GmbH, Villa Bosch, Schloss-Wolfsbrunnenweg 33, 69118 Heidelberg, Germany, E-mail: Rainer.Malaka@eml.villa-bosch.de (3) Universität der Bundeswehr München, GIS Study Group, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany, E-mail: Wolfgang.Reinhardt@UniBW-Muenchen.de (4) Institute of Photogrammetry and Remote Sensing (IPF), University of Karlsruhe, Englerstr. 7, 76128 Karlsruhe, Germany, E-Mail: Wiesel@ipf.uni-karlsruhe.de

Abstract The combination of the internet and mobile communication technologies offers new perspectives for the geosciences by providing location-independent access to distributed geodatabases. New geoservices will contribute to a faster availability and an increasing data quality of environmental information. These services have to be designed for use by static and mobile users through internet-based networks. Besides the important aspect of data management for geoservices, data acquisition and visualization of multidimensional geodata are predominant factors. They are still topics of further research. In particular, the low performance and heterogeneity of not yet standardized mobile devices represent major problems. The project partners from Vechta, Karlsruhe, Heidelberg and Munich contribute complementary expertise in the areas of acquisition, management, usage and visualization of geodata, respectively.

1. Objectives and conception The objective of this project is to develop an overall concept for the acquisition, management, usage and visualization of geodata for mobile geoservices. The technical feasibility and acceptance of the methods used within the geosciences will be shown by components of a prototype system.


Within the project, different aspects of geodata processing have to be examined. On the one hand, a central server unit should allow geodata access from different sources. Data are provided to another kind of component, the mobile devices, through standardized geoservices from heterogeneous data sets as separate objects. The mobile device, which is carried by the user, allows online communication with the data provided by the server unit. Using the connection to external sensors and measuring units, the mobile terminal can use the transmitted data on site directly. Functionalities adapted to the object structures in use are offered to the user. This also allows appropriate data sets to be manipulated on site when the acquisition and transmission of geoobjects to the server are not possible online. Furthermore, augmented reality (AR) components have to be taken into account for visualizing database query results. This should allow for mobile on-site acquisition of spatial objects in the final stage of the project. Fields of application are all tasks where the visualization of objects that are not directly visible (e.g. geological structures, soil parameters, DTMs, upcoming engineering projects, boundaries of parcels, underground supply lines) can improve on-site data acquisition. In figure 1 the proposed system architecture is presented. The client applications at the Karlsruhe, Heidelberg and Munich nodes, focussing on AR viewing, acquisition, update, use, analysis and visualization of geodata, are connected with a


database component for a service-oriented 3D/4D-GIS (node Vechta) and with other geoservices at the server sites. The project partners have divided the development of the above-described tasks as follows: - development of web-based GIS components for online access to spatio-temporal objects in geoservices (Vechta); - mobile acquisition, updating, usage/analysis and visualization of geodata (Heidelberg, Munich); - online visualization, processing and acquisition of 3D data sets with a mobile terminal backed up by augmented reality (AR) (Karlsruhe);

- definition of standardized interfaces for geoservices (Munich). During the development of all components, the main intention is to use intensive and methodical research to ensure an ideal usage of new technologies within newly developed workflows. The need for a mobile geographic data management tool (particularly for mobile data acquisition) clearly exists in all disciplines of the geosciences. A few scenarios shall be pointed out as examples; their qualified and methodical specification will be described in a following paper. - The transmission of freshly measured values into remote databases makes the new findings and information available to other users and for further data processing. The results of this processing have to

Figure 1: System architecture.



be made available to the field worker immediately. In this way, data with high relevance to the present situation (e.g. meteorological data) are quickly and easily accessible for the mobile user. - In the opposite direction, all the data stored in the heterogeneous distributed databases are placed at the field worker's disposal. There is no necessity to take all potentially needed data out to the field. On the contrary, one can decide on the spot which data are needed and should be transferred to the mobile device. Newly acquired data can be compared with data stored in the databases, and the knowledge gained supports the determination of the next measurement spot without the necessity of interrupting the process of data acquisition by returning to the office. - The integration into the geographic data infrastructure (GDI) currently under development furthermore makes it possible to obtain data without knowing its concrete storage location. For example, a geologist who has problems determining the actual strata due to a highly complex terrain morphology will be able to get further information about the area by sending requests to a catalogue server. Data registered at this server will be delivered to the user by further services. The user might, for instance, receive parts of satellite images. Taking these images to the field is commonly restricted due to their immense size. If it turns out in the field that only small parts are needed, the necessary request (specifying the required sector) can be sent to a remote server, which starts the transmission of the demanded data. Of course, transmission capacities have to be taken into consideration.
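A back-of-the-envelope calculation illustrates why sector-wise requests matter over limited transmission capacity. The scene dimensions and the uncompressed-raster assumption are purely illustrative.

```python
# Rough estimate of transmitted data volume: a whole satellite scene
# versus only the requested window (assuming uncompressed 3-byte pixels;
# all sizes are invented for illustration).

def transfer_size_mb(width_px, height_px, bytes_per_px=3):
    """Uncompressed raster size in (decimal) megabytes."""
    return width_px * height_px * bytes_per_px / 1e6

full_scene = transfer_size_mb(8000, 8000)   # the complete image
window     = transfer_size_mb(512, 512)     # only the needed sector

print(f"full scene: {full_scene:.0f} MB, window: {window:.2f} MB")
```

Even under these crude assumptions, the requested sector is smaller than the full scene by more than two orders of magnitude, which is what makes on-demand field requests feasible at mobile data rates.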


2. State of the scientific and technical knowledge In the mid-1990s, a new generation of laptops enabled users to use spatial data offline, without a wired connection to a database. Due to the lack of mobile communications, offline data had to be synchronized before and after the mobile data acquisition process. The improvements in wireless communication enabled the user to connect to a spatial database in the field and even to manipulate the data (see ORACLE's and Autodesk's MAUI project: http://www.gis-news.de/news/autodesk_ora_palm.htm). But these approaches rely completely on proprietary systems and are focussed on technical implementation; there has not been any scientific investigation of these questions yet. Naturally, the heterogeneity of multiple spatial data sources is not taken into account; applications for the common usage of spatial data have not been provided so far. Progress in the field of mobile geocomputing, online access to spatial databases and the integration of external databases puts us in a leading position within the spatial scientific community (Caspary and Joos 2000). Geodata access over the internet and wireless communication is just at the beginning of its development, but there are several standardization efforts which try to accommodate W3C standards (W3C 1998). The OGC (Joos 2000, OGC 2000) is currently working on an XML-based representation of simple geographic features: GML. It still has to be investigated whether these specifications meet the requirements in practice. Prerequisites for mobile geoservices are innovations in the field of mobile technology and wireless communication. Recently, the new data communication standard GPRS (General Packet Radio Service) was launched in Germany. In the next couple of years, the changeover from GSM/GPRS to UMTS (Universal Mobile
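In the spirit of the XML-based feature encoding (GML) mentioned above, a simple point feature could be serialized roughly as follows. The feature type »Borehole« and the exact element nesting are a simplified sketch modelled on GML 2 simple features, not a schema-validated GML instance.

```python
# Hedged sketch: serializing a point feature in a GML-like XML encoding.
# The "Borehole" feature type is invented for the example.

def point_to_gml(fid, x, y):
    """Return a minimal GML-style fragment for one point feature."""
    return (
        f'<Borehole fid="{fid}">'
        f'<gml:pointProperty>'
        f'<gml:Point><gml:coordinates>{x},{y}</gml:coordinates></gml:Point>'
        f'</gml:pointProperty>'
        f'</Borehole>'
    )

print(point_to_gml("b42", 3457000.0, 5632000.0))
```

An encoding of this kind is what would travel between heterogeneous data pools and a mobile client, since any OGC-aware component can parse it regardless of the originating system.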


Telecommunications System) is expected. Mobile devices for location-based services are currently in rapid development and have very short innovation cycles. Additionally, the mobile hardware platforms are diverse (display size, computing performance, data rate) and there are many different system architectures around (laptops, PDAs, cell phones, card phones). Because of this, it has to be investigated which data transformations and server applications have to be developed to achieve a common representation of spatial data on mobile systems.
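A rough calculation of idealized transfer times (ignoring protocol overhead) shows what the generational change in data rates means in practice. The 2 MB payload is an arbitrary example; 48 kbit/s is a typical GPRS-class rate and 384 kbit/s an early UMTS-class rate.

```python
# Idealized transfer time for a payload at a given link rate, ignoring
# latency and protocol overhead (illustrative arithmetic only).

def transfer_seconds(size_mb, rate_kbit_s):
    # 1 MB (decimal) = 8000 kbit
    return size_mb * 8_000 / rate_kbit_s

gprs = transfer_seconds(2, 48)    # GSM/GPRS-era link
umts = transfer_seconds(2, 384)   # early 3G link

print(f"GPRS: {gprs:.0f} s, UMTS: {umts:.0f} s")
```

The order-of-magnitude gap explains why the distribution of functionality between client and server has to be adapted to the available data rate.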

3. Integration of the project in research programs and networks Within the Fifth Framework Programme (FP5) of the European Community, research in the field of the "User-friendly Information Society" (IST) has been funded. The objective of the first main task, "Services for citizens", is to provide users with easy and cheap access to top-quality services for different purposes. The goal is to allow easy internet access to commonly comprehensible information within the EU. Supplementary to this main goal, the project "Advancement of Geoservices" intends to develop and provide access to high-end geoservices especially designed for geoscientific experts. Within the project, the main focus is not to provide EU citizens with location-based services, but rather to develop location-independent services for internet-based and mobile access to geoscientifically relevant information.

4. Expected research results and references to current social discussions The efficient handling of the valuable resource geoinformation is one of the major challenges nowadays. In 80% of all decisions in private and public areas, direct or indirect spatial information is involved, but there are still no systems, not to mention applications, which are capable of processing distributed and heterogeneous data and providing them for further use. Still, there are no possibilities to modify existing heterogeneous data sets simultaneously while ensuring the consistency of the different databases during those updates. The importance of this topic is reflected in the »Große Anfrage« (major interpellation) of 29 representatives in the German Bundestag of 12 April 2001 (printed matter 14/3214). The economy also pays great attention to this matter, which results in significant investments in the standardization promoted by the OGC and ISO. Through active participation in these panels, the latest results in these areas are available and will be taken as the basis of the project development. The intended development of new concepts and techniques for the management of spatial and temporal objects for mobile geoservices could be groundbreaking for other areas of application too (e.g. life services and bioinformatics). Moreover, efficient internet-based and mobile access to spatial and temporal information offers new possibilities for the usage of geoinformation systems (GIS). In particular, the development of open system architectures enables mobile geoservices to access basic GIS functionalities, e.g. data visualization or database queries. Furthermore, the development of concepts for new efficient access technologies, database queries and user interfaces for mobile, location-independent services offers great innovation potential. Especially in this area, intensive research is required at the national level. It can be expected that other key areas of the geotechnologies will benefit from the developed software concept in an advanced state of the project. The prospects of success are considered excellent, as partners from informatics and the geo-environment are involved. The high availability and rising quality of information on our environment can play a major role in the management of social tasks in times of increasing globalization.
The new technologies for the usage of mobile geoservices developed within this project can easily be transferred to internet services. Furthermore, the public availability of documented geodata results in cost savings for upcoming data acquisitions and will become a major aspect of the maintenance of our information and knowledge society.

5. Description of project parts

5.1. Project »Development of component software for the internet-based access to geodatabase services« responsible: Martin Breunig, Research Centre for Geoinformatics and Remote Sensing (FZG), University of Vechta

Abstract The project aims to make a contribution to 3D/4D geodatabase research for the development of new component-based and mobile geoinformation systems. The application of reusable database software shall enable the user to compose their own geoservices with pre-defined components. This procedure is similar to the software engineering process supported by a CASE tool. The idea is to compose the functionality of different geoservices by object-oriented editors, user-defined data types and access methods in a service framework. The approach intends to eliminate one of the most obvious weak points of today's geoinformation systems: their closed system architecture. This situation can only be improved by providing open data and function access. Therefore, the efficient spatial and temporal access to geodata managed by new geoservices is a central task for the development of new geoscientific information systems. Furthermore, the project aims at examining the efficient access to geodatabase services via the WWW. The representation and the efficient management of static and dynamic geoobjects in databases have to be examined in detail. Easy-to-use plug-in technology shall ensure a high acceptance of the developed software by the user. Finally, in close cooperation with the project partners, it is planned to transfer a set of selected functions of a platform-independent mobile geoservice prototype with the help of a laptop or handheld client.

5.1.1. Preparatory work


The group of Martin Breunig has been working in several projects developing extensible geodatabase and geoinformation systems (Waterfeld and Breunig 1992; Bode et al. 1994; Balovnev et al. 1997; Alms et al. 1998; Breunig 2001; Breunig et al. 2003). Furthermore, the experiences gained in the completed IOGIS project (Voss and Morgenstern 1997) and in the Collaborative Research Centre 350, both at the Institute of Computer Science III of Bonn University (group of Armin B. Cremers), open new perspectives for the development of internet-based access techniques for spatial and temporal objects (Breunig et al. 1999; Breunig 2001; Breunig et al. 2001). Concerning the development of open spatio-temporal database services, the group also maintains a close exchange with the groups of the former European CHOROCHRONOS project and with other international groups (Worboys 1992; Snodgrass et al. 1996; Brinkhoff 1999; Sellis 1999; Güting et al. 2000).

5.1.2. Objectives and Conception The objectives of the project can be formulated in the following three steps: 1) Concept for the management of spatio-temporal objects within client-server architectures for (mobile) geodatabase services: Hitherto, such objects, being variable in space and time, could not be managed efficiently by mobile clients of a database management system. Therefore, new ways for the representation and the retrieval of these objects have to be developed. Furthermore, the distribution of the functions for high-class database queries between the client and the server has to be examined. 2) Development of component software for the access to geodatabase services: Databases will play a central role in the development of new geoservices (Brinkhoff 1999; Dittrich and Geppert 2001; Friebe 2001). However, the efficient access to spatio-temporal databases from the WWW and the filtering of geodata for mobile geoservices are still subjects of research. For example, functions for the processing of geometric 2D and 3D objects (e.g. in boundary representation) have to be developed and composed in a single component as part of a geoservice. 3) Evaluation of the concepts developed in steps 1) and 2) for the application field of geology: The software developed in the project shall be evaluated with a suitable application from the field of geology. We expect that the evaluation will give new impulses for the development of mobile services. The experiences are also expected to be transferable to other spatial application fields within and outside the geosciences, such as geomatics and bioinformatics. In future, mobile geoservices will also become more important for the flexible management of early diagnosis and the handling of environmental monitoring. Methodically, one of the main ideas of object-oriented software technology shall be pursued: the re-use of geospecific database component software for the examination of internet-based database access to spatial and temporal objects in new geoservices (Snodgrass et al. 1996; Szypersky 1998; Sellis 1999; Dittrich and Geppert 2001; Friebe 2001). Within a layer architecture, application-independent basic and advanced services shall be developed as tailored services for spatial applications. As an OGC and AGILE member, the University of Vechta is also up to date

concerning the current international standardization efforts.
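The management of objects that vary in space and time, as demanded in step 1), can be illustrated by a minimal sketch. The linear interpolation between stored epochs is an assumption made for the example, not the project's actual representation of spatio-temporal geometry.

```python
# Sketch of a spatio-temporal object: a geometry stored at discrete
# epochs, reconstructable for any instant in between (linear
# interpolation is an illustrative assumption).

import bisect

class SpatioTemporalObject:
    """Stores time-stamped 2D positions and reconstructs intermediate states."""

    def __init__(self):
        self._times = []
        self._states = []   # parallel list of (x, y) positions

    def add_state(self, t, xy):
        i = bisect.bisect(self._times, t)
        self._times.insert(i, t)
        self._states.insert(i, xy)

    def state_at(self, t):
        """Return the (interpolated) geometry at time t."""
        i = bisect.bisect_left(self._times, t)
        if self._times[i] == t:
            return self._states[i]
        t0, t1 = self._times[i - 1], self._times[i]
        (x0, y0), (x1, y1) = self._states[i - 1], self._states[i]
        w = (t - t0) / (t1 - t0)
        return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))

obj = SpatioTemporalObject()
obj.add_state(0.0, (0.0, 0.0))
obj.add_state(10.0, (100.0, 50.0))
print(obj.state_at(5.0))   # state half-way between the two epochs
```

A mobile client would only ever request `state_at` for the instant it needs, rather than transferring the full history of the object.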

5.2. Project »Development of mobile components and interfaces for geoservices« responsible: Wolfgang Reinhardt, AGIS, University of the Bundeswehr Munich; Rainer Malaka, European Media Laboratory GmbH, Heidelberg Abstract This part of the project deals with the development of a mobile client for the visualization and manipulation of spatial data. The potential arising from online access to multiple heterogeneous spatial databases will be investigated. With regard to international standardization efforts, the required interfaces will be designed. Further investigation is required on assuring the consistency of the database during data transfers between the database and mobile systems. During the conceptual phase, different system architectures for the mobile client will be considered and a concept for a mobile client-server spatial data infrastructure will be developed. A very new approach, which supports the user during the data acquisition process, is summarized under the term system-aided data acquisition. The results of these investigations will be tested through the development of a prototype of the proposed system.

5.2.1. Preparatory Work The GIS lab at the University of the Bundeswehr (AGIS) is a member of the OGC and takes part in the standardization process of ISO/TC 211. AGIS is experienced in the field of mobile GIS and location-based services through the projects VISPA, ALOIS and PARAMOUNT (Caspary and Joos 2000; Heister et al. 2000; Leukert and Reinhardt 2000; Löhnert et al. 2000; Reinhardt 2001; Reinhardt and Sayda 2001; Sayda and Wittmann 2001).



The project VISPA (Virtual Sports Assistant), funded by the EU, is carried out by AGIS and IfEN GmbH. In this project, a prototype of a mobile value-added service for mountaineers is developed. The system is based on a mobile device which mountaineers can use to connect to several GIS-based services such as mapping or emergency call services. ALOIS is a navigation system for the localization of locomotives on railway networks. The system consists of an onboard navigation unit and an office controlling segment. The position is obtained using sensor fusion technologies with DGPS, inertial sensors and an odometer. Additionally, a communication protocol was established which allows the transmission of both position information and other textual information. PARAMOUNT aims at improving user-friendly info-mobility services for over 150 million mountaineers in the EU by combining telecommunications (GSM/UMTS) and satellite navigation (GNSS) with geographic information systems (GIS). The main intention is to use these technologies to provide an LBS which increases the safety of mountain hikers and climbers.
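The sensor-fusion principle used in ALOIS can be illustrated in one dimension: an along-track position is dead-reckoned from the odometer and pulled toward a DGPS fix whenever one is available. The scalar blending gain is a stand-in for a proper Kalman filter, and all numbers are invented.

```python
# Toy 1D sketch of odometer/DGPS fusion (not the ALOIS implementation):
# dead-reckon with the odometer, correct with an occasional GPS fix.

def fuse(position, odometer_delta, gps_fix=None, gain=0.3):
    """Advance the along-track position by the odometer reading, then
    blend in a GPS observation if one is available."""
    predicted = position + odometer_delta
    if gps_fix is None:
        return predicted
    # pull the prediction toward the observation (fixed gain for brevity)
    return predicted + gain * (gps_fix - predicted)

pos = 0.0
pos = fuse(pos, 12.0)                 # odometer only
pos = fuse(pos, 11.5, gps_fix=24.0)   # DGPS correction available
print(round(pos, 2))
```

In a real system the gain would be derived from the sensor covariances, and inertial measurements would enter the prediction step as well.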

5.2.2. Objectives and Conception Six main research topics are considered within this project: 1) Methodical examination The change from the mobile geoinformation systems used nowadays to on-site mobile online data acquisition raises a series of scientific questions that should be the topic of further research. First of all, it should be investigated which methods are capable of mobile analysis and modification of heterogeneous data. Solutions should therefore be developed for resolving emerging data inconsistencies. Starting from the currently available and planned data communication standards, research should focus on how the performance of the system can be improved depending on the data transfer rate. Within the workflow that has to be defined, new quality assurance methods should be implemented. This includes the registration of quality control parameters during data acquisition as well as a revision by a controlling instance. 2) Definition of standard interfaces The specifications currently released or under development by the OpenGIS Consortium (data services, feature service, coverage service, catalogue service, exchange service, mapping services) do not meet the requirements of mobile computing. Additional interface specifications have to be developed during this project. Within this development process, we aim to contribute our experiences to the OGC standardization process. In addition, the research topic of differential updating of spatial databases has to be picked up. This means that standardized interfaces have to be designed which allow the modification of single objects in the database. Beyond that, the location services (location application servers, location data servers) currently under development by the OGC have to be examined and advanced. 3) Secure and consistent data transmission Data consistency is a major issue within the domain of mobile geographic data management systems. Recorded data have to be stored on the mobile device until the end of the data transmission in order to ensure consistency even in the case of a connection breakdown. Techniques and methods to ensure lossless data transmission form an important aspect of the project. These techniques have to be independent of the underlying protocol, no matter whether it is a packet-switched (GPRS, UMTS) or a circuit-switched network connection (GSM, HSCSD). 4) Concept for mobile clients and the client-server architecture Different probable kinds of architecture will be analyzed during the first stage of the conceptual phase. Mandatory and useful functionalities of the mobile devices will be surveyed (e.g. data capture, object building, etc.) to figure out the demands on the system architecture and hardware. The data capturing on site is accomplished by measuring instruments equipped with digital outputs or with analogue instruments. The analysis and interpretation of analogue devices are supported by graphical user interfaces generated automatically by distinct applications running on the server side. The spectrum of supported instruments could contain GPS receivers, tachymeters, digital cameras, seismographs and other devices specified at a later date within the project's timeframe. The captured data will be made usable to the application instantly. The conversion of proprietary measured values, a necessity up to now, will become dispensable. Successful use of mobile data capturing systems within distributed client-server architectures depends mainly on two distinctive components: firstly, the performance and stability of the established cellular phone network connection; secondly, the performance of the mobile device. Therefore, some preliminary inspections have to be carried out, particularly to explore the mobile radio transmission capacity and stability in different terrains. The use of interfaces and transmission protocols as well as the distribution of intelligence and functionality between the mobile client and the server will depend on the experiences made during this early stage. The goal is to tune the system to maximal performance. Furthermore, the distribution of functionality will be adjusted to the type of mobile devices in use. Experiences made during the prototypical implementation and performance testing will be used to update the interface and protocol specifications in use.
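The store-until-acknowledged idea of topic 3) can be sketched as follows. The outbox class and the simulated flaky channel are illustrative only; a real implementation would sit on top of whichever packet- or circuit-switched bearer is in use.

```python
# Sketch of lossless transmission despite connection breakdowns: records
# stay in a local outbox on the mobile device until the server
# acknowledges them. The unreliable channel is simulated.

class Outbox:
    def __init__(self, send):
        self._send = send        # callable returning True on server ack
        self._pending = []

    def record(self, measurement):
        """Persist a measurement locally before any transmission attempt."""
        self._pending.append(measurement)

    def flush(self):
        """Retry transmission; unacknowledged records remain on the device."""
        still_pending = [m for m in self._pending if not self._send(m)]
        self._pending = still_pending
        return len(self._pending)

attempts = {"n": 0}
def flaky_send(m):
    attempts["n"] += 1
    return attempts["n"] > 2     # first two transmissions break down

box = Outbox(flaky_send)
for m in ("q1", "q2", "q3"):
    box.record(m)
print(box.flush(), box.flush())
```

Because nothing is removed from the outbox before an acknowledgment arrives, a breakdown mid-transfer costs a retry but never a record.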

5) Server-side supported data capture The applications to be developed, provided by application servers, simplify read and write access to inhomogeneous distributed databases. To facilitate a simple maintenance of previously recorded data in the field, the applications will generate graphical user interfaces automatically. The definitions will be read from metadata information. Consequently, the system ensures the maintenance of all attributes attached to the geographic data stored in the databases. This realizes a first quality control mechanism. Further services will be developed to support the user recording data in the field and to ensure a high level of quality assurance. These could be: (1) a service generating optimal measuring nets for specific tasks, providing propositions for the location of the next measurement; (2) interpolation services that process the already captured data and provide a large amount of information to the user in the field, who can then determine the location of the follow-on measuring point by interpreting the interpolation results provided by the server side; (3) quality assurance services that use further information stored in the databases (e.g. a DGM) to inform the user if the location of the actual measuring point cannot be determined within a fixed level of discrepancy from GPS-facilitated positions due to shadowing effects. In all cases, as much functionality as possible will be placed on the server side to reduce the requirements for the mobile devices. In any case, the transmission capacity has to be taken into consideration. 6) Prototypical developments and field inquiry The developed concepts and architectures will be tested and proved together with the EML Heidelberg. The prototype will demonstrate the capacity and efficiency of the chosen technologies and architectures as well as the technical realization of the entire project. EML has long-lasting experience in the field of databases and mobile technologies.
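Services (1) and (2) could, for instance, propose the next measurement location as the candidate point least supported by the existing samples. This farthest-point heuristic is an invented stand-in for a real geostatistical criterion such as kriging variance.

```python
# Illustrative server-side heuristic: among candidate locations, propose
# the one farthest from every existing measurement, i.e. where an
# interpolation would be least constrained.

from math import hypot

def propose_next(candidates, measured):
    """Return the candidate with the maximal distance to its nearest sample."""
    def nearest(c):
        return min(hypot(c[0] - m[0], c[1] - m[1]) for m in measured)
    return max(candidates, key=nearest)

measured = [(0, 0), (10, 0)]
grid = [(x, y) for x in range(0, 11, 5) for y in range(0, 11, 5)]
print(propose_next(grid, measured))
```

Running this on the server keeps the mobile client thin: it only transmits new measurements and receives a suggested coordinate back.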



5.3. Project »Mobile Augmented Reality GIS Client« responsible: Joachim Wiesel, Institute of Photogrammetry and Remote Sensing (IPF), University of Karlsruhe Abstract A mobile GIS client for the collection and update of 3D databases shall be developed on the basis of Augmented Reality (AR) techniques. AR techniques can significantly improve the quality and productivity of GIS client components by superimposing feature data onto the real world.

5.3.1. Preparatory Work A multi-tier 2D/3D GIS architecture (GISterm) has been developed in the research projects GLOBUS and AJA (Hofmann et al. 2000a; Hofmann et al. 2000b; Veszelka and Wiesel 2000). The Ministry for the Environment of the State of Baden-Wuerttemberg is financing this technology and deploying it in its state agencies. GISterm is a platform-independent Java framework which allows different functionalities to be placed on several nodes of a federated spatial information system. Recent developments are the integration of 3D visualization functions for hydrogeological applications and for studying the impact of planned buildings and infrastructures on the groundwater system using Java 3D. In project C6 of the Collaborative Research Centre 461 »Strong Earthquakes«, financed by the DFG, first experiments have been performed using AR hardware and software (Bähr and Leebmann 2001) in disaster management scenarios. In the project »Geodetic Deformation Analysis«, sponsored by the DFG, 3D visualization methods are studied and implemented for the 3D/4D visual inspection of spatial movements and their interactions with variations of model parameters (Faulhaber and Wiesel 2001).


In project C5 of the Collaborative Research Centre 461 »Strong Earthquakes«, methods for the extraction and modeling of topographical objects from laser scanner data are studied (Bähr et al. 2001).

5.3.2. Objectives and Conception Mobile data communication is nowadays common at quite low bandwidth (GSM, HSCSD, GPRS up to 48 kbit/s); future 3G cell phone networks (UMTS) will transport data at up to several hundred kbit/s. These techniques will enable multimedia applications on handheld computers in real time. The goal of this project is to use AR methods, as they are already used in e.g. CAD, facility maintenance (Müller 2001) and GIS (Afshar 1997; Zlatanova and Verbree 2000), to support and improve 3D data capture and update in the field, wireless and in real time (Hollerer et al. 1999). To reach this goal, we define six work packages: 1) Selection, evaluation and test of hardware and software for 3D projection in a mobile environment. Design and experimental implementation of a multi-tier software architecture taking into consideration the limited resources of mobile systems (speed, storage, communications capacity). Porting and adaptation of the GISterm framework to the selected hardware and software environment. 2) Orientation and navigation of the AR visualization and data capture system using DGPS and INS. Evaluation of methods to improve the precision of the system by using control features from the surrounding natural objects (e.g. by imaging or range finding). 3) Study of methods for 3D measurement in a mobile real-time environment. Examination of which methods are economic, feasible and precise enough (e.g. laser scanners, cameras, range finders).


4) Development of an interface for real-time access to a 3D spatial database system for data capture and update. Study of the impacts of slow and less stable communication links. Development of protocols to deal with this environment.
5) Visualization of data and features in an AR environment. Study of how 2D cartographic concepts can be transferred to the 3D case, which display techniques result in readable features, and what the impact of feature abstraction and simplification is. Study of how user interfaces have to be implemented in the AR environment and how to precisely overlay displayed features and the real world; study of eye tracking and other AR technologies.
6) Test of the developed technologies (hardware and software) embedded in the common mobile testbed (2D mobile clients, spatial database). Evaluation of commercial markets for the developed solutions.

6. Outlook
In future, new mobile geoservices could help to solve one of today's most challenging information technology requirements of geoscientists: the capture, modification and visualization of underground geo-objects in order to analyse planning processes or geological processes directly in the terrain. This vision could come true by providing efficient geodatabase management systems and modern augmented reality methods within a web-based geoinformation infrastructure. The interactions between micro-, meso- and macro-scale geological processes in limited areas could then without question be better understood and analysed. Furthermore, new insights could be obtained by the synopsis of underground observations and the application of new information technology methods.

Literature

Afshar, M. (1997): Mapping the Real World for Virtual and Augmented Reality. 4th CaberNet Radicals Workshop, Sept. 17th-20th, Rethimnon, Crete.

Alms, R., Balovnev, O., Breunig, M., Cremers, A.B., Jentzsch, T., Siehl, A. (1998): Space-Time Modelling of the Lower Rhine Basin supported by an Object-Oriented Database. In: Physics and Chemistry of the Earth, Vol. 23, No. 3, Elsevier Science, pp. 251-260.

Bähr, H.P., Leebmann, J. (2001): Wissensrepräsentation für Katastrophenmanagement in einem technischen Informationssystem (TIS). Report of SFB 461, pp. 597-638.

Bähr, H.P., Steinle, E., Vögtle, T. (2001): Bildanalyse in Geowissenschaften und bei Ingenieurmaßnahmen. Report of SFB 461, pp. 549-596.

Balovnev, O., Breunig, M., Cremers, A.B. (1997): From GeoStore to GeoToolKit: the Second Step. In: Proceedings of the 5th Intern. Symposium on Spatial Databases, Berlin, LNCS No. 1262, Springer, Berlin, pp. 223-237.

BMBF (2001): Bekanntmachung der Förderrichtlinien »Informationssysteme im Erdmanagement: von Geodaten zu Geodiensten« im Rahmen des BMBF/DFG-Sonderprogramms GEOTECHNOLOGIEN vom 05. Juli 2001. In: Bundesanzeiger, Vol. no. 128 of July 2001, pp. 14371.

Bode, T., Breunig, M., Cremers, A.B. (1994): First Experiences with GEOSTORE, an Information System for Geologically Defined Geometries. In: Proceedings of the Intern. Workshop on Advanced Research in Geographic Information Systems, Monte Verita, Switzerland, LNCS No. 884, Springer, Berlin, pp. 35-44.

Breunig, M. (2001): On the Way to Component-Based 3D/4D Geoinformation Systems. Lecture Notes in Earth Sciences, No. 94, Springer Verlag, 199 p.

Breunig, M., Cremers, A.B., Götze, H.J., Schmidt, S., Seidemann, R., Shumilov, S., Siehl, A. (1999): First Steps Towards an Interoperable GIS - An Example From Southern Lower Saxony. In: Physics and Chemistry of the Earth, Elsevier Science, Oxford, pp. 179-189.

Breunig, M., Cremers, A.B., Müller, W., Siebeck, J. (2001): New Methods for Topological Clustering and Spatial Access in Object-Oriented 3D Databases. In: Proceedings of the ACM GIS Conference, Atlanta, GA, 6 p.

Breunig, M., Türker, C., Böhlen, M.H., Dieker, S., Güting, R.H., Jensen, C.S., Relly, L., Rigaux, P., Schek, H.J., Scholl, M. (2003): Architecture and Implementation of Spatio-Temporal DBMS. In: Spatio-Temporal Databases - The CHOROCHRONOS Approach, Academic Press, 75 p., in print.

Brinkhoff, T. (1999): Requirements of Traffic Telematics to Spatial Databases. In: Proceedings of the 6th Intern. Symposium on Large Spatial Databases, Hong Kong, China. Lecture Notes in Computer Science, Vol. 1651, pp. 365-369.

Buhmann, E., Wiesel, J. (2001): GIS-Report 2001 - Software, Daten, Firmen. Bernhard-Harzer-Verlag, Karlsruhe, ISBN 3-9803128-7-9, 302 p.

Caspary, W., Joos, G., Mösbauer, M. (2000): Multimedia und mobile GIS. In: Zeitschrift für Vermessungswesen, no. 125, pp. 272-279.

Dittrich, K.R., Geppert, A. (Eds.) (2001): Component Database Systems. dpunkt Verlag, Heidelberg.

Faulhaber, K., Wiesel, J. (2001): Visualisierungstechniken bei der geodätischen Deformationsanalyse. Presented Paper, Geodätische Woche, Cologne.

Friebe, J. (2001): Architekturen für komponentenbasierte Geographische Informationssysteme im Internet. Logos Verlag, Berlin.

Güting, R.H., Böhlen, M.H., Erwig, M., Jensen, C.S., Lorentzos, N.A., Schneider, M., Vazirgiannis, M. (2000): A Foundation for Representing and Querying Moving Objects. In: ACM Transactions on Database Systems, Vol. 25, No. 1, March.

Heister, H., Koppers, L., Musäus, S., Plan, O., Reinhardt, W. (2000): ALOIS - Ein integriertes Lokortungssystem. In: Brunner, F.-K., Ingensand, H., Schilcher, M., Schnädelbach, K. (Eds.): Ingenieurvermessung 2000 (XIII. International Course on Engineering Surveying). Verlag Konrad Wittwer, Stuttgart, pp. 329-333.

Hofmann, C., Hilbring, D., Veszelka, Z., Wiesel, J., Müller, M. (2000a): GISterm - Weiterentwicklung des flexiblen Frameworks zur Analyse und Visualisierung von raumbezogenen Daten. In: R. Mayer-Föll, A. Keitel, A. Jaeschke (Eds.): Projekt AJA - Anwendung JAVA-basierter Lösungen in den Bereichen Umwelt, Verkehr und Verwaltung, Phase I 2000, Forschungszentrum Karlsruhe, Wissenschaftliche Berichte, FZKA 6565, Dec. 2000, pp. 123-144.

Hofmann, C., Weindorf, M., Wiesel, J. (2000b): Integration of GIS as a Component in Federated Information Systems. Archives of the International Society for Photogrammetry and Remote Sensing (ISPRS), Vol. XXXIII, Part B4, Proc. ISPRS Congress, Amsterdam, July, pp. 1173-1180.

Hollerer, T., Feiner, S., Terauchi, T., Rashid, G., Hallaway, D. (1999): Exploring MARS: Developing Indoor and Outdoor User Interfaces to a Mobile Augmented Reality System. Computers & Graphics 23(6), pp. 779-785.

Joos, G. (2000): Internet-GIS: Entwicklungen beim OGC. In: FH Karlsruhe - Hochschule für Technik: Web Mapping 2000 Symposium, pp. X.1-X.12.

Leukert, K., Reinhardt, W., Seeberger, S. (2000): GIS-Internet-Architekturen. In: Zeitschrift für Vermessungswesen 125 (2000), no. 1, pp. 23-28.

Löhnert, E., Wittmann, E., Pielmeier, J., Sayda, F. (2000): VISPA - A Mobile Digital Tour Guide for Mountaineers. In: Proceedings of the 7th EC-GI & GIS Workshop, Potsdam, Germany.

Müller, S. (2001): Virtual Reality - Augmented Reality. INI-GraphicsNet brochure, Fraunhofer IGD, Darmstadt.

OGC (2000): OpenGIS Consortium: Geography Markup Language (GML) 1.0, May, http://www.opengis.org/techno/specs.htm.

Pasman, W., van der Schaaf, A., Langendijk, R.L., Jansen, F.W. (2000): Information Display For Mobile Augmented Reality. Delft University of Technology, Faculty for Information Technology and Systems, UBICOM Project Report.

Reinhardt, W., Sayda, F. (2001): GIS and Location Based Services for Mountaineers - Concept and Prototype Realization. Digital Earth Conference, Fredericton, Canada, July.

Reinhardt, W. (2001): Concept of a GIS and Location Based Services for Mountaineers. Proceedings of the 4th AGILE Conference.

Reinhardt, W. (2000): Principles and Application of Geographic Information Systems and Internet/Intranet Technology. In: Proceedings of New Information Processing Techniques for Military Systems, Istanbul.

Reinhardt, W. (2001): Zur Bedeutung der Internettechnologie in Geoinformationssystemen. In: Proceedings of the Internationale Geodätische Woche, Obergurgl.

Sayda, F., Wittmann, E. (2001): VISPA - Virtual Sports Assistant. In: Band 10: Photogrammetrie - Fernerkundung - Geoinformation: Geodaten schaffen Verbindungen. Publikationen der Deutschen Gesellschaft für Photogrammetrie und Fernerkundung, Berlin, 21. Wissenschaftlich-Technische Jahrestagung der DGPF, ISSN 0942-2870, pp. 215-222.

Sellis, T. (1999): Research Issues in Spatio-Temporal Database Systems. In: Proceedings of the 6th Intern. Symposium on Large Spatial Databases, Hong Kong, China. Lecture Notes in Computer Science, Vol. 1651, Springer Verlag, pp. 5-11.

Snodgrass, R.T., Böhlen, M.H., Jensen, C.S., Steiner, A. (1996): Adding Valid Time to SQL/Temporal. ANSI X3H2-96-152r1, ISO-ANSI SQL/Temporal Change Proposal, ISO/IEC JTC1/SC21/WG3 DBL MCI-142, May.

Szyperski, C. (1998): Component Software - Beyond Object-Oriented Programming. Addison Wesley, Essex, England, 411 p.

Veszelka, Z., Wiesel, J. (2000): Performance Issues in Design and Implementation of GISterm. Archives of the International Society for Photogrammetry and Remote Sensing (ISPRS), Vol. XXXIII, Part B4, Proc. ISPRS Congress, Amsterdam, July 2000, pp. 1122-1129.

Voss, H.H., Morgenstern, D. (1997): Interoperable Geowissenschaftliche Informationssysteme (IOGIS). In: GIS 2/97, Wichmann-Verlag, Heidelberg, pp. 5-8.

W3C (1998): World Wide Web Consortium: Extensible Markup Language, Feb., http://www.w3.org/TR/REC-xml.

Waterfeld, W., Breunig, M. (1992): Experiences with the DASDBS Geokernel: Extensibility and Applications. In: From Geoscientific Map Series to Geo-Information Systems, Geolog. Jahrbuch, A(122), Hannover, pp. 77-90.

Worboys, M. (1992): A Model for Spatio-Temporal Information. In: Proceedings of the 5th Intern. Symposium on Spatial Data Handling, Charleston, SC, Vol. 1, pp. 602-611.

Zlatanova, S., Verbree, E. (2000): A 3D Topological Model for Augmented Reality. Second International Symposium on Mobile Multimedia Systems and Applications, 9-10 November 2000, Delft, The Netherlands, pp. 19-26.





New Methods for Semantic and Geometric Integration of Geoscientific Data Sets with ATKIS – Applied to Geo-objects from Geology and Soil Science Sester, Monika (1)*; Butenuth, Matthias (2); Gösseln, Guido von (1); Heipke, Christian (2); Klopp, Sascha (3); Lipeck, Udo (3); Mantel, Daniela (3)

(1) Institut für Kartographie und Geoinformatik, Universität Hannover (2) Institut für Photogrammetrie und GeoInformation, Universität Hannover (3) Institut für Informationssysteme – FG Datenbanksysteme, Universität Hannover * corresponding address: Univ.-Prof. Dr.-Ing. Monika Sester, Institut für Kartographie und Geoinformatik, Universität Hannover, Appelstr. 9a, 30167 Hannover, Germany, E-Mail: monika.sester@ikg.uni-hannover.de

1. Overview
Although in geoscientific applications the topography of the Earth's surface, and thus topographic data sets, constitutes a common base for most related data sets, discrepancies and even disagreements often arise when inspecting one and the same object in different data sets. This becomes visible when superimposing different data sets of objects that are identical in reality. The reason is that the different data sets are typically based on different data models and have been collected for different purposes; thus, different aspects of reality are important and have consequently been mapped. Also, different sensors are used, data acquisition takes place at different dates, data representation differs (for example, vector versus raster data), and so do the resolution and the quality of the data. Data integration is a big issue today, when more and more digital data sets are being collected and made available; in the »Geotechnologien-Programm«, too, the improved access to and use of data is an important and crucial aspect. Due to the heterogeneity of the data it is complicated, and sometimes even impossible, to handle them in a coherent manner. In some cases this even leads to new data acquisition.


The integration of inhomogeneous data is therefore becoming more and more important. The benefits of an integration are:
- To use the stored data for various purposes and applications: information which is not contained in one database can be taken from another one.
- To complete and enhance the databases thematically: for instance, new thematic information can be derived from the integration of one data set with another.
- To automatically verify the stored data regarding their quality, to correct them, or to improve their accuracy.
Basically, this means that new data acquisition – typically the most expensive part of spatial analysis tasks – can be largely reduced and is only required if no data are available or changes in reality have occurred. Consequently, a considerable saving of cost and labour is obtained by adding significant value to the existing data. The work undertaken in the proposed project aims at providing methods which can be used by different applications for an integrated use of data from different sources. The integration will be treated on the basis of the combination of general data types (vector-vector, raster-vector), and will be conducted in three sub-projects.

1.1. Project director, co-operation partners
The project will be undertaken in a co-operation of three institutes of the University of Hannover, together with the Bundesamt für Kartographie und Geodäsie (BKG) in Frankfurt and the Niedersächsisches Landesamt für Bodenordnung (NLfB) in Hannover. Main contractors are the institutes of the University:
- Institut für Kartographie und Geoinformatik (ikg): Prof. Dr.-Ing. Monika Sester, Dipl.-Ing. Guido von Gösseln (project management)
- Institut für Photogrammetrie und GeoInformation (IPI): Prof. Dr.-Ing. Christian Heipke, Dipl.-Ing. Matthias Butenuth
- Institut für Informationssysteme, Fachgebiet Datenbanksysteme: Prof. Dr. Udo Lipeck, Dipl.-Inform. Sascha Klopp, Dipl.-Inform. Daniela Mantel
Co-operation partners:
- BKG: Dr.-Ing. Heinrich Jochemczyk (official geo-data)
- NLfB: Dr. Horst Preuß (thematic information systems)
- NLfB: Dr. Henning Bombien (geology)
- NLfB: Dr. Jan Sbresny (soil science)

In close co-operation among the partners new concepts and methods for data integration will be developed and tested. The co-operation is particularly important during the first project phase, when the characteristics and semantics of the objects of interest are defined. At a later stage the developed techniques will be evaluated by the co-operation partners, and if possible they will be integrated in their daily production work flow. Thus, the scientific research and development is verified by means of applications relevant for practical work.

1.2. Project goal and conceptual aspects
In this project the following data sets will be integrated: the geo-scientific digital vector data sets Geological Map (GK) and Soil Science Map (BK), the topographic geo-base data (Basis-DLM) from ATKIS (Authoritative Topographic-Cartographic Information System) of the State Surveying Authorities, as well as aerial imagery in digital form. The objectives of this project are: (1) the development of techniques for the integration of the digital Soil Science Map and the digital Geological Map of the State Geological Survey with the Basis-DLM; (2) the automated enhancement of the digital Soil Science Map with information from current aerial images, also in combination with the ATKIS Basis-DLM;

Figure 1: Comparison of different data integration techniques.



(3) the access to these integrated data sets in a federated spatial database. On the one hand, these objectives lead to two sub-projects in which the problems of the particular data combinations will be examined on the basis of specific tasks. On the other hand, in the third sub-project general techniques for the integration of databases will be developed. The objects and data types dealt with in the sub-projects are general enough to be transferable to further related problems. Due to the common spatial reference, a transformation of geological and soil science data onto the ATKIS Basis-DLM or an orthophoto is in principle possible with relatively simple techniques such as overlay. In this project, however, the aim is to achieve an object-related data integration which will allow for the exchange of data stemming from different sources, representations and structures, and thus will establish a base for performing combined, integrated analyses. For the object-related integration the corresponding objects in the different data sets have to be identified. Once these correspondences are established, various possibilities for information exchange between the data arise:
- Integrated access to data sets: after integration, processes like queries, adaptations or updates can be performed based on the established links.
- Adaptation and transfer of geometry: after matching, two representations are available for each object in reality. If the objects were collected with different accuracy, a new combined geometry can be created which takes the original accuracies into consideration. In this way data with lower accuracy can be adapted to data of higher accuracy.
- Adaptation and transfer of thematic information: the thematic characteristics derived from the particular descriptions can be exchanged. This leads to a refinement and enrichment of the data sets.
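The thematic transfer described above can be sketched in a few lines. The data model below (object IDs, attribute dictionaries, a prefix for provenance) is purely hypothetical and serves only to illustrate how established correspondence links let attributes flow from one data set to another:

```python
# Minimal sketch of attribute transfer along correspondence links
# (hypothetical data model, not the project's actual implementation).

def transfer_attributes(links, source, target, attributes):
    """Copy selected attributes from source objects to linked target objects.

    links      -- list of (source_id, target_id) correspondence pairs
    source     -- dict: object id -> attribute dict (e.g. soil science map)
    target     -- dict: object id -> attribute dict (e.g. ATKIS objects)
    attributes -- names of the attributes to propagate
    """
    for src_id, tgt_id in links:
        for name in attributes:
            if name in source[src_id]:
                # Prefix with the source data set to keep provenance visible.
                target[tgt_id]["soil:" + name] = source[src_id][name]
    return target

soil = {"bk_17": {"soil_type": "podzol", "erosion_risk": "high"}}
atkis = {"atk_42": {"object_class": "water"}}
result = transfer_attributes([("bk_17", "atk_42")], soil, atkis,
                             ["soil_type", "erosion_risk"])
```

After the call, the ATKIS object carries the soil attributes of its matched partner while its own attributes remain untouched.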


The advantage of such possibilities for information exchange will be explored and proven in close collaboration with the project partners of the geo-scientific arena.

1.3. Combination of the results
The results of the three sub-projects lead to an enrichment of the relevant data sets. This information can be used by all project partners. For example, one can imagine first using the ATKIS information for the interpretation of a given vegetation phenomenon in the images, and subsequently extracting the corresponding geological information, which may provide evidence about the sub-surface characteristics. Thus, the integration of all available data sources can lead to new results. The federated database will allow for the integrated analysis of all available information.

2. Integration of different vector data
In the first sub-project, carried out at the Institut für Kartographie und Geoinformatik (ikg) of the University of Hannover in co-operation with the NLfB and BKG, techniques will be developed for matching objects of the base data set ATKIS on the one hand with the digital Soil Science Map and the digital Geological Map on the other hand. This allows for the explicit and direct linking of geo-scientific base data to the ATKIS data. Such a link has so far only been given implicitly, established by the fact that the geo-scientific data were originally acquired based on the topographic maps. This interrelation can be made explicit by identifying common objects in both data sets. First of all, it has to be defined on an object class level; based on a semantic correspondence, individual objects can then be matched. For instance, for the integration of the Soil Science Map and ATKIS, the object class »water« is relevant, as it is represented in both data sets (cf. Figure 2, right). Furthermore, roads are


Figure 2: Overlay of the different vector data-sets: left-image: soil science map and geological map, right image: soil science map, ATKIS and selected object »water« in ATKIS.

important, as they form the natural and artificial borders of land utilization. Overlaying the three different maps reveals different kinds of objects that are candidates for the matching methods. Certain areas of the soil science map show the same geometry as objects in the geological map (Figure 2, left). In the first phase of the project, the possibly corresponding object classes have to be identified. First investigations show that the ATKIS objects which will be the main target of the first research will be »water«, »settlements«, »streets« and »borders«, as well as the objects which show fairly the same geometry in the geological and the soil science map. In the next step there will be a comparison of the three data sets with focus on these four object classes. The matching can be divided into the following two steps:
1. Matching, i.e. the identification of corresponding individual objects using semantic-geometrical matching algorithms: a mutual linking of the objects already allows for a multitude of options, like the exchange of attributes.
2. Integration of the linked objects by harmonization of geometry and thematic information: in this way, one consistent object geometry can be obtained. The harmonization is based on transformations that will take the accuracy of the original data into account. The result of the adjustment is a new, common object geometry as well as quality measures with respect to the transformation. This adaptation requires knowledge about the relative significance of the objects as well as their accuracies. The generation of the common geometry can be achieved in different ways:
- The object of theme A has a higher importance or accuracy, respectively, so that the object of theme B will be adapted.
- The importance or accuracy values of the respective objects are given – the new object geometry is obtained as a »weighted mean« of the two original geometries.
Matching techniques will be developed exemplarily for important object types in the given data sets.
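The »weighted mean« variant can be illustrated with a small sketch. It assumes (purely for illustration; the project does not prescribe this form) that the two geometries already have matched vertices and that each data set's accuracy is given as a standard deviation, so each set is weighted by the inverse of its variance:

```python
# Sketch of the »weighted mean« harmonization of two vertex-matched
# geometries, weighting each data set by 1/sigma^2 (assumed accuracy model).

def weighted_mean_geometry(geom_a, geom_b, sigma_a, sigma_b):
    """Combine two corresponding geometries into one common geometry.

    geom_a, geom_b   -- lists of (x, y) vertex tuples of equal length,
                        assumed to be in vertex-to-vertex correspondence
    sigma_a, sigma_b -- standard deviations (accuracies) of the data sets
    """
    w_a = 1.0 / sigma_a ** 2
    w_b = 1.0 / sigma_b ** 2
    return [((w_a * xa + w_b * xb) / (w_a + w_b),
             (w_a * ya + w_b * yb) / (w_a + w_b))
            for (xa, ya), (xb, yb) in zip(geom_a, geom_b)]

# The more accurate data set (smaller sigma) dominates the combined result:
a = [(0.0, 0.0), (10.0, 0.0)]          # e.g. ATKIS geometry, sigma 0.5 m
b = [(0.4, 0.4), (10.4, 0.4)]          # e.g. soil map geometry, sigma 1.5 m
merged = weighted_mean_geometry(a, b, sigma_a=0.5, sigma_b=1.5)
```

With these weights the merged geometry lies only one tenth of the offset away from the more accurate data set, which is exactly the adaptation behaviour described above.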

2.1. A brief description of the international state-of-the-art in vector data integration
The matching problem can be solved in different ways. One of the first approaches to matching vector data sets of different sources – also known as conflation [Lynch and Saalfeld 1985] – was carried out by the Bureau of the Census in Washington, DC [Saalfeld 1988]: the census data were integrated with data of the United States Geological Survey (USGS) with the objective of improving quality, eliminating errors and exchanging attributes. In geodesy and geoinformatics, geometric features like the form and position of the objects are often used [Gabay and Doytsher 1994]. If, however, unknown transformations between the data sets have occurred, or no unique matching candidates can be found, binary object characteristics, i.e. relations, also have to be applied in order to constrain the search process [Walter 1997]. Integration problems arise on the one hand in the domain of the integration of heterogeneous data; on the other hand they are also investigated when data of different scales have to be combined [van Wijngaarden et al. 1997, Badard 1999]. Devogele et al. [1998] analyse the theoretical discrepancies of different data sets and present potential solutions. It can be summarized that, up to now, integration approaches have primarily been investigated for artificial objects like buildings and roads – natural objects like rivers or borders of vegetation are missing so far. These objects represent a special challenge, as the fuzziness of their boundaries plays a decisive role.
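A purely geometric matching criterion of the kind used in the conflation literature can be sketched as follows. This is an illustrative toy (sampled-vertex Hausdorff distance, invented IDs and tolerance), not any of the cited algorithms; real matchers add the semantic and relational constraints discussed above:

```python
# Toy geometric matching: accept candidate pairs whose symmetric Hausdorff
# distance (computed on the sampled vertices) stays below a tolerance.
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two vertex sets."""
    def directed(p, q):
        return max(min(math.dist(a, b) for b in q) for a in p)
    return max(directed(points_a, points_b), directed(points_b, points_a))

def match_candidates(objects_a, objects_b, tolerance):
    """Return (id_a, id_b) pairs whose geometries lie within the tolerance."""
    return [(ida, idb)
            for ida, ga in objects_a.items()
            for idb, gb in objects_b.items()
            if hausdorff(ga, gb) <= tolerance]

# Hypothetical »water« objects from two data sets (coordinates in metres):
atkis_water = {"w1": [(0, 0), (5, 1), (10, 0)]}
soil_water = {"s1": [(0.2, 0.1), (5.1, 1.2), (10.1, -0.1)],
              "s2": [(50, 50), (60, 50)]}
pairs = match_candidates(atkis_water, soil_water, tolerance=1.0)
```

Only the nearby geometry survives the tolerance test; the distant object `s2` is rejected, mimicking how geometric proximity prunes the candidate search space.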

3. Integration of raster and vector data
3.1. Project goals
The goal of this project, carried out at the Institute of Photogrammetry and GeoInformation (IPI) together with the NLfB and BKG, is to automatically enhance the applicability of the digital Soil Science Map by integrating it with information from up-to-date aerial images for the following two geo-scientific problems:
1. Derivation of field boundaries: Field boundaries are important for various soil science problems. Furthermore, this information is also required in other areas, like the agricultural sector. The field boundaries – as far as they are visible in the images – will be extracted automatically. In this task particularly the geometric information of the field boundaries in the form of polygons is of importance. Additional attributes may also be collected.


2. Derivation of wind erosion obstacles: Wind erosion obstacles (hedges, rows of trees, groves etc.) are relevant for the determination of the potential damage to an area caused by wind. These obstacles will be identified in the images by automatic processes. Furthermore, the information about the height and possibly also the permeability will be extracted for every obstacle.
Generally, the combined use of raster and vector data plays an important part in the geosciences for the registration, validation, updating and visualisation of objects of the Earth's surface. An important aspect is the refinement of existing vector data sets by objects that have been extracted from aerial images. At the NLfB, primarily black-and-white aerial imagery from the State Survey Authorities and corresponding orthophotos are being used for such tasks at present. An important research challenge is the automatic extraction of the objects of interest, including the corresponding attributes, from these images using techniques from image analysis based on suitable semantic scene models. In this way the information implicitly contained in the images is made explicit and is thus available for object-related geo-scientific analyses. According to the state of the art in image analysis, the use of constraints, introduced as prior information taken from the combination of different data sources, is an essential element for the stabilisation of the process. In the context of the project the prior information comes from ALKIS, the ATKIS Basis-DLM and the digital Soil Science Map in the form of an initial scene description. This information is helpful for image analysis; e.g. the field boundaries are frequently parallel to parcel and land-use boundaries and/or to the road network. The same applies to wind erosion obstacles, which are often located parallel or at a right angle to topographic objects and to the borders of vegetation areas. Figure 3 shows an orthoimage (left) and the desired results (right): field boundaries are depicted as black lines, wind erosion obstacles as white lines.
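The parallelism constraint mentioned above can be sketched with a simple orientation test. This is a hypothetical illustration only (segment representation, angle tolerance and names are invented): an image-derived line hypothesis is kept if it is near-parallel to some prior road or parcel segment, with angles compared modulo 180 degrees:

```python
# Hypothetical sketch: keep an extracted line hypothesis only if it is
# roughly parallel to a prior vector segment (e.g. an ATKIS road axis).
import math

def segment_angle(p, q):
    """Orientation of segment p->q in degrees, folded into [0, 180)."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 180.0

def supported_by_prior(candidate, prior_segments, max_dev_deg=10.0):
    """True if the candidate is near-parallel to any prior segment."""
    ang = segment_angle(*candidate)
    for seg in prior_segments:
        diff = abs(ang - segment_angle(*seg))
        if min(diff, 180.0 - diff) <= max_dev_deg:   # angles are cyclic mod 180
            return True
    return False

roads = [((0, 0), (100, 5))]      # prior road axis from the vector data
edge = ((10, 20), (90, 22))       # extracted field-boundary hypothesis
across = ((10, 0), (10, 50))      # hypothesis running across the road
```

Here `edge` is accepted (about 1.4 vs. 2.9 degrees orientation) while `across` is rejected; a full system would of course combine such orientation evidence with proximity and radiometric cues rather than use it as a hard filter.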


Figure 3: Example of the project results: Extracted field boundaries (black lines) and wind erosion obstacles (white lines). Note that these results have been acquired manually, the goal of the project is to automate this acquisition process as far as possible.

Up to now, both the field boundaries and the wind erosion obstacles are acquired manually at the NLfB and are subsequently imported into the existing vector data sets. This task is carried out partly at an interactive workstation based on the aerial imagery, and partly in the field. The data thus acquired, as well as further base and thematic data (price per km², land use, ground topography as well as the main wind direction), are used as input for simulations which provide, among other things, an assessment of the wind erosion danger for each field (e.g. Thiermann et al. 2002). The time-consuming manual data acquisition for such tasks will be complemented by an automated process in the context of this project and thus become more effective. In this process data integration plays an important role in two ways: (1) during the construction of the semantic scene model, the prior vector information (ALKIS, ATKIS, digital Soil Science Map) has to be integrated with the objects to be extracted from the aerial imagery, where the description of the latter is restricted by the observability of the corresponding attributes and relations; (2) the process of image analysis provides the necessary pre-processing of the raster data as well as the connection between the objects extracted from the aerial image and the existing vector data.

3.2. A brief description of the state-of-the-art in integration of raster and vector data
The latest developments in image analysis research for topographic applications are brought together in [Baltsavias et al. 2001]. According to this reference, knowledge-based methods represent a very promising approach [Niemann et al. 1990; Liedtke et al. 2001]. As far as applications are concerned, a multitude of successful approaches exist for the automatic extraction of man-made single objects like buildings or roads [see Mayer 1998 for an overview]. For the extraction of vegetation objects from high-resolution images, approaches have only been presented recently [e.g. Borgefors et al. 1999, Heipke et al. 2000, Pakzad 2001, Straub et al. 2001]; prior work mostly uses multi-spectral classification. A significant limitation of nearly all known techniques is the focus on one object class only, see also the corresponding statements in [Stilla et al. 1998]. Work on the use of vector data of a Geographic Information System (GIS) as prior information about a scene is documented in several references. Only a few approaches [Bordes et al. 1996, Quint 1997, de Gunst and Vosselman 1997], which again deal with man-made objects only, use this prior information for the generation of incremental hypotheses. A paper that is interesting with regard to the extraction of field boundaries intended for this project has been presented by Löcherbach [1998], who reports on the refinement of topologically correct field boundaries.

In summary, one may say that there is a demand for research and development in automatic image analysis. In particular in the area of vegetation, literature and successes are still insufficient. It is a shortcoming of many approaches that existing prior knowledge, often available in the form of vector data, is not adequately integrated into image analysis at this stage [see also Baltsavias 2002]. Work in the current project aims at overcoming these limitations by properly representing and integrating prior knowledge into image analysis.

4. Federated spatial database
The third subproject aims at designing and implementing a »federated« spatial database that discloses the given heterogeneous data together with their relationships after they have been united. Therefore, general methods for database federation have to be specialized and adapted to the integration of spatial databases. Additionally, new methods have to be developed for object-wise database integration, in particular for identifying related spatial objects from separate data sources.

Figure 4: Federated Database System.

4.1. State of the art
In order to couple heterogeneous databases, at first so-called multi-database architectures were discussed for loose coupling, but then so-called federated databases were proposed and investigated to support a closer coupling. A systematic and comprehensive treatment of the subject can be found, e.g., in [Conrad 1997]. Federated databases allow existing autonomous databases that are heterogeneous with respect to modeling and contents to be integrated via a common database interface. This so-called federation service refers to a global database schema that has been designed by integrating the participating local schemas. Local applications remain unchanged, but additional global applications can be developed with an integrated view on the data.


For schema integration, a broad spectrum of methods has been investigated in the literature. Typically corresponding object types have to be detected and merged on the schema level, after conflicts (on names, structures, extensions, etc.) have been detected and resolved. On the instance level, however, such methods either expect implicit unique assignments between »identical« objects (e.g., based on a world-wide numbering schema like ISBN for books) or treat objects from different sources to be different. More sophisticated correspondence relationships are usually not considered when identifying objects. There are first proposals for query languages (usually extensions of SQL) that respect specific needs of federated databases, for instance, the »multi-database language« SchemaSQL [Lakshaman et al. 1996] or the »federation query language« FraQL [Sattler et al. 2000]. They introduce language features for transforming different structures or for reconciling attribute values of already identified objects (e.g., for producing a uniform notation of bibliographic book data). Whereas there is a lot of work on spatial databases (for surveys compare, e.g., [Günther 1998], [Rigaux et al. 2002]), requirements of spatial data on database federation have hardly been investigated; exceptions are [Devogele et al. 1998] and [Laurini 1998]. At least, there is a standardized representation of vector data types according to the OpenGIS consortium [OpenGIS 1999]. The authors have already gathered some experience in processing data of the kinds required in this project by means of an object-relational database. Such a database offers extensions for spatial (vectorized) data and allows for own extension development, as it might be needed for, e.g., raster data or raster/vector combinations. 
In particular, modelling, importing and processing data from cartography and from soil science have already been realized within the Oracle database management system (plus Spatial cartridge) [Kleiner et al. 2000; Pfau et al. 2000]; it was shown how to specify and implement arbitrary queries and computation methods. Thus, a flexible database environment can be utilized as a starting point for modelling and prototyping the federation service, before dedicated GIS interfaces are connected in later project phases.
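As an illustrative sketch of the federation idea, the following uses SQLite rather than the Oracle environment named above; all table and column names are hypothetical. Two autonomous local databases keep their own schemas, while a global query merges them into one integrated view:

```python
import sqlite3

# Two autonomous "local" databases with heterogeneous schemas
# (hypothetical names; the project's real sources are ATKIS-style
# topographic data and soil-science data).
con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS topo")   # topographic source
con.execute("ATTACH DATABASE ':memory:' AS soil")   # soil-science source

# The local schemas remain unchanged ...
con.execute("CREATE TABLE topo.water (tid INTEGER, name TEXT)")
con.execute("CREATE TABLE soil.bodenflaeche (sid INTEGER, bezeichnung TEXT)")
con.execute("INSERT INTO topo.water VALUES (1, 'Leine')")
con.execute("INSERT INTO soil.bodenflaeche VALUES (7, 'Aue der Leine')")

# ... while the federation service answers queries against an
# integrated global schema that merges both sources.
rows = con.execute("""
    SELECT tid AS id, name, 'topo' AS source FROM topo.water
    UNION ALL
    SELECT sid, bezeichnung, 'soil' FROM soil.bodenflaeche
    ORDER BY source
""").fetchall()
print(rows)  # [(7, 'Aue der Leine', 'soil'), (1, 'Leine', 'topo')]
```

In a full federation service this merged query would be hidden behind an integrated schema, so that global applications never address the local sources directly.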

4.2. Project goals

By utilizing assignments and unification rules between corresponding objects as developed in the other two subprojects, this subproject will design and prototypically realize an integrated access to the given heterogeneous data sets according to the paradigm of federated databases.

First of all, the different kinds of correspondence relations between objects and/or between object parts have to be modelled: direct structural assignments like 1:1, 1:n or n:m relations have to be considered as well as various instances of thematic and geometric similarity. There may even be alternative potential assignments that are valid with different degrees of probability. For the applications at hand, we can on the one hand expect true identification relationships between composed objects of the same type, for instance, between subsections or subareas of water objects which »belong to the same object in the real world« and which can be found by methods matching complete geometries. On the other hand, it will be necessary to establish more general correspondence relationships, for instance, which topographic objects and which soil areas »share boundaries« (in order to propagate exact topographic boundaries to soil maps); here, methods for partial and multiple matching will be needed. At least for identified objects stemming from several sources, conflict resolution rules have to be specified for the joint attributes; geometries, for instance, might be taken from the topographic model as the more precise source.

Then queries to corresponding databases (including import and export tasks) have to be supported which can utilize such object assignments by asking for thematic and geometric properties of the respective corresponding objects. As far as objects can be identified, queries should be supported that need not refer to such object assignments explicitly. This will require appropriate extensions of the database language SQL by language features which allow a unified access to objects of different data sources along correspondence relationships and conflict resolutions. Of course, not only thematic operators (like alphanumeric comparisons), but also geometric operators on spatial datatypes must be adapted. Later, updates that are propagated along correspondence relationships according to prespecified rules have to be supported as well.

Due to the federation paradigm, the original databases keep their autonomy – as can be expected in practice, since separate agencies are responsible for topographic and soil/geologic maps. The databases, however, will be disclosed by a federation service that acts like a database with an integrated global schema – it does not copy the original databases, but imports queried extracts into a spatial working database of its own. This federation requires the design of an integrated database schema for the involved databases, including object assignments, matching methods, conflict resolution rules, and update rules. Then the working database can be realized such that it stores not only extracts according to the unified schema, but also correspondence relations between objects together with their kind and degree of validity, and the resolved attributes of identified objects. Additionally, it serves as a method base for import, matching, conflict resolution, and update procedures.
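A minimal sketch of such a correspondence store, with hypothetical names and values, might look as follows. Each row assigns an object of one source to an object of the other, together with the kind of relation and a degree of validity; a simple resolution rule then keeps only the most probable of several alternative assignments:

```python
import sqlite3

# Hypothetical correspondence table of the working database.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE correspondence (
        topo_id  INTEGER,
        soil_id  INTEGER,
        kind     TEXT,   -- e.g. 'identical' or 'shares_boundary'
        degree   REAL    -- probability/validity of the assignment
    )
""")
con.executemany("INSERT INTO correspondence VALUES (?, ?, ?, ?)", [
    (1, 7, "identical",       0.95),
    (1, 8, "identical",       0.40),  # alternative, less probable assignment
    (2, 8, "shares_boundary", 0.85),
])

# Among alternative 'identical' assignments of the same topographic
# object, keep the one with the highest degree of validity.
best = con.execute("""
    SELECT topo_id, soil_id, MAX(degree) AS degree
    FROM correspondence
    WHERE kind = 'identical'
    GROUP BY topo_id
""").fetchall()
print(best)  # [(1, 7, 0.95)]
```

Conflict resolution rules for joint attributes (e.g., preferring the topographic geometry) would be attached to such resolved assignments in the same way.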


Expensive spatial operations, like boundary matching on entire maps, will need dedicated optimizations by means of precomputed data (e.g., topological relations) and specialized index structures. Fortunately, object-relational databases like Oracle 9i are extensible with respect to such physical optimizations as well.
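The optimization idea can be illustrated with a simple filter step (a hand-rolled sketch with made-up coordinates, mimicking what a spatial index does inside the database): before an expensive exact boundary-matching step, candidate pairs are pruned by precomputed bounding boxes.

```python
# Cheap bounding-box filter preceding an expensive geometric match.
def bbox(points):
    """Precomputed axis-aligned bounding box of a point sequence."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def bboxes_overlap(a, b):
    """True if the two boxes (xmin, ymin, xmax, ymax) intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

topo_obj = [(0, 0), (4, 0), (4, 3)]    # hypothetical topographic boundary
soil_obj = [(3, 2), (6, 2), (6, 5)]    # hypothetical soil-area boundary
far_obj  = [(10, 10), (12, 11)]        # clearly elsewhere

# Only geometries whose boxes overlap are passed on to exact matching.
candidates = [g for g in (soil_obj, far_obj)
              if bboxes_overlap(bbox(topo_obj), bbox(g))]
print(len(candidates))  # 1 -- only soil_obj survives the cheap filter
```

Inside the database, the same effect is achieved by a spatial index over precomputed approximations, so that the costly comparison runs only on the few surviving candidates.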

4.3. Interfaces

To exchange data between the data sources and the federation service or, more generally, between the project partners, system-independent tools and object-structured views for accessing and delivering information are desirable. Here, the XML standard is suitable as an exchange language [Abiteboul et al. 2000] that can be specialized to the considered data by joint conventions, to be specified in the meta description language XML Schema. The latter can also serve as an instrument for communicating and unifying the semantic object models of the subprojects. The OpenGIS GML can be used as the sublanguage to exchange geometric data. To specify the (global) database tasks preceding XML-based data exchange, Java-based web interfaces to the underlying object-relational database will be developed. These have to control the import of XML data into the federation service and the export of XML data from that service, as well as ad-hoc queries and updates. How to automatically generate object-structured XML data from object-relational spatial databases has been studied in [Kleiner and Lipeck 2001a/b].
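As a sketch of such an export step, the following generates a GML-flavoured XML fragment for one database object. The element names outside the `gml` namespace are simplified assumptions for illustration, not an exact GML application schema:

```python
import xml.etree.ElementTree as ET

# GML namespace for the geometric sublanguage of the exchange format.
GML_NS = "http://www.opengis.net/gml"
GML = "{%s}" % GML_NS
ET.register_namespace("gml", GML_NS)

def to_xml(obj):
    """Serialize one (hypothetical) database object as GML-flavoured XML."""
    feature = ET.Element("Feature", id=str(obj["id"]))
    ET.SubElement(feature, "name").text = obj["name"]
    geom = ET.SubElement(feature, GML + "LineString")
    ET.SubElement(geom, GML + "coordinates").text = obj["coords"]
    return ET.tostring(feature, encoding="unicode")

# Hypothetical water object exported from the working database.
xml_doc = to_xml({"id": 1, "name": "Leine", "coords": "0,0 4,0 4,3"})
print(xml_doc)
```

The inverse direction, importing such documents into the federation service, would parse the same structure back into relational extracts.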


5 Literature

Abiteboul, S., Buneman, P., Suciu, D., 2000: Data on the Web: From Relations to Semistructured Data and XML, Morgan Kaufmann Publishers.

Badard, T., 1999: On the automatic retrieval of updates in geographic databases based on geographic data matching tools. In: Proceedings of the 19th International Cartographic Conference, Ottawa, ICA/ACI (Eds.), pp. 47-56.

Baltsavias, E., 2002: Object Extraction and Revision by Image Analysis Using Existing Geospatial Data and Knowledge: State-of-the-Art and Steps Towards Operational Systems, International Archives of Photogrammetry and Remote Sensing, Vol. 34, Part 2, Comm. II, pp. 13-22.

Baltsavias, E., Grün, A., van Gool, L. (Eds.), 2001: Automatic Extraction of Man-Made Objects from Aerial and Space Images (III), A.A. Balkema Publishers.

Bordes, G., Guérin, P., Giraudon, G., Maitre, H., 1996: Contribution of external data to aerial image analysis, International Archives of Photogrammetry and Remote Sensing, Vol. 31, Part B2/II, pp. 134-138.

Borgefors, G., Brandtberg, T., Walter, F., 1999: Forest Parameter Extraction from Airborne Sensors, International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 3-2W5, Automatic Extraction of GIS Objects from Digital Imagery, Munich, September 8-10, 1999, pp. 151-158.

Conrad, S., 1997: Föderierte Datenbanksysteme, Springer-Verlag.

Devogele, T., Parent, C., Spaccapietra, S., 1998: On spatial database integration, International Journal of Geographical Information Science, 12:4, pp. 335-352.

Gabay, Y., Doytsher, Y., 1995: Automatic feature correction in merging line maps. In: 1995 ACSM/ASPRS Annual Convention & Exposition Technical Papers, Charlotte, North Carolina, Vol. 2, pp. 404-411.

Günther, O., 1998: Environmental Information Systems, Springer-Verlag.

de Gunst, M., Vosselman, G., 1997: A Semantic Road Model for Aerial Image Interpretation, SMATI '97, Workshop on Semantic Modelling for the Acquisition of Topographic Information from Images and Maps, Ed. W. Förstner, L. Plümer, Birkhäuser Verlag, pp. 107-122.

Heipke, C., Pakzad, K., Straub, B.-M., 2000: Image Analysis for GIS Data Acquisition, Photogrammetric Record, 16(96), pp. 963-985.

Kleiner, C., Lipeck, U.W., Falke, S., 2000: Objekt-Relationale Datenbanken zur Verwaltung von ATKIS-Daten. In: Proc. Workshop »ATKIS - Stand und Fortführung« (Univ. Rostock, Sept. 2000).

Kleiner, C., Lipeck, U.W., 2000: Efficient Index Structures for Spatio-Temporal Objects. In: A. Tjoa, R. Wagner, A. Al-Zobaidie (Eds.), Eleventh International Workshop on Database and Expert Systems Applications (DEXA 2000), IEEE Computer Society Press, Los Alamitos, pp. 881-888.

Kleiner, C., Lipeck, U.W., 2001a: Automatic Generation of XML DTDs from Conceptual Database Schemas. In: Bauknecht, K. et al. (Eds.): Informatik 2001 - GI/OCG-Jahrestagung, Sept. 2001, Universität, Band I, pp. 396-405.

Kleiner, C., Lipeck, U.W., 2001b: Web-Enabling Geographic Data with Object-Relational Databases. In: A. Heuer et al. (Eds.), Datenbanksysteme in Büro, Technik und Wissenschaft - 9. GI-Fachtagung, BTW 2001, Informatik aktuell, Springer-Verlag, pp. 127-143.



Lakshmanan, L.V.S., Sadri, F., Subramanian, I.N., 1996: SchemaSQL - A Language for Interoperability in Relational Multi-Database Systems. In: Proceedings of the 22nd International Conference on Very Large Databases, VLDB'96, Morgan Kaufmann Publishers, pp. 239-250.

Laurini, R., 1998: Spatial multi-database topological continuity and indexing: a step towards seamless GIS data interoperability, International Journal of Geographical Information Science, 12:4, pp. 373-402.

Liedtke, C.-E., Bückner, J., Pahl, M., Stahlhut, O., 2001: Knowledge based system for the interpretation of complex scenes, Automatic Extraction of Man-Made Objects from Aerial and Space Images (III), A.A. Balkema Publishers, pp. 3-12.

Löcherbach, T., 1998: Fusing Raster- and Vector-Data with Applications to Land-Use Mapping, Inaugural-Dissertation der Hohen Landwirtschaftlichen Fakultät der Universität Bonn.

Lynch, M., Saalfeld, A., 1985: Conflation: Automated Map Compilation - A Video Game Approach. In: American Society of Photogrammetry, Auto-Carto 7, pp. 343-352.

Mayer, H., 1998: Automatische Objektextraktion aus digitalen Luftbildern, Habilitationsschrift, DGK Reihe C, Nr. 494.

Niemann, H., Sagerer, G., Schröder, S., Kummert, F., 1990: ERNEST: A Semantic Network System for Pattern Understanding, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(9), pp. 883-905.

OpenGIS Consortium, 1999: OpenGIS Simple Features Specification for SQL, Techn. Report 99-049, available from www.opengis.org.


Pakzad, K., 2001: Wissensbasierte Interpretation von Vegetationsflächen aus multitemporalen Fernerkundungsdaten, Deutsche Geodätische Kommission, München, Vol. C 543.

Pfau, J.H., Kleiner, C., Lipeck, U.W., 2000: Implementierung von Auswertungsmethoden der physischen Geographie mit Hilfe objekt-relationaler Datenbanksysteme. Informatik-Bericht DB-01/2000, Institut für Informatik, Universität Hannover.

Quint, F., 1997: Kartengestützte Interpretation monokularer Luftbilder, Deutsche Geodätische Kommission, München, Vol. C 477.

Rigaux, P., Scholl, M., Voisard, A., 2002: Spatial Databases with Application to GIS, Morgan Kaufmann Publishers.

Saalfeld, A., 1988: Automated Map Compilation. International Journal of Geographical Information Systems, 2(3), pp. 217-228.

Sattler, K.-U., Conrad, S., Saake, G., 2000: Adding Conflict Resolution Features to a Query Language for Database Federations. In: Proceedings of the 3rd International Workshop on Engineering Federated Information Systems, EFIS'00, Akadem. Verlagsgesellschaft, pp. 41-52.

Stilla, U., Geibel, R., Quint, F., Sties, M., 1998: Analyse von Luft- und Satellitenbildern zur automatischen Ermittlung der Bodenversiegelung städtischer Siedlungsbereiche (III), Abschlußbericht zum DFG-Vorhaben Ka 414/9 und Ba 686/7 für den Zeitraum 1.1.1995 - 31.3.1998.

Straub, B.-M., Heipke, C., 2001: Automatic Extraction of Trees for 3D-City Models from Images and Height Data, Automatic Extraction of Man-Made Objects from Aerial and Space Images (III), A.A. Balkema Publishers, pp. 267-277.


Thiermann, A., Sbresny, J., Schäfer, W., 2002: GIS in WEELS, GeoInformatics, Vol. 5, Sept. 2002, pp. 30-33.

van Wijngaarden, F., van Putten, J., van Oosterom, P., Uitermark, H., 1997: Map integration - update propagation in a multi-source environment. In: Proceedings of the 5th International Workshop on Advances in Geographic Information Systems, Las Vegas, USA.

Walter, V., 1997: Zuordnung von raumbezogenen Daten - am Beispiel ATKIS und GDF. Dissertation, Deutsche Geodätische Kommission (DGK), Reihe C, Heft Nr. 480.



List of Participants, Kick-Off Meeting »Information Systems in Earth Management«, University of Hannover, 19 February 2003

1. Lars Bernard, Westfälische Wilhelms-Universität Münster, bernard@ifgi.uni-muenster.de
2. Ralf Bill, Universität Rostock, ralf.bill@agrarfak.uni-rostock.de
3. Martin Breunig, Hochschule Vechta, mbreunig@iuw.uni-vechta.de
4. Matthias Butenuth, Universität Hannover, butenuth@ipi.uni-hannover.de
5. Ralf Dannowski, Zentrum für Agrarlandschafts- und Landnutzungsforschung (ZALF) e.V., rdannowski@zalf.de
6. Regina Falk, Forschungszentrum Jülich GmbH - Projektträger PTJ-MGS, r.falk@fz-juelich.de
7. Guido von Gösseln, Universität Hannover, guido.vongoesseln@ikg.uni-hannover.de
8. Jochen Häußler, European Media Laboratory (EML), jochen.haeussler@eml.villa-bosch.de
9. Sören Haubrock, Delphi IMM GmbH, soeren.haubrock@delphi-imm.de
10. Christian Heipke, Universität Hannover, heipke@ipi.uni-hannover.de
11. Sebastian Hübner, Universität Bremen, huebner@tzi.de
12. Ulf Hünken, Forschungszentrum Jülich GmbH - Projektträger PTJ-MGS, u.huenken@fz-juelich.de
13. Wolfgang Kappler, ahu AG, w.kappler@ahu.de
14. Kurt Christian Kersebaum, Zentrum für Agrarlandschafts- und Landnutzungsforschung (ZALF) e.V., ckersebaum@zalf.de
15. Christian Kiehle, RWTH Aachen, kiehle@lih.rwth-aachen.de
16. Sascha Klopp, Universität Hannover, skl@dbs.uni-hannover.de
17. Ralf Kunkel, Forschungszentrum Jülich GmbH, r.kunkel@fz-juelich.de
18. Björn Leppig, RWTH Aachen, bjoern@lih.rwth-aachen.de
19. Rolf Lessing, Delphi IMM GmbH, rolf.lessing@delphi-imm.de
20. Udo Lipeck, Universität Hannover, ul@dbs.uni-hannover.de
21. Michael Lutz, Westfälische Wilhelms-Universität Münster, lutzm@ifgi.uni-muenster.de
22. Daniela Mantel, Universität Hannover, dma@dbs.uni-hannover.de
23. Ingo Michels, WASY GmbH, i.michels@wasy.de
24. Oliver Plan, Universität der Bundeswehr München, oliver.plan@unibw-muenchen.de
25. Alexander Rudloff, Koordinierungsbüro GEOTECHNOLOGIEN, rudloff@gfz-potsdam.de
26. Jan Sbresny, Niedersächsisches Landesamt für Bodenforschung, jan.sbresny@nlfb.de
27. Monika Sester, Universität Hannover, monika.sester@ikg.uni-hannover.de
28. Michael Schlüter, Alfred-Wegener-Institut für Polar- und Meeresforschung (AWI), mschlueter@awi-bremerhaven.de
29. Winfried Schröder, Hochschule Vechta, wschroeder@iuw.uni-vechta.de
30. Ludwig Stroink, Koordinierungsbüro GEOTECHNOLOGIEN, stroink@gfz-potsdam.de
31. Lutz Vetter, Fachhochschule Neubrandenburg, vetter@fh-nb.de
32. Ubbo Visser, Universität Bremen, visser@tzi.de
33. Thomas Vögele, Universität Bremen, vogele@tzi.de
34. Frank Wendland, Forschungszentrum Jülich GmbH, f.wendland@fz-juelich.de
35. Joachim Wiesel, Universität Karlsruhe (TH), wiesel@ipf.uni-karlsruhe.de



Author’s Index

A
Arndt, Olaf . . . 23
Azzam, Rafig . . . 31

B
Bauer, Christian . . . 31
Bernard, Lars . . . 1
Bogena, Heye . . . 31
Breunig, Martin . . . 37
Butenuth, Matthias . . . 51

D
Dannowski, Ralf . . . 23

G
Gösseln, Guido von . . . 51
Gründler, Rainer . . . 23

H
Haubrock, Sören . . . 1
Hecker, J. Martin . . . 23
Heipke, Christian . . . 51
Hübner, Sebastian . . . 1

K
Kappler, Wolfgang . . . 31
Kersebaum, Kurt-Christian . . . 23
Kiehle, Christian . . . 31
Klopp, Sascha . . . 51
Kuhn, Werner . . . 1
Kunkel, Ralf . . . 31

L
Leppig, Björn . . . 31
Lessing, Rolf . . . 1
Lipeck, Udo . . . 51
Lutz, Michael . . . 1

M
Malaka, Rainer . . . 37
Mantel, Daniela . . . 51
Meiners, Hans-Georg . . . 31
Michels, Ingo . . . 23
Müller, Frank . . . 31

R
Reinhardt, Wolfgang . . . 37

S
Sester, Monika . . . 51
Steidl, Jörg . . . 23
Schlüter, Michael . . . 17
Schröder, Wilfried . . . 17

V
Visser, Ubbo . . . 1
Vetter, Lutz . . . 17

W
Waldow, Harald von . . . 23
Wendland, Frank . . . 31
Wieland, Ralf . . . 23
Wiesel, Joachim . . . 37
Wimmer, Guido . . . 31



GEOTECHNOLOGIEN Science Reports – Already published

No. 1

Gas Hydrates in the Geosystem – Status Seminar GEOMAR Research Centre Kiel, 6-7 May 2002, 151 pages





ISSN: 1619-7399

No. 2

