Agents, Web Services and the Semantic Web


Pedro Lopes
Universidade de Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
pedrolopes@ua.pt

Abstract. Evolution in computer science has led to the exponential growth of the Internet. This growth made the Internet the main platform for anything that involves communication between different peers. Artificial Intelligence has evolved along with the Internet, generating knowledge that is now essential to other computational areas. Intelligent agents are the key to the next step in the Internet: agents are social, they can communicate and they can learn. If we adapt agents to work with services, we can create an intelligent ecosystem that is able to solve complex problems, relying on autonomous web service composition and interactions between heterogeneous sources.

Keywords: agents, multi-agent systems, web2.0, web services, semantic, semantic web.

1 Introduction

Artificial Intelligence has the main premise of creating machines that are capable of genuine human interactions. However, for this field, the main goal is still science fiction. On a different stage, there is the Internet and its tremendous evolution and growth. The Internet is, by now, one of the main media for information sharing and content publication. With these advances, one must also consider the evolution of artificial intelligence and its relation with the World Wide Web. Artificial intelligence methodologies and ideals are no longer confined to closed systems; they have evolved to distributed environments involving distinct and heterogeneous software. The Internet is one of these systems, where traditional artificial intelligence knowledge is being applied in existing applications with a focus on search engines [1], information retrieval [2] or web service composition [3].

Intelligent agents, or simply agents, may be considered pieces of software that act autonomously, whether on their own or on others' behalf [4]. Multi-agent systems connect several distinct agents in a single environment. Once again, we must look at the evolution of artificial intelligence to realize that these multi-agent systems are no longer constrained teams of software and hardware working toward a generic goal. The Internet itself can now be considered a multi-agent system, connecting several points throughout the world. A real-world example could be the mesh of services required by a travel agency to book flights, hotels, shows and cars. These services may be heterogeneous and


located in distinct networks; however, they can be considered intelligent agents, as they need to work together to find a solution that results in a schedule that is adequate for all the parties involved. This scenario would require action not only from intelligent agents but also from semantically described web services. In this paper we detail the concepts and technologies required by scenarios similar to the one presented. We need an agent-based semantic web service scenario where web services are semantically described, allowing them to act as agents and creating a service composition that reaches the initial goal without any human interference. Designing an agent-based semantic web services architecture requires deep knowledge about semantic web services as well as about the features of web services working as agents in a wide multi-agent system.

The rest of the paper is organized as follows. The next section contains a simple background on agents, web services and the semantic web. Next comes a section with some insights about semantic web services and agents as web services. Section 4 presents the architecture of an intelligent agent-based semantic web service scenario. Finally, the last section provides some conclusions on the presented research.

2 Background

Combining the three main topics of this article, we may obtain several architectural solutions that are widely used throughout the web. However, the most referenced outcome is the agent-based semantic web service scenario. In order to fully understand these scenarios, we next present a detailed description of agents, web services and the semantic web.

2.1 Agents

An agent, in artificial intelligence, is nothing more than a simple autonomous entity that observes the surrounding environment, using sensors, and determines an action or group of actions to be executed in that environment in order to reach a predetermined goal [5]. Fig. 1 visually describes agent operability.
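This sense-decide-act loop can be sketched in a few lines. The environment, percepts and condition-action rules below are hypothetical illustrations, not part of any specific agent framework:

```python
# Minimal sketch of a simple reflex agent (cf. Fig. 1): percepts are
# mapped to actions through condition-action rules. The environment,
# percepts and rules here are hypothetical illustrations.

class SimpleReflexAgent:
    def __init__(self, rules):
        # rules: mapping from a perceived condition to an action
        self.rules = rules

    def perceive(self, environment):
        # Sensors: reduce the raw environment to a condition
        return "dirty" if environment.get("dirt") else "clean"

    def act(self, environment):
        condition = self.perceive(environment)
        # Actuators: look up the matching condition-action rule
        return self.rules.get(condition, "do-nothing")

agent = SimpleReflexAgent({"dirty": "clean-up", "clean": "move-on"})
print(agent.act({"dirt": True}))   # -> clean-up
print(agent.act({"dirt": False}))  # -> move-on
```

More sophisticated agents replace the fixed rule table with state, goals or learned behaviour, but the perceive/act cycle stays the same.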


Fig. 1. Simple reflex agent architecture.1

Despite being simple, this definition is very generic. Intelligent agents can be part of numerous systems, with their name changing according to the system. There are, however, several features that are common to any kind of intelligent agent. The essential feature of any agent is that it is autonomous: the agent will operate without assistance inside its predefined ecosystem. This also leverages the existence of multi-agent systems. These systems are composed of several interacting intelligent agents that work together to reach a given goal. Multi-agent systems mostly rely on distributed environments to operate; for instance, a team of robots or a group of web services can be a multi-agent system. The usage of distributed intelligent agents [6] is currently growing and their adoption in several distinct research areas is increasing.

2.2 Web Services

Web services are, nowadays, the most widely used technology for the development of distributed applications that rely on the Internet. The World Wide Web Consortium (W3C) defines a web service as "a software system designed to support interoperable machine-to-machine interaction over a network" [7]. This wide definition allows us to consider a web service as any kind of service available on the Internet. Web services can be divided into two main groups, according to their operability. RESTful web services are, so far, the less used group. These web services may be seen as simple web applications, accessible through a unique URI, that return data in some predefined format such as HTML, XML or JSON. RESTful web services usage is increasing, mostly because they do not require adherence to a standard protocol. On the other hand, there is the larger group of web services that depend on a predefined set of protocols. These services rely on SOAP [8] for message exchange, WSDL [9] to describe the service signature, and UDDI [10] for service discovery in central repositories. Fig. 2 shows the traditional client-server web service interaction.

1 Image from Wikimedia: http://en.wikipedia.org/wiki/File:IntelligentAgent-SimpleReflex.png

Fig. 2. Web Service Interaction.2

Web services are still evolving considerably. Their acceptance is so wide that concepts like "web-of-services" and "software-as-a-service" are hot topics in several research areas. This evolution raised the need to design applications and methodologies to coordinate the interaction between several distinct web services. Service composition [11] is the stage for the developments in service orchestration and service choreography [12]. Web service composition defines the collection of protocols and messages that have to be used in order to coordinate a heterogeneous set of web services. Connecting distinct web services in a single ecosystem enables a seamless integration environment on the user side and the possibility of developing, testing and deploying applications faster on the developer side. Service orchestration and service choreography are distinct scenarios leveraged by service composition. On the one hand, we have service orchestration scenarios, where a main controller service organizes existing services in order to achieve a certain goal. On the other hand, service choreography scenarios involve autonomous discovery of the best combination of services available to achieve the desired goal. Autonomous service composition scenarios are not possible without contextual information. This means that the services, their inputs, their outputs and their operability have to be correctly described in order to allow dynamic integration. Hence, one way or another, web services have to be semantically described following a predefined ontology, in order to enable autonomous, dynamic and real-time service composition.
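The orchestration style described above can be sketched as a central controller that invokes individual services in a fixed order and combines their results. The travel-related services below are hypothetical stubs standing in for real SOAP or RESTful calls:

```python
# Sketch of service orchestration: a central controller invokes
# individual services in a fixed order and merges their results.
# The services below are hypothetical stubs standing in for real
# (e.g. SOAP or RESTful) web service calls.

def flight_service(destination):
    return {"flight": f"FL-100 to {destination}"}

def hotel_service(destination):
    return {"hotel": f"Hotel Central, {destination}"}

def orchestrate_trip(destination):
    # The orchestrator is the single point of control: it knows
    # which services exist and in which order to call them.
    booking = {}
    booking.update(flight_service(destination))
    booking.update(hotel_service(destination))
    return booking

print(orchestrate_trip("Aveiro"))
# -> {'flight': 'FL-100 to Aveiro', 'hotel': 'Hotel Central, Aveiro'}
```

In a choreography scenario, by contrast, no single controller holds this call sequence: the participating services discover and coordinate among themselves, which is exactly where the semantic descriptions discussed above become indispensable.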

2 Image from Wikimedia: http://en.wikipedia.org/wiki/File:Webservices.png


2.3 Semantic Web

The Internet, as initially proposed by Tim Berners-Lee, was designed for humans. The web consisted of a network of interconnected hypertext files, containing information that was displayed in a browser. With the evolution of the Internet, so have web pages evolved. In the beginning, content was mostly text. With the advances in web development, websites started to display multimedia content: images, audio and video. Nowadays, the Internet is mostly composed of Rich Internet Applications that rely on technologies such as Flash, Silverlight or AJAX to deliver high quality experiences to end users: the Web2.0. Semantic web developments, promoted by Berners-Lee [13], can be viewed as the evolution of the web. The main idea is to create a smarter Internet, describing content and enhancing autonomous interactions between heterogeneous software.

This distinction between Web1.0, Web2.0 and the Semantic Web can be seen in Fig. 3. Web1.0 is the basic Internet: messages are exchanged between human consumers and the machines that produce some kind of service, or humans located elsewhere in the world. Web2.0 widened the mesh of connections. Consumers are now producers and vice-versa, which increased the number of interactions on the Internet; the main content contributors are everyday Internet users, instead of the traditional technical staff of a company. In the Semantic Web era, machines start interacting with content. Computers are able to find specific elements inside the Internet content and, above all, can interact with the content providers, creating heterogeneous applications.

Describing content correctly is not a trivial task. Content has to be described according to a predefined ontology. OWL [14] is the currently supported standard for ontology creation. The content description can be made with several distinct formats. Microformats [15] define a class of formats that enables semantics by adding several small attributes to HTML classes.
Hence, these attributes are found when the HTML is parsed and the contextual information is read accordingly. However, the standard format is RDF [16]. RDF relies on a simple XML structure, added to the page code, where any assertion can be made. The basis for RDF is the concept of the "triple": any statement can be decomposed into a subject, a predicate and an object. With this simple rule, and the associations that are obtained from it, one can describe all the Internet content. Despite the facilities offered by OWL, RDF or microformats, the initial step has to be taken by content owners and web developers.

The main goal of having a complete and comprehensive description environment is to enable autonomous interactions inside the web itself. With semantics, anything can be a resource accessible by a URI. This means that any web page, web service, data source or remote computer can be accessed autonomously, if semantically and correctly described. Semantic developments target machine-machine interactions, and the benefits of these dynamic interactions, without human interference, are immense. These interactions promote data, information and knowledge exchanges. With them, it is easier for machines to gather contextual information from any source and process it. This new level of interoperability, made possible by content description and semantics, promotes new sets of applications involving areas that range from service orchestration [17] to e-workflows [18], from data warehouses [19] to application mashups [20].
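The subject-predicate-object rule can be illustrated with plain tuples. The statements and the small query helper below are hypothetical illustrations, not a real RDF toolkit:

```python
# Sketch of the RDF "triple" idea: every statement is a
# (subject, predicate, object) tuple. The statements and the
# query helper below are illustrative, not a real RDF library.

triples = [
    ("http://example.org/pedro", "worksAt", "http://example.org/ua"),
    ("http://example.org/pedro", "name", "Pedro Lopes"),
    ("http://example.org/ua", "locatedIn", "Aveiro"),
]

def query(subject=None, predicate=None, obj=None):
    # Return every triple matching the non-None fields: the same
    # pattern-matching idea behind SPARQL basic graph patterns.
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(predicate="name"))
# -> [('http://example.org/pedro', 'name', 'Pedro Lopes')]
```

Because every subject and many objects are URIs, statements from different sources about the same resource link together automatically, which is what makes machine-machine interaction over described content feasible.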


Fig. 3. Evolution of the Internet.3

3 Related Concepts

To achieve a broad scenario of agent-based web services we must dive further into the features and concepts revolving around semantic web services and the existence of agents acting as web services.

3.1 Semantic Web Services

Developers are, nowadays, experiencing an increase in the automation of web services interoperability. Traditionally, this interaction was done "by hand", hardcoded by developers inside the applications, resulting in static integration systems.

3 Image from Mike Coffey: http://blogs.nesta.org.uk/innovation/2007/07/the-future-is-s.html


However, the goal of dynamic service integration can only be reached if we rely on semantics. Web services and semantics are a perfect match due to one key factor: web services were developed for easier interaction among distinct software or hardware platforms, and the semantic web promotes content description in order to increase interoperability among heterogeneous systems. Currently, web service providers are focused merely on creating the services and making their signature available to consumers. WSDL offers developers a simple, XML-based structure for describing their services. However, this simple document is not enough to include content descriptions and context information in the service description. Developers should start by developing or searching for an ontology that fits their services and service structure. The next step is to define the ontology using a standard format like OWL. Then, developers must describe the service using RDF, and only then should they publish the services.

McIlraith et al.'s work [21] is focused on defining the basic architecture for semantic web services. The proposed architecture is based on three distinct concepts: automatic web service discovery, automatic web service execution, and automatic web service composition and interoperation. In order to create richer applications, which integrate heterogeneous services, it is necessary to design a web service location system. Automatic service discovery requires a repository (centralized or distributed) where web services can be published. When owners publish web services, they provide a semantic description of the service that allows it to be indexed in the repository. This process is complemented with a search mechanism. The search engine will automatically locate web services in the repository that obey the initial conditions. These conditions are defined with the same ontology as the description of the service.
By regulating the service description and the service search, it is possible to obtain perfect matches for any search query. After finding the most adequate web service, the following step is automatic web service execution. Today, to execute any given service, the consumer has to directly access it and command an execution. In the semantic web services era, the software, besides automatically discovering the web service, will also initiate its execution automatically according to the given set of parameters. This is only possible because the service is described, which means that its inputs and outputs are correctly specified and can be filled by the consumer application. Having inputs and outputs defined, the software can automatically select which services should interoperate, in the correct order, to reach the final goal. The latter process leads to the complete scenario of automatic web service composition and interoperation, which embraces both the previous concepts: automatic web service composition combines semantic descriptions with automatic web service discovery and automatic web service execution. A semantic web service scenario will encompass autonomous web service composition. In summary, with these developments it will be possible to ask the system for a given goal, and it will search, compose and execute the correct web service workflow that will give the optimal result.
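A minimal sketch of this discovery-then-composition idea: services advertise typed inputs and outputs drawn from a shared vocabulary, and a simple planner chains them from the available inputs to the requested goal. The service names and concepts below are hypothetical:

```python
# Sketch of automatic discovery and composition: each service advertises
# its input and output concepts from a shared ontology vocabulary, and a
# forward-chaining planner selects services until the goal concept is
# reachable. Service names and concepts are hypothetical.

services = {
    "geocode": {"inputs": {"city"}, "outputs": {"coordinates"}},
    "weather": {"inputs": {"coordinates"}, "outputs": {"forecast"}},
    "flights": {"inputs": {"city"}, "outputs": {"flight_list"}},
}

def compose(available, goal):
    # Forward chaining: repeatedly pick a service whose inputs are
    # already satisfied, until the goal concept becomes available.
    known, plan = set(available), []
    while goal not in known:
        applicable = [n for n, s in services.items()
                      if n not in plan and s["inputs"] <= known]
        if not applicable:
            return None  # goal unreachable with the published services
        name = applicable[0]
        plan.append(name)
        known |= services[name]["outputs"]
    return plan

print(compose({"city"}, "forecast"))  # -> ['geocode', 'weather']
```

The matching here is deliberately naive (exact concept names); real matchmakers such as those built on OWL-S descriptions also exploit subsumption in the ontology, so a service producing a subclass of the requested concept still matches.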


3.2 Agents as Web Services

Agents may be used in any area that is related to, or requires in any form, computer science ideals [22]. This scope means that agent advantages can reach many distinct areas. Web services are, definitely, one of the most used information technologies. However, with the advances in informatics, these concepts get mixed up. Huhns [23] presented an initial vision of a system where agents can act as web services, as the features that compose both technologies are similar. Web services require a well-defined set of components to operate. The three key components are the service provider, the service requester and the service broker. Section 2.2 presents these components and their connections. If we transpose these concepts to a multi-agent system, we get a similar architecture: we also need a multi-agent system controller, a requesting agent and an agent broker to facilitate directory services. The similarities are even visible in the protocols used. The agent communication language (ACL) [24] is capable of describing data exchange messages (like SOAP), describing the agents (like WSDL) and providing dynamic discovery (like UDDI). These three components, in both systems, are enough to provide the required execution functionalities: publish, find and bind. Fig. 4 shows the equivalence between web services and agents.

Fig. 4. Web services and agents equivalence.4
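The ACL/SOAP correspondence can be sketched as a message structure: a FIPA-ACL-style message carries a performative plus sender, receiver, content and ontology fields, much as a SOAP envelope carries headers and a body. The agent names, content expressions and ontology below are hypothetical:

```python
# Sketch of the ACL/SOAP correspondence: a FIPA-ACL-style message is
# a performative plus sender, receiver, content and ontology fields,
# playing the role a SOAP envelope plays for web services. The agent
# names, content and ontology here are hypothetical.

def acl_message(performative, sender, receiver, content, ontology):
    # FIPA ACL defines performatives such as "request" and "inform";
    # here a message is simply a dictionary of those standard fields.
    return {
        "performative": performative,
        "sender": sender,
        "receiver": receiver,
        "content": content,
        "ontology": ontology,
    }

request = acl_message("request", "travel-agent", "flight-broker",
                      "(book-flight :to Aveiro)", "travel-ontology")
reply = acl_message("inform", "flight-broker", "travel-agent",
                    "(booked FL-100)", "travel-ontology")
print(request["performative"], "->", reply["performative"])  # request -> inform
```

Note how the shared `ontology` field is what lets both parties interpret the `content` expression, mirroring the role of semantic descriptions in web service interactions.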

Despite the existing similarities between the models, there are several differences that result in advantages for either of the parties. However, by combining agents with web services, the particular flaws of each are overcome, resulting in an enhanced system composed of the best of both worlds.

4 Figure 1 from [23]


When dealing with ontologies, web services are not able to use them directly or take advantage of their benefits. This means that if the service requester and the service provider "speak different languages" they will not be able to bind or, if they do, they will not reach their goals satisfactorily. On the other hand, agents have a learning capability that allows the requesting agent and the other agents to reach an agreement that enables them to reach the initial objective. These learning capabilities are also important because they define agents. Agents are, by default, able to communicate and exchange any kind of message with other agents. Agents can alert other agents or update their information at any moment. In contrast, web services are independent and passive: they do not execute any task until they receive a particular request to execute a given method.

The communication capabilities that are an essential feature of agents also make them social. This means that agents are capable of working together, communicating and exchanging experiences, while continuing to be independent and autonomous pieces of software. Web services are not autonomous; they cannot operate on their own: a specific trigger is always necessary to execute an operation, and the service responses are always the consequence of a specific request. Agents' awareness of the environment is essential to their social skills. While a web service only knows about itself, agents know, through sensors, the world that surrounds them. Their learning skills make them adaptable to new environments and capable of making decisions that are more adequate to dynamically changing needs.

The agents as web services scenario leads to a straightforward goal: autonomous composition of services. Agents have social awareness and learning capabilities, they are autonomous, and they have their own will to execute a task or to cooperate in executing a workflow of tasks to reach the optimal goal.
This means that merging agent technologies with web services technology will lead to a scenario similar to the one obtained with semantic web services.

4 Agent-based Semantic Web Services

As previously mentioned, the main goal of combining agents, semantics and web services is designing an architecture that can support applications with dynamic and autonomous service composition. This is the area where most research has been done. Despite the wide range of existing solutions, the structure is generically the same: define an ontology for a specific scientific area; implement and publish a set of web services, describing them with the initial ontology; and finally create a system that supports the autonomous composition of web service orchestration or choreography scenarios. Sycara's work [25], despite being considerably old, describes a good setup for reaching the proposed scenario with minimal effort. Around the same time, Milanovic [11] described the existing solutions for web service composition and, in spite of the technology evolution, the concepts behind it are basically the same. Jing was one of the first to mix agents with web services and ontologies in a closed scientific context [26]. More recently, Bansal [3] proposed a system comprising an agent-based


approach for the composition of semantic web services. This is the most up-to-date work and it focuses on the main problems found in implementing this kind of scenario. The agent interaction ontology, as proposed by Lin [27], "refers to a set of semantic markups that specifies the state transitions of the agent". This ontology will be the controlled vocabulary collection that any agent component will respect. The elements of this ontology are, in most cases, restricted to the scientific area where the system will be applied. For instance, if we are creating a system that operates in the disease diagnostics field, known diseases and symptoms will form the ontology. Next, users need to develop the dynamic discovery component that enables autonomous service publishing and location. Sycara [28] proposes a broker-based dynamic discovery and coordination scheme. This work represents the basic setup of an architecture that solves the problems revolving around the coordination of heterogeneous web services. The system uses an ontology defined in OWL-S to create the controlled vocabulary for the services and communication messages.

A generic architecture may rely on two distinct styles: proxy or direct. The proxy solution, which involves mediation, uses a central repository, acting as a mediator, to coordinate the agent interactions. The messages are first exchanged between the requesting agent and the central repository, and then the repository selects an agent provider that corresponds to the original request. When the match is made, the central repository acts as a proxy, mediating the messages between the service requester and the service provider. The direct style only relies on the central repository for the dynamic discovery process. Requesting services communicate with the repository to find the providing services that best fit their original needs.
The repository then provides both the agents involved with the appropriate addresses, enabling direct communication with each other. Finally, it is necessary to actually implement the services and the agents that will respond to them. With these components in the environment, we can start making requests to the system. The system will then be able to automatically create a service orchestration or choreography that gives the optimal answer to the inputted problem. Hence, we will have an ecosystem that is agent-based and relies on intelligent semantics to allow autonomous and dynamic web service composition.
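The proxy and direct styles can be contrasted in a small sketch: a registry either forwards requests itself (proxy) or hands back the provider so the requester calls it directly. The provider and capability names below are hypothetical:

```python
# Sketch of the two broker styles: in "proxy" mode the registry
# mediates every call; in "direct" mode it only performs discovery
# and returns the provider for direct communication. Providers and
# capability names are hypothetical.

class Registry:
    def __init__(self):
        self.providers = {}  # capability -> provider callable

    def publish(self, capability, provider):
        self.providers[capability] = provider

    def call_via_proxy(self, capability, request):
        # Proxy style: the registry forwards the request itself
        # and relays the provider's answer back to the requester.
        return self.providers[capability](request)

    def discover(self, capability):
        # Direct style: hand back the provider; the requester then
        # talks to it without any further mediation.
        return self.providers[capability]

registry = Registry()
registry.publish("diagnose", lambda symptoms: f"diagnosis for {symptoms}")

print(registry.call_via_proxy("diagnose", "fever"))  # proxy-mediated call
provider = registry.discover("diagnose")
print(provider("fever"))                             # direct call
```

The trade-off mirrors the text above: the proxy style gives the broker full visibility (and the ability to log, retry or re-match), at the cost of making it a bottleneck; the direct style scales better but leaves coordination to the agents themselves.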

5 Conclusion

The work presented in this paper reflects the research done by several interested groups in the areas of agents, web services and the semantic web. The constant evolution in the computer science world has led to the acknowledgement of the Internet as the main platform for any kind of human-machine interaction. Hence, the Internet is, by now, the main platform for communication, information publishing and the execution of a multitude of applications. Intelligent agents, originally used in closed artificial intelligence problems, are being adapted to other problems that involve a distinct range of action. Considering the immense sea of growing possibilities that are made possible with the Internet, using agents in the Internet is the next logical step.


The Internet, with its speedy evolution, is growing into a web of services, where a simple service execution can access any kind of resource or operation. Concepts like the GRID [29] or cloud computing [30] rely mostly on these kinds of ideals, leveraging the main purpose: the web as a platform. In the history of communication, message exchanges started between humans. With the evolution of informatics, communication was improved, machines gained significance and humans started interacting with them: the human-machine communication era was born. Nowadays, the purpose is to reach the era of machine-machine communications. To achieve this high goal, content on the web has to be correctly described. The description must involve contextual information to enhance and regulate the communications. This means that developers must program their applications relying on predefined ontologies, creating strong semantics. Semantic web developments allow developers to associate content with context; thus, machines will be able to recognize content and services, improving interaction between machines. Intelligent agents are prone to ontology enhancements. Intelligent agents are social: they are able to learn and communicate. This capability allows them to interact based on a set of rules, which enables them to reach an agreement that leads them to the optimal goal.

In this paper, we describe the collection of components that are necessary to create an intelligent system, based on agents, that enables autonomous web service interaction. Using agents as web services, and describing the environment with a strict ontology, gives developers the tools required for any system that needs some kind of service interaction. Web service composition, which can be divided into orchestration or choreography, is only possible relying on semantics.
Combining semantics with agents, we get an evolved system: an intelligent system, which is capable of solving problems autonomously through communication and learning.

Acknowledgements. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement nº 200754 - the GEN2PHEN project.

References

1. Meikang, Q., Hung-Chung, H., Yang, L.T., Jiande, W.: Intelligent Search Agent for Internet Computing with Fuzzy Approach. 11th IEEE International Conference on Computational Science and Engineering, CSE '08 (2008) 181-188
2. Yi, X., Ming, X., Fan, Z.: Agents-Based Intelligent Retrieval Framework for the Semantic Web. International Conference on Wireless Communications, Networking and Mobile Computing, WiCom 2007 (2007) 5357-5360


3. Bansal, A., Kona, S., Blake, M.B., Gupta, G.: An Agent-Based Approach for Composition of Semantic Web Services. IEEE 17th Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE '08 (2008) 12-17
4. Brazier, F., Keplicz, B.D., Jennings, N.R., Treur, J.: Formal Specification of Multi-Agent Systems: a Real-World Case. First International Conference on Multiagent Systems. AAAI (1995)
5. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall, New Jersey (2003)
6. Sycara, K., Pannu, A., Willamson, M., Dajun, Z., Decker, K.: Distributed intelligent agents. IEEE Expert 11 (1996) 36-46
7. W3C: Web Services. World Wide Web Consortium (2002)
8. W3C: Simple Object Access Protocol. World Wide Web Consortium (2007)
9. W3C: Web Service Description Language. World Wide Web Consortium (2001)
10. OASIS: Universal Description, Discovery and Integration. OASIS (2005)
11. Milanovic, N., Malek, M.: Current solutions for Web service composition. IEEE Internet Computing 8 (2004) 51-59
12. Peltz, C.: Web services orchestration and choreography. Computer 36 (2003) 46-52
13. Berners-Lee, T., Hendler, J., Lassila, O.: The Semantic Web. Scientific American 284 (2001) 34-43
14. W3C: Web Ontology Language. World Wide Web Consortium (2007)
15. Khare, R.: Microformats: the next (small) thing on the semantic Web? IEEE Internet Computing 10 (2006) 68-75
16. W3C: Resource Description Framework. World Wide Web Consortium (2004)
17. Lopes, P., Arrais, J., Oliveira, J.L.: Dynamic Service Integration using Web-based Workflows. 10th International Conference on Information Integration and Web-based Applications & Services. ACM, Linz, Austria (2008) 622-625
18. Cardoso, J., Sheth, A.: Semantic E-Workflow Composition. Journal of Intelligent Information Systems (2003)
19. Belleau, F., Nolin, M.-A., Tourigny, N., Rigault, P., Morissette, J.: Bio2RDF: Towards a mashup to build bioinformatics knowledge systems. Journal of Biomedical Informatics 41 (2008) 706-716
20. Cheung, K.-H., Kashyap, V., Luciano, J.S., Chen, H., Wang, Y., Stephens, S.: Semantic mashup of biomedical data. Journal of Biomedical Informatics 41 (2008) 683-686
21. McIlraith, S.A., Son, T.C., Honglei, Z.: Semantic Web services. IEEE Intelligent Systems 16 (2001) 46-53
22. Merelli, E., Armano, G., Cannata, N., Corradini, F., d'Inverno, M., Doms, A., Lord, P., Martin, A., Milanesi, L., Moeller, S., Schroeder, M., Luck, M.: Agents in bioinformatics, computational and systems biology. Briefings in Bioinformatics 8 (2007) 45-59
23. Huhns, M.N.: Agents as Web services. IEEE Internet Computing 6 (2002) 93-95
24. Foundation for Intelligent Physical Agents: FIPA ACL Message Structure Specification (2002)
25. Sycara, K., Paolucci, M., Ankolekar, A., Srinivasan, N.: Automated discovery, interaction and composition of Semantic Web services. Web Semantics: Science, Services and Agents on the World Wide Web 1 (2003) 27-46
26. Jing, D., Bi, S., Wu, F.: Geospatial information services on the basis of agent and OWL-S. IEEE International Geoscience and Remote Sensing Symposium, IGARSS '05, Vol. 2 (2005)
27. Yong-Feng, L., Chen, J.J.Y.: OWL-Based Description for Agent Interaction. 31st Annual International Computer Software and Applications Conference, COMPSAC 2007, Vol. 2 (2007) 147-152
28. Sycara, K., Paolucci, M., Soudry, J., Naveen, S.: Dynamic discovery and coordination of agent-based semantic Web services. IEEE Internet Computing 8 (2004) 66-73


29. Schroeder, M., Burger, A., Kostkova, P., Stevens, R., Habermann, B., Dieng-Kuntz, R.: From a Services-based eScience Infrastructure to a Semantic Web for the Life Sciences: The Sealife Project. Proceedings of the Sixth International Workshop NETTAB 2006 on "Distributed Applications, Web Services, Tools and GRID Infrastructures for Bioinformatics" (2006)
30. Vouk, M.A.: Cloud computing - Issues, research and implementations. 30th International Conference on Information Technology Interfaces, ITI 2008 (2008) 31-40

