𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Explaining answers from the Semantic Web: the Inference Web approach

โœ Scribed by Deborah L. McGuinness; Paulo Pinheiro da Silva


Publisher
Elsevier Science
Year
2004
Tongue
English
Weight
491 KB
Volume
1
Category
Article
ISSN
1570-8268


✦ Synopsis


The Semantic Web lacks support for explaining answers from web applications. When applications return answers, many users do not know what information sources were used, when they were updated, how reliable the sources were, or what information was looked up versus derived. Many users also do not know how implicit answers were derived. The Inference Web (IW) aims to take opaque query answers and make them more transparent by providing infrastructure for presenting and managing explanations. The explanations include information about where answers came from (knowledge provenance) and how they were derived (or retrieved). In this article we describe an infrastructure for IW explanations. The infrastructure includes: IWBase, an extensible web-based registry containing details about information sources, reasoners, languages, and rewrite rules; PML, the Proof Markup Language specification and API used for encoding portable proofs; the IW browser, a tool supporting navigation and presentation of proofs and their explanations; and a new explanation dialogue component. Source information in the IWBase is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IWBase are used to support proofs, proof combination, and Semantic Web agent interoperability. The Inference Web is in use by four Semantic Web agents, three of them using embedded reasoning engines fully registered in the IW. Inference Web also provides explanation infrastructure for a number of DARPA and ARDA projects.
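The idea of a portable proof carrying knowledge provenance, as sketched in the synopsis, can be illustrated in a few lines of code. The class and field names below loosely echo PML concepts (node sets, inference steps, registered sources), but they are illustrative assumptions for exposition, not the actual PML schema or IWBase API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: names loosely echo PML concepts but are
# assumptions made for exposition, not the real PML schema or IWBase API.

@dataclass
class Source:
    name: str          # a source as it might be registered in IWBase
    last_updated: str  # provenance: when the source was last updated

@dataclass
class InferenceStep:
    rule: str                                  # rule or reasoner operation applied
    antecedents: List["NodeSet"] = field(default_factory=list)
    source: Optional[Source] = None            # set when the step is a direct lookup

@dataclass
class NodeSet:
    conclusion: str                            # the statement this node establishes
    steps: List[InferenceStep] = field(default_factory=list)

def provenance(node: NodeSet) -> List[str]:
    """Walk the proof tree and collect every source the conclusion rests on."""
    names: List[str] = []
    for step in node.steps:
        if step.source is not None:
            names.append(step.source.name)
        for antecedent in step.antecedents:
            names.extend(provenance(antecedent))
    return names

# A toy proof: one looked-up fact supporting one derived answer.
fact = NodeSet("Socrates is a man",
               [InferenceStep("direct lookup",
                              source=Source("example-kb", "2004-01-01"))])
answer = NodeSet("Socrates is mortal",
                 [InferenceStep("generalized modus ponens", antecedents=[fact])])

print(provenance(answer))  # the derived answer traces back to "example-kb"
```

Walking the proof tree like this is what lets a browser distinguish looked-up facts (steps with a source) from derived ones (steps with antecedents), which is the transparency the synopsis describes.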


📜 SIMILAR VOLUMES


Open answer set programming for the Semantic Web
✍ Stijn Heymans; Davy Van Nieuwenborgh; Dirk Vermeir 📂 Article 📅 2007 🏛 Elsevier Science 🌐 English ⚖ 378 KB

We extend answer set programming (ASP) with, possibly infinite, open domains. Since this leads to undecidable reasoning, we restrict the syntax of programs, while carefully guarding knowledge representation mechanisms such as negation as failure and inequalities. Reasoning with the resulting extended…

Text summarization contribution to semantic question answering…
✍ Elena Lloret; Hector Llorens; Paloma Moreda; Estela Saquete; Manuel Palomar 📂 Article 📅 2011 🏛 John Wiley and Sons 🌐 English ⚖ 400 KB

As the Internet grows, it becomes essential to find efficient tools to deal with all the available information. Question answering (QA) and text summarization (TS) research fields focus on presenting the information requested by users in a more concise way. In this paper, the appropriateness and benefits…

Extracting focused knowledge from the Semantic Web
✍ Louise Crow; Nigel Shadbolt 📂 Article 📅 2001 🏛 Elsevier Science 🌐 English ⚖ 1014 KB

Ontologies are increasingly being recognized as a critical component in making networked knowledge accessible. Software architectures which can assemble knowledge from networked sources coherently according to the requirements of a particular task or perspective will be at a premium in the next generation…

OWL-QL—a language for deductive query answering on the Semantic Web
✍ Richard Fikes; Patrick Hayes; Ian Horrocks 📂 Article 📅 2004 🏛 Elsevier Science 🌐 English ⚖ 120 KB

This paper discusses the issues involved in designing a query language for the Semantic Web and presents the OWL query language (OWL-QL) as a candidate standard language and protocol for query-answering dialogues among Semantic Web computational agents using knowledge represented in the W3C's ontology…

Start making sense: The Chatty Web approach…
✍ Karl Aberer; Philippe Cudré-Mauroux; Manfred Hauswirth 📂 Article 📅 2003 🏛 Elsevier Science 🌐 English ⚖ 333 KB

This paper describes a novel approach for obtaining semantic interoperability in a bottom-up, semi-automatic manner without relying on pre-existing, global semantic models. We assume that large amounts of data exist that have been organized and annotated according to local schemas. Seeing semantics…