Erik Wilde's Publications


The publications are grouped into written publications and presentations. Written publications are grouped by category; you may jump directly to the sections listing authored books, edited books/volumes, theses, book chapters, journal papers, standardization activities, conference papers and posters, workshop papers, technical reports, magazine articles, newspaper articles, or online articles. Presentations are also grouped by category; you may jump directly to the sections listing university courses, invited talks, tutorials, talks, or professional courses.

Publications available online are sometimes accessible in several formats. PostScript files are intended for printing and for previewing with PostScript viewers such as ghostview. PDF files are documents in Adobe's Portable Document Format; you need Adobe's Acrobat Reader (or another PDF viewer) to view them.

Additional information is available in the curriculum vitae and on the home page of Erik Wilde.


Written Publications

Books, Edited Books/Volumes, Theses, and Book Chapters | Journal Papers | Standardization | Reviewed Conference Papers & Posters and Workshop Papers | Technical Reports | Magazine, Newspaper, and Online Articles
2014 REST2 IT 56(3) RFC 7111 RFC 7351 draft-ietf-appsawg-http-problem draft-wilde-atom-profile draft-wilde-accept-post draft-wilde-home-xml draft-wilde-link-desc XML Prague 2014 UbiComp 2014 W3C WoT
2013 TDO WWW2013 WS-REST 2013 RFC 6892 RFC 6906 RFC 7061 Balisage 2013 W3C eBooks
2012 WS-REST 2012 WoT 2012
2011 WS-REST 2011 REST1 IoT WoT WoT 2011 iConf 2011 WWW2011 ITNG 2011 JCDL 2011 ICSOC 2011 iRep 2011-042
2010 WS-REST 2010 LocWeb 2010 JSP 41(2) JWE 9(4) RFC 5724 WWW20101 WWW20102 WebSci 10 LDOW2010 ICWE 2010 W3C Policy SPRINGL 2010 IoT 2010 WESOA 2010 ICSOC 2010 PC/ETHZ iRep 2010-038 iRep 2010-040 iRep 2010-041
2009 Weaving LocWeb 2009 WEWST 2009 IPSN 2009 WWW2009 ICWE 2009 FQAS 2009 WEWST 2009 iRep 2009-028 iRep 2009-029 iRep 2009-030 DERI TR iRep 2009-035
2008 LocWeb 2008 IJWBC 4(1) OIR 32(3) CACM 51(7) CACM 51(10) ACM Queue 6(6) RFC 5147 WSW2008 LocWeb 2008 EXPONWIRELESS 2008 SCC 2008 IRI 2008 BCS 2008 TIPUGG 2008 HCIR 2008 iRep 2008-016 iRep 2008-025 iRep 2008-026
2007 VDF JoDI 8(3) WWW20071 WWW20072 WWW20073 XTech 2007 SCC 2007 IRI 2007 DocEng 2007 BXML 2007 iRep 2007-001 eCH-0036 eCH-0050 iRep 2007-014 iRep 2007-015 xml.com3
2006 WBC 2006 WWW20061 WWW20062 WWW20063 WWW20064 ELPUB 2006 GMW06 TIKrep 242 TIKrep 244 TIKrep 245 TIKrep 257 eCH-0033 TIKrep 265 eCH-0035
2005 WWW20051 WWW20052 HT 2005 BXML 2005 ECDL 2005 IAWTIC 20051 IAWTIC 20052 TIKrep 212 TIKrep 213 TIKrep 224 eCH-0018 XML & WS 2005(2) iX 18(7) XML & WS 2005(4) iX 18(10)
2004 PHIC1 PHIC2 ISTL 41 XML Europe 2004 ICETE 20041 ICETE 20042 ECOWS'04 XSW 2004 TIKrep 190 TIKrep 194 XML & WS 2004(1) xml.com2 XML & WS 2004(2) XML & WS 2004(3) iX 17(7) XML & WS 2004(4) D-Lib 10(9)
2003 IEEE IC 7(5) XML Europe 2003 WWW20031 WWW20032 WWW20033 IUC24 SINN03 TIKrep 160 TIKrep 166 TIKrep 172 W3C BXML iX 16(2) XML & WS 2003(5) xml.com1 XML & WS 2003(6)
2002 XLink WWW2002 XML 2002 TIKrep 124 TIKrep 125 TIKrep 134 TIKrep 143 TIKrep 148 iX 15(7) iX 15(8)
2001 WWW101 WWW102 Open Publish 2001 TIKrep 102 iX 14(3) iX 14(7)
2000 HICSS-33 NZZ Australian IT iX 13(6)
1999 WWW (german)
1998 WWW
1997 Ph.D. thesis ETT 8(4)
1996 COST 237 ECMAST 96 TIKrep 15 TIKrep 19
1995 TCCC 95 ULPAA 95
1994 TIKrep 18 TIKrep2 TIKrep3
1993 MCAT 93 ZBF 224Z1 ZBF 224Z2 ZBF 224Z3
1992 TIKrep1
1991 Diploma thesis

Books

Edited Books/Volumes

Theses

Book Chapters

  • Robert J. Glushko, Erik Wilde, and Jess Hemerly, Activities in Organizing Systems. In: Robert J. Glushko (Editor), The Discipline of Organizing, pp. 47–93, MIT Press, Cambridge, Massachusetts, 2013, ISBN 978-0-262-51850-5.
  • Dominique Guinard, Vlad Trifa, Friedemann Mattern and Erik Wilde, From the Internet of Things to the Web of Things: Resource-Oriented Architecture and Best Practices. In: Dieter Uckelmann, Mark Harrison, and Florian Michahelles (Editors), Architecting the Internet of Things, pp. 97–129, Springer, Heidelberg, Germany, May 2011. (available as abstract and PDF)
    Abstract: Creating networks of smart things found in the physical world (e.g., with RFID, wireless sensor and actuator networks, embedded devices) on a large scale has become the goal of a variety of recent research activities. Rather than exposing real-world data and functionality through vertical system designs, we propose to make them an integral part of the Web. As a result, smart things become easier to build upon. In such an architecture, popular Web technologies (e.g., HTML, JavaScript, Ajax, PHP, Ruby) can be used to build applications involving smart things, and users can leverage well-known Web mechanisms (e.g., browsing, searching, bookmarking, caching, linking) to interact with and share these devices. In this chapter, we describe the Web of Things (WoT) architecture and best practices based on the RESTful principles that have already contributed to the popular success, scalability, and evolvability of the Web. We discuss several prototypes using these principles, which connect environmental sensor nodes, energy monitoring systems, and RFID-tagged objects to the Web. We also show how Web-enabled smart things can be used in lightweight ad-hoc applications, called physical mashups, and discuss some of the remaining challenges towards the global World Wide Web of Things.
  • Martin Kofahl and Erik Wilde, Location Concepts for the Web. In: Irwin King and Ricardo Baeza-Yates (Editors), Weaving Services and People on the World Wide Web, pp. 147–168, Springer, Heidelberg, Germany, August 2009, ISBN 978-3-642-00570-1.
  • Erik Wilde, ShaRef: Bibliographien als Wissensspeicher. In: Verena Friedrich, Christoph Clases and Theo Wehner (Editors), Hochschule im info-strukturellen Wandel, Chapter 3.3, pp. 285–298, vdf Verlag, Zürich, Switzerland, March 2007, ISBN 978-3-7281-3079-2. (available as PDF)
  • Erik Wilde, XML Core Technologies. In: Munindar P. Singh (Editor), The Practical Handbook of Internet Computing, Chapter 23, pp. 23-1–23-18, CRC Press, Baton Rouge, Florida, September 2004, ISBN 1584883812.
  • Erik Wilde, Advanced XML Technologies. In: Munindar P. Singh (Editor), The Practical Handbook of Internet Computing, Chapter 24, pp. 24-1–24-10, CRC Press, Baton Rouge, Florida, September 2004, ISBN 1584883812.

Journal Papers

  • Erik Wilde, Managing a RESTful SOA: Providing Guidance for Service Designers and Orientation for Service Consumers, Information Technology, 56(3):98–105, June 2014. (available as abstract)
    Abstract: Managing a Service Oriented Architecture (SOA) requires a well-defined model of what a service actually is, and a structure within which services can be published, documented, and consumed. Without such a definition and organizational structure, it is hard to reap the benefits of a SOA. This paper presents a case study that is closely aligned with how the Web is organized as a SOA, but adds some structure so that service producers and service consumers can be supported in their goals. Using this approach, it is possible to realize the architectural benefits of a RESTful architecture while still making sure that the published services follow the set of guidelines and constraints on which the SOA is based.
  • Erik Wilde and Anuradha Roy, Web Site Metadata, Journal of Web Engineering, 9(4):283–301, December 2010. (available as abstract and PDF)
    Abstract: The currently established formats for how a Web site can publish metadata about a site's pages, the robots.txt file and sitemaps, focus on how to provide information to crawlers about where to not go and where to go on a site. This is sufficient as input for crawlers, but does not allow Web sites to publish richer metadata about their site's structure, such as the navigational structure. This paper looks at the availability of Web site metadata on today's Web in terms of available information resources and quantitative aspects of their contents. Such an analysis of the available Web site metadata not only makes it easier to understand what data is available today; it also serves as the foundation for investigating what kind of information retrieval processes could be driven by that data, and what additional data could be provided by Web sites if they had richer data formats to publish metadata.
  • Jöran Beel, Bela Gipp and Erik Wilde, Academic Search Engine Optimization (ASEO): Optimizing Scholarly Literature for Google Scholar & Co., Journal of Scholarly Publishing, 41(2):176–190, January 2010. (available as abstract and PDF)
    Abstract: This article introduces and discusses the concept of Academic Search Engine Optimization (ASEO). Based on three recently conducted studies, guidelines are provided on how to optimize scholarly literature for academic search engines in general, and for Google Scholar in particular. In addition, we briefly discuss the risk of researchers' illegitimately over-optimizing their articles.
  • Erik Wilde and Robert J. Glushko, XML Fever, ACM Queue, 6(6):46–53, October 2008. (available as abstract and HTML)
    Abstract: The Extensible Markup Language (XML), which just celebrated its 10th birthday, is one of the big success stories of the Web. Apart from basic Web technologies (URIs, HTTP, and HTML) and the advanced scripting driving the Web 2.0 wave, XML is by far the most successful and ubiquitous Web technology. With great power, however, comes great responsibility, so while XML's success is well earned as the first truly universal standard for structured data, it must now deal with numerous problems that have grown up around it. These are not entirely the fault of XML itself, but instead can be attributed to exaggerated claims and ideas of what XML is and what it can do.
  • Erik Wilde and Robert J. Glushko, Document Design Matters, Communications of the ACM, 51(10):43–49, October 2008. (available as abstract and HTML)
    Abstract: The classical approach to the data aspect of system design distinguishes conceptual, logical, and physical models. Models of each type or level are governed by metamodels that specify the kinds of concepts and constraints that can be used by each model; in most cases metamodels are accompanied by languages for describing models. For example, in database design, conceptual models usually conform to the Entity-Relationship (ER) metamodel (or some extension of it), the logical model maps ER models to relational tables and introduces normalization, and the physical model handles implementation issues such as possible denormalizations in the context of a particular database schema language. In this modeling methodology, there is a single hierarchy of models that rests on the assumption that one data model spans all modeling levels and applies to all the applications in some domain. The one true model approach assumes homogeneity, but this does not work very well for the Web. The Web as a constantly growing ecosystem of heterogeneous data and services has challenged a number of practices and theories about the design of IT landscapes. Instead of being governed by one true model used by everyone, the underlying assumption of top-down design, Web data and services evolve in an uncoordinated fashion. As a result, a fundamental challenge with Web data and services is matching and mapping local and often partial models that not only are different models of the same application domain, but also differ, implicitly or explicitly, in their associated metamodels.
  • Erik Wilde and Robert J. Glushko, XML Fever, Communications of the ACM, 51(7):40–46, July 2008. (available as abstract and HTML)
    Abstract: The Extensible Markup Language (XML), which just celebrated its 10th birthday, is one of the big success stories of the Web. Apart from basic Web technologies (URIs, HTTP, and HTML) and the advanced scripting driving the Web 2.0 wave, XML is by far the most successful and ubiquitous Web technology. With great power, however, comes great responsibility, so while XML's success is well earned as the first truly universal standard for structured data, it must now deal with numerous problems that have grown up around it. These are not entirely the fault of XML itself, but instead can be attributed to exaggerated claims and ideas of what XML is and what it can do.
  • Erik Wilde, Deconstructing Blogs, Online Information Review, 32(3):401–414, 2008. (available as abstract)
    Abstract: Purpose: A growing amount of information available on the Web can be classified as "contextual information", putting already existing information into a new context rather than creating isolated new information resources. Blogs are a typical and popular example of this category. By looking at blogs from a more context-oriented view, it is possible to deconstruct them into structures which are more contextual than just focused on the content, facilitating flexible reuse of these structures.
    Design/Methodology/Approach: We look at the underlying structures of blogs and blog posts, representing them as multi-ended links. This alternative representation of blogs and blog posts allows us to represent them as reusable information structures. This paper presents blogs as a popular content type, but the approach of restructuring Web 2.0 content can be extended to other classes of information, as long as they can be regarded as being mainly contextual.
    Findings: By deconstructing blogs and blog posts into their essential properties, we can show how there is a simple and universal representation for blogs. This representation allows the reuse of blog information across specific blog or blogging platforms, and can even go beyond blogs by representing other Web content which provides context.
    Originality/Value: The approach presented in this paper is a novel approach of mapping a popular Web content type to a simple and universal representation. The value of such a unified representation lies in exposing the structural similarities among blogs and blog posts, and making them available for reuse.
  • Erik Wilde, Sai Anand, Thierry Bücheler, Max Jörg, Nick Nabholz and Petra Zimmermann, Collaboration Support for Bibliographic Data, International Journal of Web Based Communities, 4(1):98–109, January 2008. (available as abstract)
    Abstract: In many collaborative research settings, electronic bibliographic repositories (bibliographies) are used to aggregate information about related work among researchers. These bibliographies allow for group bibliography collection, individual tracking of each user's library, and personal annotation capabilities within each user's library. However, most tools used for managing bibliographic data do not support collaboration. Given the collaborative nature of the research group, this information should be shareable between researchers within the group and potentially across larger organizational units (for example, research institutes). By using ShaRef, users can share bibliographic information and collaborate, publish and export data using a variety of output channels. ShaRef's goal is to make sharing of and collaboration with bibliographic information easier than it is today.
  • Erik Wilde, Personalization of Shared Data: The ShaRef Approach, Journal of Digital Information, 8(3), 2007. (available as abstract)
    Abstract: Personalization of services often has to cope with the conflicting goals of allowing cooperation and sharing, which require common data formats and services, and supporting individual use cases, which require as much personalization as possible. In this paper we present the ShaRef approach to personalization and sharing, which on the one hand allows users to cooperatively work with bibliographic references, and on the other hand supports the usage of this information in personalized and diverse ways. The goal of this approach is to foster as much cooperation as possible, while simultaneously supporting users with individualized ways of reusing the cooperatively managed data. This way of building applications combines the beneficial aspects of information sharing and personalization. Using this approach, applications are better suited to become building blocks in information infrastructures that are built by users in unpredictable ways.
  • Erik Wilde, References as Knowledge Management, Issues in Science & Technology Librarianship, No. 41, Fall 2004. (available as abstract)
    Abstract: Management of bibliographic and Web references for many researchers is the closest thing to knowledge management they will ever do. This article describes ShaRef, a new approach to reference management that focuses on the user and enhances traditional reference management approaches with collaboration features and lightweight knowledge management. While this is primarily targeted at providing individual users and user groups with a better tool, it also creates a new and interesting link to libraries, because of the features that enable users to go from their own references directly to the library through the use of OpenURL. Thus, a new task for libraries is to adjust to this new type of users, who are using new technologies to access a library.
  • Erik Wilde, XML Technologies Dissected, IEEE Internet Computing, 7(5):74–78, September/October 2003. (available as abstract)
    Abstract: XML technologies are very popular, and one of the most important reasons for this is the availability of tools and technologies for working with XML, eliminating the need to build XML processing from scratch. However, XML technologies are built on top of inherent (and not always well-defined) information models, and this may cause problems because (1) the information models of some tools may not support the required "view" of XML, or (2) there is no appropriate data model to work with the information model in question. In this article, we approach this question from the systematic side, and describe the most prominent XML technologies with regard to their information and data models.
  • Erik Wilde and Bernhard Plattner, Transport-Independent Group and Session Management for Group Communication Platforms, European Transactions on Telecommunications, 8(4): 409–421, July 1997. (available as abstract, PostScript, and PDF)
    Abstract: With more and more computers gradually changing from isolated, personal tools to networked workstations, group communications is an area of research which has received much attention recently. This paper focuses on a model and the architecture of a system which supports group communications by providing group and session management functionality. The system architecture is related to DNS and X.500, but avoids their complexity by focusing on group and session management and adding functionality where necessary. New functionality is needed for the dynamics of group communications (members of a connection may change over the lifetime of the connection) and for the increased complexity of relations which may be established between objects. A model is described which defines six object types which represent the relevant objects. Users and groups represent real-world users and their relations. Sessions and flows describe ongoing group communications. Flow templates and certificates provide mechanisms for management and security issues. The architecture presented in this paper can be used for group and session management support within different group communications platforms. A description of the implementation, as well as implementation results, is given in the last section.
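The Web Site Metadata paper above analyzes robots.txt files and sitemaps, the established formats through which sites tell crawlers where to go and where not to go. A minimal sketch of reading such metadata with Python's standard library (the file contents and URLs here are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# An invented robots.txt: one crawling restriction plus a sitemap pointer,
# the two kinds of site metadata discussed in the paper.
robots_txt = """\
User-agent: *
Disallow: /private/
Sitemap: https://example.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Crawlers consult the parsed rules before fetching a page.
print(rp.can_fetch("*", "https://example.com/private/x"))  # False
print(rp.can_fetch("*", "https://example.com/public/x"))   # True
```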

Standardization Activities

  • Mark Nottingham and Erik Wilde, Problem Details for HTTP APIs, Internet Draft draft-ietf-appsawg-http-problem-00, September 2014. (available as abstract, ASCII, and HTML)
    Abstract: This document defines a "problem detail" as a way to carry machine-readable details of errors in an HTTP response, to avoid the need to invent new error response formats for HTTP APIs.
  • Erik Wilde, A Media Type for XML Patch Operations, RFC 7351, August 2014. (available as abstract, ASCII, and HTML)
    Abstract: The XML Patch document format defines an XML document structure for expressing a sequence of patch operations to be applied to an XML document. The XML Patch document format builds on the foundations defined in RFC 5261. This specification also provides the media type registration "application/xml-patch+xml", to allow the use of XML Patch documents in, for example, HTTP conversations.
  • John Arwe, Steve Speicher, and Erik Wilde, The Accept-Post HTTP Header, Internet Draft draft-wilde-accept-post-03, August 2014. (available as abstract, ASCII, and HTML)
    Abstract: This specification defines a new HTTP response header field Accept-Post, which indicates server support for specific media types for entity bodies in HTTP POST requests.
  • Erik Wilde, Profile Support for the Atom Syndication Format, Internet Draft draft-wilde-atom-profile-04, July 2014. (available as abstract, ASCII, and HTML)
    Abstract: The Atom syndication format is a generic XML format for representing collections. Profiles are one way in which Atom feeds can indicate that they support specific extensions. To make this support visible on the media type level, this specification adds an optional "profile" media type parameter to the Atom media type. This allows profiles to become visible at the media type level, so that servers as well as clients can indicate support for specific Atom profiles in conversations, for example when communicating via HTTP. This specification updates RFC 4287 by adding the "profile" media type parameter to the application/atom+xml media type registration.
  • Erik Wilde, HTTP Link Descriptions, Internet Draft draft-wilde-link-desc-01, February 2014. (available as abstract, ASCII, and HTML)
    Abstract: Interactions with many resources on the Web are driven by links, and these links often define certain expectations about the interactions (such as HTTP methods being used, media types being sent in the request, or URI parameters being used in a certain way). While these expectations are essential to define the possible framework for interactions, it may be useful to further narrow them down by providing link descriptions, which can help clients to gain more runtime knowledge about the resource they are about to interact with. This memo defines Link Descriptions, a model and associated media type that can be used to describe links by supporting descriptive markup for representing interaction information with links. Link Descriptions can be used by media types (by inclusion or by reference) that seek to make Link Descriptions runtime-capable, without having to create their own representation.
  • Erik Wilde, Home Documents for HTTP Services: XML Syntax, Internet Draft draft-wilde-home-xml-03, February 2014. (available as abstract, ASCII, and HTML)
    Abstract: The current draft for HTTP Home Documents provides a JSON syntax only. This draft provides an XML syntax for the same underlying data model, so that the concept of HTTP Home Documents can be consistently exposed in both JSON- and XML-based HTTP services.
  • Michael Hausenblas, Erik Wilde, and Jeni Tennison, URI Fragment Identifiers for the text/csv Media Type, RFC 7111, January 2014. (available as abstract, ASCII, and HTML)
    Abstract: This memo defines URI fragment identifiers for text/csv MIME entities. These fragment identifiers make it possible to refer to parts of a text/csv MIME entity identified by row, column, or cell. Fragment identification can use single items or ranges.
  • Rémon Sinnema and Erik Wilde, eXtensible Access Control Markup Language (XACML) XML Media Type, RFC 7061, November 2013. (available as abstract, ASCII, and HTML)
    Abstract: This specification registers an XML-based media type for the eXtensible Access Control Markup Language (XACML).
  • Erik Wilde, The 'profile' Link Relation Type, RFC 6906, March 2013. (available as abstract, ASCII, and HTML)
    Abstract: This specification defines the 'profile' link relation type that allows resource representations to indicate that they are following one or more profiles. A profile is defined not to alter the semantics of the resource representation itself, but to allow clients to learn about additional semantics (constraints, conventions, extensions) that are associated with the resource representation, in addition to those defined by the media type and possibly other mechanisms.
  • Erik Wilde, The 'describes' Link Relation Type, RFC 6892, March 2013. (available as abstract, ASCII, and HTML)
    Abstract: This specification defines the 'describes' link relation type that allows resource representations to indicate that they are describing another resource. In contexts where applications want to associate described resources and description resources, and want to build services based on these associations, the 'describes' link relation type provides the opposite direction of the 'describedby' link relation type, which already is a registered link relation type.
  • Erik Wilde and Antti Vähä-Sipilä, URI Scheme for Global System for Mobile Communications (GSM) Short Message Service (SMS), RFC 5724, January 2010. (available as abstract, ASCII, and HTML)
    Abstract: This memo specifies the Uniform Resource Identifier (URI) scheme "sms" for specifying one or more recipients for an SMS message. SMS messages are two-way paging messages that can be sent from and received by a mobile phone or a suitably equipped networked device.
  • Erik Wilde and Martin Dürst, URI Fragment Identifiers for the text/plain Media Type, RFC 5147, April 2008. (available as abstract, ASCII, and HTML)
    Abstract: This memo defines URI fragment identifiers for text/plain MIME entities. These fragment identifiers make it possible to refer to parts of a text/plain MIME entity, either identified by character position or range, or by line position or range. Fragment identifiers may also contain information for integrity checks to make them more robust.
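The Problem Details draft listed above defines a JSON format for machine-readable error responses. A minimal sketch of such an application/problem+json body, assuming the commonly documented members; the problem type URI and all values are invented for illustration:

```python
import json

# Hypothetical problem detail document; "type", "title", "status",
# "detail", and "instance" are the members described by the draft.
problem = {
    "type": "https://example.com/probs/out-of-credit",  # URI identifying the problem type
    "title": "You do not have enough credit.",          # short human-readable summary
    "status": 403,                                      # matching HTTP status code
    "detail": "Your current balance is 30, but the transfer costs 50.",
    "instance": "/account/12345/transfers/abc",         # this specific occurrence
}

# This serialization would be sent with Content-Type: application/problem+json.
body = json.dumps(problem)
print(json.loads(body)["status"])  # 403
```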
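Two of the RFCs listed above define fragment identifier syntaxes: RFC 5147 for text/plain (e.g. "#char=0,10" or "#line=10,20") and RFC 7111 for text/csv (e.g. "#row=4" or "#row=5-7"). The following illustrative parsers sketch only the simple position/range forms; function names are made up, and integrity checks and open-ended ranges are omitted:

```python
import re

def parse_plain_fragment(frag):
    """Parse a text/plain fragment of the form 'char=a,b' or 'line=a,b'
    (RFC 5147 range form) into (scheme, start, end)."""
    m = re.fullmatch(r"(char|line)=(\d+),(\d+)", frag)
    if m is None:
        raise ValueError(f"unsupported text/plain fragment: {frag!r}")
    return m.group(1), int(m.group(2)), int(m.group(3))

def parse_csv_fragment(frag):
    """Parse a text/csv fragment of the form 'row=n', 'col=n', or
    'row=m-n' (RFC 7111 single item or range) into (scheme, start, end)."""
    m = re.fullmatch(r"(row|col)=(\d+)(?:-(\d+))?", frag)
    if m is None:
        raise ValueError(f"unsupported text/csv fragment: {frag!r}")
    start = int(m.group(2))
    end = int(m.group(3)) if m.group(3) else start
    return m.group(1), start, end

print(parse_plain_fragment("line=10,20"))  # ('line', 10, 20)
print(parse_csv_fragment("row=5-7"))       # ('row', 5, 7)
```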

Reviewed Conference Papers

  • Jonathan Robie, Rémon Sinnema, and Erik Wilde, RADL: RESTful API Description Language, XML Prague 2014, Prague, Czech Republic, February 2014. (available as abstract)
    Abstract: In a REST API, the server provides options to a client in the form of hypermedia links in documents, and the main thing a client needs to know is how to locate and use these links in order to use the API. The main job of a REST API description is to provide this information to the client in the context of media type descriptions. Unfortunately, most REST service description languages and design methodologies focus on other concerns instead. RESTful API Description Language (RADL) is an XML vocabulary for describing Hypermedia-driven RESTful APIs. The APIs it describes may use any media type, in XML, JSON, HTML, or any other format. The structure of a RADL description is based on media types, including the documents associated with a media type, links found in these documents, and the interfaces associated with these links. RADL can be used as a specification language or as run-time metadata to describe a service.
  • Jonathan Robie, Rob Cavicchio, Rémon Sinnema, and Erik Wilde, RESTful Service Description Language (RSDL): Describing RESTful Services Without Tight Coupling, Balisage: The Markup Conference 2013, Montréal, Canada, August 2013. (available as abstract)
    Abstract: RESTful Service Description Language (RSDL) is an XML vocabulary for designing and documenting hypermedia-driven RESTful Services. RSDL takes a purist hypermedia-driven approach to REST design, requiring that a service have a single entry point, and focusing the design on resources, links, and media types.
  • Cesare Pautasso and Erik Wilde, Push-Enabling RESTful Business Processes, 9th International Conference on Service Oriented Computing (ICSOC 2011), Paphos, Cyprus, December 2011. (available as abstract and PDF)
    Abstract: Representational State Transfer (REST) as an architectural style for service design has seen substantial uptake in the past years. However, some areas such as Business Process Modeling (BPM) and push services so far have not been addressed in the context of REST principles. In this work, we look at how both BPM and push can be combined so that business processes can be modeled and observed in a RESTful way. Based on this approach, clients can subscribe to be notified when certain states in a business process are reached. Our goal is to design an architecture that brings REST's claims of loose coupling and good scalability to the area of BPM, and still allow process-driven composition and interaction between resources to be modeled.
  • Erik Wilde, Open and Accessible Presentations, 8th International Conference on Information Technology: New Generations (ITNG 2011), Las Vegas, Nevada, April 2011. (available as abstract and PDF, and paper presentation)
    Abstract: E-learning is often perceived as something that, on the technical level, can be addressed by designing an e-learning system, which often is equipped with a Web-based interface. We argue that this traditional approach of e-learning system design should be reversed in today's Web-oriented environment, in the sense that e-learning applications should be designed as well-behaving Web citizens and expose their services through nothing else but the Web's loose coupling principles. This article presents a system for Web-based presentations which follows this approach, publishing presentation material in a way that is as Web-friendly as possible. We show how such a system can be used as one building block in an e-learning infrastructure, replacing the traditional view of monolithic e-learning systems with an open and loosely coupled ecosystem of cooperating e-learning Web applications.
  • Yiming Liu and Erik Wilde, Personalized Location-Based Services, iConference 2011, Seattle, Washington, February 2011. (available as abstract and PDF)
    Abstract: Location-Based Services (LBS) are based on a combination of the inherent location information about specific data, and/or the location information supplied by LBS clients, requesting location-specific and otherwise customized services. The integration of location-annotated data with existing personal and public information and services creates opportunities for insightful new views on the world, and allows rich, personalized, and contextualized user experiences. One of the biggest constraints of current LBS is that most of them are essentially vertical services. These designs make it hard for users to integrate LBS from a variety of service providers, either to create intermediate value-added services such as social information sharing facilities, or to facilitate client-side aggregations and mashups across specific LBS providers. Our approach, the Tiled Feeds architecture, applies the well-established, standard Web service pattern of feeds, and extends it with query and location-based features. Using this approach, LBS on the Web can be exposed in a generalized and aggregation-friendly way. We believe this approach can be used to facilitate the creation of standardized, Web-friendly, horizontally integrated location-based services.
  • Erik Wilde, Linked Data and Service Orientation, 8th International Conference on Service Oriented Computing (ICSOC 2010), San Francisco, California, December 2010. (available as abstract, PDF, and paper presentation)
    Abstract: Linked Data has become a popular term and method of how to expose structured data on the Web. There currently are two schools of thought when it comes to defining what Linked Data actually is, with one school of thought defining it more narrowly as a set of principles describing how to publish data based on Semantic Web technologies, whereas the other school more generally defines it as any form of properly linked data that follows the Representational State Transfer (REST) architectural style of the Web. In this paper, we describe and compare these two schools of thought with a particular emphasis on how well they support principles of service orientation.
  • Dominique Guinard, Vlad Trifa and Erik Wilde, A Resource Oriented Architecture for the Web of Things, Second International Conference on the Internet of Things (IoT 2010), Tokyo, Japan, November/December 2010. (available as abstract and PDF)
    Abstract: Many efforts are centered around networking smart things from the physical world (e.g. RFID, wireless sensor and actuator networks, embedded devices) on a larger scale. Rather than exposing real-world data and functionality through proprietary and tightly-coupled systems we propose to make them an integral part of the Web. As a result, smart things become easier to build upon. Popular Web languages (e.g. HTML, URI, JavaScript, PHP) can be used to build applications involving smart things and users can leverage well-known Web mechanisms (e.g. browsing, searching, bookmarking, caching, linking) to interact with and share things. In this paper, we begin by describing a Web of Things architecture and best practices rooted in the RESTful principles that contributed to the popular success, scalability, and evolvability of the traditional Web. We then discuss several prototypes implemented using these principles to connect environmental sensor nodes and an energy monitoring system to the World Wide Web. We finally show how Web-enabled things can be used in lightweight ad-hoc applications called physical mashups.
  • Yiming Liu and Erik Wilde, Scalable and Mashable Location-Oriented Web Services, 10th International Conference on Web Engineering (ICWE 2010), Vienna, Austria, July 2010. (available as abstract, PDF, and paper presentation)
    Abstract: Web-based access to services increasingly moves to location-oriented scenarios, with either the client being mobile and requesting relevant information for the current location, or with a mobile or stationary client accessing a service which provides access to location-based information. The Web currently has no specific support for this kind of service pattern, and many scenarios use proprietary solutions which result in vertical designs with little possibility to share and mix information across various services. This paper describes an architecture for providing access to location-oriented services which is based on the principles of Representational State Transfer (REST) and uses a tiling scheme to allow clients to uniformly access location-oriented services. Based on these Tiled Feeds, lightweight access to location-oriented services can be implemented in a uniform and scalable way, and by using feeds, established patterns of information aggregation, filtering, and republishing can be easily applied.
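The tiling idea in the abstract above can be illustrated with a small sketch. The tile arithmetic below is the standard Web Mercator scheme; the feed URI template is a made-up illustration, not the paper's actual scheme.

```python
import math

def tile_for(lat, lon, zoom):
    """Map a WGS84 coordinate to a discrete (x, y) tile at a zoom level,
    using the common Web Mercator tiling scheme."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiled_feed_uri(base, lat, lon, zoom):
    """Construct a feed URI for the tile covering a location; the URI
    template is hypothetical, only meant to show the uniform addressing."""
    x, y = tile_for(lat, lon, zoom)
    return f"{base}/tiles/{zoom}/{x}/{y}/feed.atom"

# Vienna, the conference location, at zoom level 12:
print(tiled_feed_uri("http://example.org", 48.2082, 16.3738, 12))
# → http://example.org/tiles/12/2234/1420/feed.atom
```

Because every client derives the same tile coordinates from a location, feeds for a tile can be cached and aggregated like any other Web feed.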
  • Erik Wilde and Alexandros Marinos, Feed Querying as a Proxy for Querying the Web, Eighth International Conference on Flexible Query Answering Systems (FQAS 2009), Roskilde, Denmark, October 2009. (available as abstract and PDF)
    Abstract: Managing information, access to information, and updates to relevant information on the Web has become a challenging task because of the volume and the variety of information sources and services available on the Web. This problem will only grow because of the increasing number of potential information resources, and the increasing number of services which could be driven by machine-friendly access to these resources. In this paper, we propose to use the established and simple metamodel of feeds as a proxy for information resources on the Web, and to use feed-based methods for producing, aggregating, querying, and publishing information about resources on the Web. We propose an architecture that is flexible and scalable and uses well-established RESTful methods of loose coupling. By using such an architecture, mashups and the repurposing of Web services is encouraged, and the simplicity of the underlying metamodel places no undue restrictions on the possible application areas.
  • Erik Wilde and Anuradha Roy, Web Site Metadata, 9th International Conference on Web Engineering (ICWE 2009), San Sebastián, Spain, June 2009. (available as abstract, PDF, and paper presentation)
    Abstract: Understanding the availability of site metadata on the Web is a foundation for any system or application that wants to work with the pages published by Web sites, and also wants to understand a Web site's structure. There is little information available about how much information Web sites make available about themselves, and this paper presents data addressing this question. Based on this analysis of available Web site metadata, it is easier for Web-oriented applications to be based on statistical analysis rather than assumptions when relying on Web site metadata. Our study of robots.txt files and sitemaps can be used as a starting point for Web-oriented applications wishing to work with Web site metadata.
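As a small illustration of the kind of site metadata the study surveys, Python's standard library can already parse robots.txt rules; the file content below is invented for the example.

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt of the kind the study surveys: it restricts part
# of the site for all crawlers and advertises a sitemap.
robots_txt = """\
User-agent: *
Disallow: /private/
Sitemap: http://example.org/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "http://example.org/public/page.html"))   # True
print(rp.can_fetch("*", "http://example.org/private/page.html"))  # False
print(rp.site_maps())  # ['http://example.org/sitemap.xml']
```

A crawler-side study like the one described fetches such files from many sites and then analyzes how much structural information they actually expose.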
  • Cesare Pautasso and Erik Wilde, Why is the Web Loosely Coupled? A Multi-Faceted Metric for Service Design, 18th International World Wide Web Conference (WWW2009), Madrid, Spain, April 2009. (available as abstract, PDF, and paper presentation)
    Abstract: Loose coupling is often quoted as a desirable property of systems architectures. One of the main goals of building systems using Web technologies is to achieve loose coupling. However, given the lack of a widely accepted definition of this term, it becomes hard to use coupling as a criterion to evaluate alternative Web technology choices, as all options may exhibit, and claim to provide, some kind of loose coupling effects. This paper presents a systematic study of the degree of coupling found in service-oriented systems based on a multi-faceted approach. Thanks to the metric introduced in this paper, coupling is no longer a one-dimensional concept with loose coupling found somewhere in between tight coupling and no coupling. The paper shows how the metric can be applied to real-world examples in order to support and improve the design process of service-oriented systems.
  • Erik Wilde and Martin Gaedke, Web Engineering Revisited, 2008 British Computer Society (BCS) Conference on Visions of Computer Science, London, UK, September 2008. (available as abstract and PDF)
    Abstract: We propose Web Engineering 2.0, which no longer focuses on how to engineer for the Web, but on how to engineer the Web. Web Engineering has become one of the core disciplines for building Web-oriented applications. This paper proposes to reposition Web engineering to be more specific to what the Web is, by which we mean not only an interface technology, but an information system, into which Web-oriented applications have to be embedded. More traditional Web applications often are just user interfaces to data silos, whereas the last years have shown that well-designed Web-oriented applications can essentially start with no data, and derive all their value from being open and attracting users on a large scale. Such an approach to Web engineering not only leads to a more disciplined way of engineering the Web, it also allows computer science to better integrate the special properties of the Web, most importantly the loosely coupled nature of the Web, and the importance of the social systems driving the Web.
  • Erik Wilde and Yiming Liu, Lightweight Linked Data, 2008 IEEE International Conference on Information Reuse and Integration (IRI 2008), Las Vegas, Nevada, July 2008. (available as abstract and PDF)
    Abstract: Much of the Web's success rests with its role in enabling information reuse and integration across various boundaries. Hyperlinked Web resources represent a rich information tapestry of content and context, instrumental in effective knowledge sharing and further knowledge development. However, the Web's simple linking model has become increasingly inadequate for effective content discovery and reuse. At the same time, rigorous but heavyweight solutions such as the Semantic Web have yet to garner critical mass in adoption. This paper analyzes the relative strengths and shortcomings of existing linked data approaches. It proposes a novel, lightweight architecture for the modeling, aggregation, retrieval, management, and sharing of contextual information for Web resources, based on established standards and designed to encourage more efficient and robust information reuse on the Web.
  • Eric C. Kansa and Erik Wilde, Tourism, Peer Production, and Location-Based Service Design, 2008 IEEE International Conference on Services Computing (SCC 2008), Honolulu, Hawaii, July 2008. (available as abstract and PDF)
    Abstract: This paper describes characteristics of information and service design by exploring the needs and motivations of tourists. Tourists are expected to be important and demanding users of location-based services. They will need customized means to filter their experience of destinations, as well as ways to meaningfully participate in the creation of narratives and histories about different places. Mobile technologies will also allow tourists to be more discriminating in their patronage of different service offerings, especially as they gain greater knowledge of so-called backstage processes. These demanding needs will require choreography between services offered by many different commercial, cultural, educational, and community providers. The paper suggests approaches to deliver tourist location-based services based on low-barrier-of-entry principles of Web architecture. The paper concludes with a discussion on how the erosion of backstage/front-stage distinctions in service systems impacts service innovation.
  • Erik Wilde, Philippe Cattin and Felix Michel, Web-Based Presentations, Berliner XML Tage 2007 (BXML 2007), Berlin, Germany, September 2007. (available as abstract and PDF)
    Abstract: The management and publishing of complex presentations is poorly supported by available presentation software. This makes it hard to publish usable and accessible presentation material, and to reuse that material for continuously evolving events. XSLidy provides an XSLT-based approach to generate presentations out of a mix of general-purpose HTML and a small number of presentation-specific structural elements. Using XSLidy, the management and reuse of complex presentations becomes easier, and the results are more user-friendly in terms of usability and accessibility.
  • Erik Wilde, Declarative Web 2.0, 2007 IEEE International Conference on Information Reuse and Integration (IRI 2007), Las Vegas, Nevada, August 2007. (available as abstract and PDF)
    Abstract: Web 2.0 applications have become popular as drivers of new types of Web content, but they have also introduced a new level of interface design in Web development; they are focusing on richer interfaces, user-generated content, and better interworking of Web-based applications. The current foundations of the Web 2.0, however, are strictly imperative in nature, which makes it difficult to develop applications which are robust, interoperable, and backwards compatible. Using a declarative approach for Web 2.0 applications, this new wave of applications can be built on a more robust foundation which is more in line with the Web's style of using declarative methods whenever possible. We show how today's imperative Web 2.0 applications can be regarded as a testbed as well as a first implementation for a revised version of Web 2.0 technologies, which will be based on declarative markup rather than imperative code.
  • Erik Wilde, What are you talking about?, 2007 IEEE International Conference on Services Computing (SCC 2007), Salt Lake City, Utah, July 2007. (available as abstract and PDF)
    Abstract: While services are widely regarded as an important new concept in IT architecture, so far there is no consolidated understanding of the exact meaning of the term "service orientation". While many problems are simply consequences of specific technical decisions, other areas are more fundamental and lead to different perspectives and eventually implementations of service oriented systems. We argue that the current emphasis of service orientation as a collection of interface descriptions misses the critical point of services, which is that they revolve around resources. With a more resource-centered approach, the investment into a service oriented architecture can be made much more promising, because the resource-centered approach is better suited for the design of loosely coupled systems than the current interface-based approach.
  • Felix Michel and Erik Wilde, Data Model Perspectives for XML Schema, XTech 2007, Paris, France, May 2007. (available as abstract and presentation PDF)
    Abstract: The family of upcoming XML technologies, consisting of XPath 2.0, XSLT 2.0, and XQuery, no longer operates only on the Infoset, but also utilizes schema information. Today, this schema information is added to the Infoset during schema-validation and commonly is referred to as PSVI contributions (PSVI for "Post-Schema-Validation Infoset"). Utilizing schema information is promising, for XML Schema makes it possible to describe relationships between structures in an expressive, semantically relevant way, e.g. through type derivation and substitution groups. This structural information can become valuable metadata when processing instances that comply with the respective schema. However, only a small fraction of this schema information is accessible with the aforementioned technologies. There are various reasons for this: Some schema information such as where wildcards can occur is not exposed at all, and other components (e.g. types) are only represented by QNames, lacking any possibility to further navigate the schema information. Secondly, the PSVI specification remains vague with respect to the data model. And finally, the present data model of XML Schema is not appropriate for some application contexts. The existence of differing data models for XML Schema (e.g. in programming APIs for XML Schema) is evidence for the fact that the abstract data model as defined in the recommendation does not rule out the need for other data model perspectives. In fact, the abstract data model and its incarnations (namely the normative XML syntax) may be good for defining schemas, but it proves to be less appropriate for exploiting the structural information. Features that are convenient for definition (such as named groups and nested model groups) turn out to be problematic for retrieval and navigation, the most important ways of using the structural information.
We propose an alternative data model perspective that represents the schema information in a way that meets the needs of certain classes of applications better. These applications have in common read-only access to schema information, an instance-driven perspective, the need for schema inspection at runtime, and possibly only a local scope. Our data model uses what we call "occurrences" instead of the "particles" in the normative abstract data model, and it expands what we (deliberately) consider to be notational shorthands (like occurrence constraints and named groups). Furthermore, we index all occurrences (even of the same element), as it is done in "marked expressions" in regular language theory. The structural information is no longer captured by model groups, but by a set of potential next occurrences. This is based on the idea of Brzozowski derivatives and again inspired by the anticipated needs of instance-oriented applications. We present a prototype implementation which is purely based on standard technologies. It is implemented as an XSLT 2.0 function library that reads schemas in the normative XML syntax, constructs the data model from this information, and provides various functions for accessing, navigating, and exploiting the schema information. We show that such functionality is highly beneficial, making applications more powerful, resilient, and easier to develop.
  • Erik Wilde, Structuring Content with XML, 10th International Conference on Electronic Publishing (ELPUB 2006), Bansko, Bulgaria, June 2006. (available as abstract and PDF)
    Abstract: XML as the most successful data representation format makes it easy to start working with structured data because of the simplicity of XML documents and DTDs, and because of the general availability of tools. This paper first describes the origin and features of XML as a markup language. In a second part, the question of how to use the features provided by XML for structuring content is addressed. Data modeling for electronic publishing and document engineering is a research field with many open issues, the most important open question being what to use as the modeling language for XML-based applications. While the paper does not provide a solution to the modeling language question, it provides guidelines for how to design schemas once the model has been defined.
  • Erik Wilde, Sai Anand, Thierry Bücheler, Nick Nabholz and Petra Zimmermann, Bibliographies as Shared Resources, Web Based Communities 2006 Conference (WBC 2006), San Sebastián, Spain, February 2006. (available as abstract and PDF)
    Abstract: In many research settings, bibliographies are a central resource for collecting information about related work, keeping track of one's own research record, and annotating this information with remarks. By its very nature, this information should be shared between researchers within a research group and maybe in larger organizational units (for example research institutes) as well. However, most tools used for managing bibliographic data do not support collaboration. Using ShaRef, users can share bibliographic information, collaborate, and publish and export data using a variety of output channels. ShaRef's goal is to make sharing of and collaboration with bibliographic information easier than it is today.
  • Erik Wilde, Augmenting XHTML for Help and Documentation, International Conference on Intelligent Agents, Web Technology and Internet Commerce (IAWTIC 2005), Vienna, Austria, November 2005. (available as abstract and PDF)
    Abstract: Providing users with help and other documentation is essential for any software targeted at end users. Authoring help and documentation in a platform-independent way is hard, because different help systems have different conventions for structuring and organizing the documents. The Help System Generator (HSG) presented in this paper provides an easy and platform-independent way of preparing and publishing help and documentation. Using HSG, software creators can easily author, reuse, and publish help and documentation for different platforms.
  • Erik Wilde and Nick Nabholz, Access Control for Shared Resources, International Conference on Intelligent Agents, Web Technology and Internet Commerce (IAWTIC 2005), Vienna, Austria, November 2005. (available as abstract and PDF)
    Abstract: Access control for shared resources is a complex and challenging task, in particular if the access control policy should be able to cope with different kinds of sharing and collaboration. The reason for this is that traditional access control systems often depend on administrators to set up the foundations of the access control mechanism, in most cases users and their group memberships. The access control model presented in this paper approaches this problem by supporting two different kinds of groups: named groups and resource-based groups. Using the implementation of this model in our application makes it possible to support a wide variety of sharing and collaboration types between the application's users.
  • Erik Wilde, Sai Anand and Petra Zimmermann, Management and Sharing of Bibliographies, 9th European Conference on Research and Advanced Technology for Digital Libraries (ECDL 2005), Vienna, Austria, September 2005. (available as abstract and PDF)
    Abstract: Managing bibliographic data is a requirement for many researchers, and in the group setting within which the majority of research takes place, the managing and sharing of bibliographic data is an important facet of organizing the research work. Managing and sharing bibliographies has to balance different levels of shared access (public catalogs, closed research group bibliographies, and personal bibliographies), and the sharing platform should integrate as seamlessly as possible into diverse environments in terms of operating systems, document processing, and other information management tools. The ShaRef system presented in this paper has been designed to fill the gap between public libraries and personal bibliographies, and provides an open platform for sharing bibliographic data among user groups. Through its simple and flexible data model and system architecture, ShaRef adapts to many settings and requirements, and can be used to increase collaboration and information flow within groups.
  • Erik Wilde, Towards Conceptual Modeling for XML, Berliner XML Tage 2005 (BXML 2005), Berlin, Germany, September 2005. (available as abstract and PDF, and paper presentation)
    Abstract: Today, XML is primarily regarded as a syntax for exchanging structured data, and therefore the question of how to develop well-designed XML models has not been studied extensively. As applications are increasingly penetrated by XML technologies, and because query and programming languages provide native XML support, it would be beneficial to use these features to work with well-designed XML models. In order to better focus on XML-oriented technologies in systems engineering and programming languages, an XML modeling language should be used, which is more focused on modeling and structure than typical XML schema languages. In this paper, we examine the current state of the art in XML schema languages and XML modeling, and present a list of requirements for an XML conceptual modeling language.
  • Erik Wilde and Marcel Baschnagel, Fragment Identifiers for Plain Text Files, Sixteenth ACM Conference on Hypertext and Hypermedia (HT 2005), Salzburg, Austria, September 2005. (available as abstract and PDF)
    Abstract: Hypermedia systems like the Web heavily depend on their ability to link resources. One of the key features of the Web's URIs is their ability to not only specify a resource, but to also identify a subresource within that resource, by using a fragment identifier. Fragment identification enables users to create better hypermedia. We present a proposal for fragment identifiers for plain text files, which makes it possible to identify character or line ranges, or subresources identified by regular expressions. Using these fragment identifiers, it is possible to create more specific hyperlinks, by not only linking to a complete plain text resource, but to only the relevant part of it. Along with this proposal, a prototype implementation is described which can be used both as a server-side testbed and as a client-side extension for the Firefox browser.
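A minimal sketch of how a client might resolve such a line-range fragment: the zero-based, half-open line= syntax follows the style of RFC 5147 (which later standardized fragment identifiers for text/plain), and the helper name is ours, not the prototype's.

```python
def resolve_line_fragment(text, fragment):
    """Resolve a fragment like 'line=10,12' against a plain text resource.
    Line positions are counted from zero and the range is half-open,
    in the style of the line= scheme of RFC 5147."""
    scheme, _, value = fragment.partition("=")
    if scheme != "line":
        raise ValueError("only the line= scheme is sketched here")
    start, _, end = value.partition(",")
    lines = text.splitlines(keepends=True)
    return "".join(lines[int(start):int(end)])

doc = "".join(f"line {i}\n" for i in range(30))
print(resolve_line_fragment(doc, "line=10,12"))  # prints lines 10 and 11
```

A browser extension like the one described would apply such a resolver after retrieval, since fragment identifiers are interpreted on the client side.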
  • Erik Wilde, Semantically Extensible Schemas for Web Service Evolution, European Conference on Web Services (ECOWS'04), Erfurt, Germany, September 2004. (available as abstract, PDF, and paper presentation)
    Abstract: Web Services are designed for loosely coupled systems, which means that in many cases it is not possible to synchronously upgrade all peers of a Web Service scenario. Instead, Web Service peers should be able to coexist in different versions. Additionally, older software versions often could benefit from upgrades to the service if they were able to understand it. This paper presents a framework for semantically extensible schemas for Web Service evolution. The core idea is to use declarative semantics to describe extensions to a service's vocabulary. These declarative semantics can be used by older software versions to understand the semantics of extensions, thus enabling older software to dynamically adapt to newer versions of the service. As long as declarative semantics are sufficient, older software can benefit from the service's extension.
  • Erik Wilde, Protecting Legacy Applications from Unicode, International Conference on E-Business and Telecommunication Networks (ICETE 2004), Setúbal, Portugal, August 2004. (available as abstract, PDF, and paper presentation)
    Abstract: While XML-based Web Service architectures are successfully turning the Web into an infrastructure for cooperating applications, not all interoperability problems have been solved yet. XML-based data exchange has the ability to carry the full Unicode character repertoire, which is approaching 100'000 characters. Many legacy applications are being Web-Service-enabled rather than being re-built from scratch, and therefore still have the same limitations. A frequently seen limitation is the inability to handle the full Unicode character repertoire. We describe an architectural approach and a schema language to address this issue. The architectural approach proposes to establish validation as basic Web Service functionality, which should be built into a Web Services architecture rather than applications. Based on this vision of modular, infrastructure-based validation, we propose a schema language for character repertoire validation. Lessons learned from the first implementation and possible improvements of the schema language conclude the paper.
  • Erik Wilde and Jacqueline Schwerzmann, When Business Models Go Bad: The Music Industry's Future, International Conference on E-Business and Telecommunication Networks (ICETE 2004), Setúbal, Portugal, August 2004. (available as abstract, PDF, and paper presentation)
    Abstract: The music industry is an interesting example for how business models from the pre-Internet era can get into trouble in the new Internet-based economy. Since 2000, the music industry has suffered declining sales, and very often this is attributed to the advent of Internet-based peer-to-peer file sharing programs. We argue that this explanation is only one of several possible explanations, and that the general decrease in the economic indicators is a more reasonable way to explain the declining sales. Whatever the reason for the declining sales may be, the question remains what the music industry could and should do to stop the decline in revenue. The current strategy of the music industry is centered around protecting their traditional business model through technical measures and in parallel working towards legally protecting the technical measures. It remains to be seen whether this approach is successful, and whether the resulting landscape of tightly controlled digital content distribution is technically feasible and accepted by the consumers. We argue that the search for new business models is the better way to go, even though it may take some time and effort to identify these business models.
  • Mario Jeckle and Erik Wilde, Identical Principles, Higher Layers: Modeling Web Services as Protocol Stack, XML Europe 2004, Amsterdam, April 2004. (available as abstract, PDF, and HTML)
    Abstract: Web Services and their potential applications are currently under heavy discussion in industry, research, and standardization. As a result of evaluation and experience by early adopters, the technology is expected to mature through the advent of new standards and solutions leveraging Web Service's power. In essence, the efforts undertaken to create and complete a stack of Web Service protocols lead to a new communication architecture and extend the stack of classical network protocols. This evolving architecture could serve as a future-proof infrastructure for businesses to rely on. However, the growth of the Web Service stack with respect to the addition of new layers and expansion of the resulting infrastructure has not been studied in comparison with well-established protocol suites like the ISO/OSI stack or the set of protocols constituting the Internet. Strictly speaking, industry's demand for functionality and services enhancing the basic Web Service protocols such as XML-RPC or SOAP, leads to the creation of a full-fledged layered protocol suite on top of the existing ones. Nevertheless, the various standards, specifications, and ideas have neither been consolidated on a common terminological basis, nor been integrated in a single framework of reference. This observation also applies to the established trio of Web Service standards composed of SOAP, WSDL, and UDDI. According to the specific usage patterns of these specifications, they are not operating on one layer as the well-known triangular relationship graph suggests, but instead they are connected by means of unidirectional usage dependencies. From this point of view, the message patterns (MP) defined by WSDL 2.0 offer services to layers organized on top of WSDL which rely on the service interfaces exposed by SOAP. More precisely, not the interface definition with WSDL but the accompanying MPs act as the transport layer of the service stack. 
Based on this and other criteria, SOAP can be categorized as the basic low-level layer of the Web Service infrastructure corresponding to the network-dependent layers of the classical protocol suites. Based on these facts, all of the various efforts relying on the seminal Web Service protocols can be categorized at the various levels layered above the transport layer. This is especially true for specifications dealing with the management of sessions and transactions which are layered directly above the MPs. Also, security standards like XML digital signatures and XML encryption fit well into this by classifying them as part of the presentation layer. Furthermore, within the Web Service environment, application layer mechanisms (e.g. firewalls for content filtering) emerge that are quite analogous to those commonly known from classical network operation. Taking this congruency between the established protocol stacks and the Web Service stack one step further, the analogy may serve as a valuable framework for the comparison of different architectural styles in Web Service deployment. Taking the continuing debate weighing services based on representational state transfer (REST) against those based on RPC-style SOAP as an example, both approaches reveal themselves as heterogeneous protocols; the two ideas are neither mutually exclusive nor conflicting. Both protocols can be made interoperable by the use of bridges or gateways arbitrating between the two parties. Our analysis shows that Web Services are a true but as yet incomplete protocol suite, deploying classical Internet protocols as basic services and growing through the continued addition of supplemental specifications and standards.
  • Erik Wilde, Towards Federated Referatories, SINN03 Conference on Worldwide Coherent Workforce and Satisfied Users, Oldenburg, Germany, September 2003. (available as abstract, PDF, and paper presentation)
    Abstract: Metadata usage often depends on schemas for metadata, which are important to convey the meaning of the metadata. We propose an architecture where users can extend the schema used by a system for managing referential metadata. Users can plug in new schemas and install custom filters for exporting metadata, so that users are not forced to limit their metadata to a fixed schema. The goal of this architecture is to provide users with a system that helps them manage their referatory, provides them with powerful tools to adapt the system to their metadata, and still makes it possible to collect the metadata of several users in a central storage and exploit the common facets of the metadata. Our system is based on a specialized schema language, which has been built on top of the XML schema languages XML Schema and Schematron.
  • Erik Wilde, Validation of Character Repertoires for XML Documents, Twenty-fourth Internationalization and Unicode Conference (IUC24), Atlanta, Georgia, September 2003. (available as abstract, PDF, and paper presentation)
    Abstract: XML is based on Unicode, and therefore XML documents may use the full Unicode character repertoire. However, XML-based applications often use XML interfaces to legacy software which in many cases is not capable of dealing with the full Unicode character repertoire. We therefore propose a schema language for XML which is capable of limiting the character repertoire of XML documents. This schema language, called Character Repertoire Validation for XML (CRVX), has features to permit or disallow character repertoire subsets from certain parts of an XML document, for example only for element and attribute names. CRVX uses information from the Unicode Character Database (UCD) to make character repertoire specification as easy as possible. CRVX is not intended to be the only schema language in an XML application scenario, but it provides useful additional schema-based validation to protect applications from unsupported characters. XML applications typically combine different schema languages before processing XML documents, and CRVX is intended to complement other schema languages such as grammar-based languages (DTD, XML Schema) or rule-based languages (Schematron). CRVX can be implemented in various ways. One simple solution is to use XSLT to transform a CRVX schema into an XSLT program, which is then used to validate XML documents. We briefly describe such an implementation. Other (and more efficient) implementations could be based on DOM or SAX parsers.
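CRVX syntax itself is not reproduced here; the sketch below only illustrates the kind of character repertoire check such a schema expresses declaratively. The repertoire choice and the helper name are made up for the example.

```python
import unicodedata
import xml.etree.ElementTree as ET

def check_repertoire(xml_text, allowed):
    """Walk an XML document and report characters in text content that
    fall outside an allowed repertoire; a toy stand-in for the kind of
    validation CRVX expresses declaratively."""
    violations = []
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        for ch in (elem.text or ""):
            if not allowed(ch):
                violations.append((elem.tag, ch, unicodedata.name(ch, "?")))
    return violations

# Example repertoire: Latin-1 only (code points below U+0100).
latin1 = lambda ch: ord(ch) < 0x100
doc = "<doc><a>plain ascii</a><b>caf\u00e9 \u2603</b></doc>"
print(check_repertoire(doc, latin1))  # [('b', '☃', 'SNOWMAN')]
```

A real CRVX processor would also cover element and attribute names, attribute values, and UCD-based repertoire specifications rather than a hard-coded predicate.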
  • Erik Wilde and Kilian Stillhard, A Compact XML Schema Syntax, XML Europe 2003, London, UK, May 2003. (available as abstract, HTML, and paper presentation)
    Abstract: The new schema language defined by the W3C, XML Schema, is used in a number of applications, such as Web Services and XQuery, and will probably be used by an increasing number of users in the near future. Currently, XML Schema's data model, the "XML Schema Components", can only be represented in the rather verbose XML syntax defined in the XML Schema specification itself. We propose an alternative non-XML syntax, which is (1) much more compact than the XML syntax, (2) defined by EBNF productions, (3) re-uses well-known syntactic concepts where appropriate, and (4) is easy to implement using standard parser-generating tools. Our approach is comparable to the approach of the RELAX NG schema language, which also supports two alternative syntaxes, an XML-based one, and a more compact non-XML one. We believe that XML Schema could be made easier to use by supporting a compact syntax. Currently, complex schemas are very hard to read due to the large amount of XML markup, and the various tools and GUIs that are on the market differ widely and in all cases support only a subset of the features of XML Schema. We believe that there should be a compact syntax, optimized for human users, which makes it easy to read and write XML Schemas, and which supports the full feature set of XML Schema. Obviously, a non-XML syntax makes it necessary to introduce new tools. However, generating parsers from EBNF productions is rather simple and well-supported by standard tools (such as yacc and JavaCC), and the other direction (i.e., generating non-XML syntax) can be implemented by using XML tools. Our XML Schema Compact Syntax (XSCS) is geared towards human users, by re-using language constructs known from other application areas, such as DTDs and programming languages, and making them available for XML Schema component representation. 
    Examples for this re-use of syntactic constructs are DTD-style content models, number ranges ("[a,b]" or "(a,b]" as in standard mathematical notation), and qualifying attributes like "abstract" or "final" known from programming languages ("final abstract type { ... }"). We also believe that graphical representations of complex structures such as schemas are not always suitable because some people prefer textual representations, editing might be faster when using keyboard input instead of point-and-click operations, and graphical representations (usually) hide some information. We fully integrate the processing of our syntax into the existing pipeline of XML-based tools by creating a parser that generates SAX events or DOM trees from the compact syntax documents. This way, we can use the existing XML Schema validation engines and XML Schema error checking facilities already implemented in validation engines like the Xerces parser. In addition, we have a serialization module to generate compact syntax documents from XML Schema DOM trees. Our overall goal is to improve XML Schema acceptance by providing a syntax that is easier to work with than the XML syntax, and tools to process this syntax.
  • Erik Wilde, Making the Infoset Extensible, XML 2002, Baltimore, Maryland, December 2002. (available as abstract, PDF, HTML, and paper presentation)
    Abstract: The XML Infoset defines the data model of XML, and it is used by a number of other specifications, such as XML Schema, XPath, DOM, and SAX. Currently, the Infoset defines a fixed number of Information Items and their Properties, and the only widely accepted extensions of the Infoset are the Post Schema Validation Infoset (PSVI) contributions of XML Schema. XML Schema demonstrates that extending the Infoset can be very useful, and the PSVI contributions of XML Schema are being used by XPath 2.0 to access type information in a document's Infoset. In this paper, we present an approach to making the Infoset generically extensible by using the well-known Namespace mechanism. Using Namespaces, it is possible to define sets of additional Information Items and Properties which extend the core Infoset (or other Infoset extensions, defining a possibly multi-level hierarchy of Infoset extensions). Basically, a Namespace for an Infoset extension contains a number of Information Items, which may have any number of Properties. It is also possible to define an Infoset extension containing only Properties, extending the Information Items of other Infosets. Further elaborating on this method, many of the XML technologies currently using the Infoset could be extended to support Infoset extensions by importing them using the extension's Namespace name. To illustrate these concepts, we give an example by defining the XML Linking Language (XLink), the XML vocabulary for hyperlinking information, in terms of Infoset extensions. We show how the proposed ways of supporting Infoset extensions in XML technologies such as XPath, DOM, and CSS could pave the way to better support (and hopefully faster adoption) of XLink than we see today. XLink serves as one example, but the proposed extensions and techniques are not limited to this particular technology.
    The content of this paper is work in progress, contributing to the ongoing debate on how to deal with different XML vocabularies and their usage in other XML technologies. We believe that making the Infoset extensible would provide a robust and flexible way of making the data model of XML-based data more versatile, and creating an accepted way of making the data available through standard interfaces such as DOM and XPath.
  • David Lowe and Erik Wilde, Improving Web Linking Using XLink, Open Publish 2001, Sydney, July 2001. (available as abstract, and PDF)
    Abstract: Although the Web has continuously grown and evolved since its introduction in 1989, the technical foundations have remained relatively unchanged. Of the basic technologies, URLs and HTTP have remained stable for some time now, and only HTML has changed more frequently. However, the introduction of XML has heralded a substantial change in the way in which content can be managed. One of the most significant of these changes is the greatly enhanced model for linking functionality that is enabled by the emerging XLink and XPointer standards. These standards have the capacity to fundamentally change the way in which we utilise the Web, especially with respect to the way in which users interact with information. In this paper, we will discuss some of the richer linking functionality that XLink and XPointer enable — particularly with respect to aspects such as content transclusion, multiple source and destination links, generic linking, and the use of linkbases to add links into content over which the author has no control. The discussions will be illustrated with example XLink code fragments, and will emphasise the particular uses to which these linking concepts can be put.
  • Erik Wilde and David Lowe, From Content-centered Publishing to a Link-based View of Information Resources, 33rd Hawaii International Conference on System Sciences (HICSS-33), Maui, Hawaii, January 2000. (available as abstract, PostScript, and PDF)
    Abstract: Influenced by the linking model which is implicit in HTML, today's publishing model on the Web is content-centered, with the emphasis of publishing on content rather than links. With the growing amount of information available on the Web, and the more powerful hypermedia architectures made possible by new Web technologies, putting the content into context will become increasingly important. In this paper, a new way of structuring publishing systems for information providers is presented in an attempt to shift the emphasis in Web-based publishing from content to an improved balance between content and links. After a description of the architecture of a link-based publishing system, a strategy for implementing such a system is described. Finally, a number of challenges associated with such a fundamental transition in the publishing model are described, in the technical as well as in the organizational domain.
  • Erik Wilde, Murali Nanduri and Bernhard Plattner, A Transport-Independent Component for a Group and Session Management Service in Group Communications Platforms, European Conference on Multimedia Applications, Services and Techniques (ECMAST 96), Louvain-la-Neuve, Belgium, May 1996. (available as abstract, PostScript, and PDF)
    Abstract: Group communications is an area of research which has received a lot of attention recently. This paper focuses on a model and the architecture of a system which supports group communications by providing group and session management functionality. This system is an extension of directory services which are used with unicast communications. New functionality is needed for the dynamics of group communications (members of a connection may change over the lifetime of the connection) and the increased complexity of relations. A model is described which defines six object types which represent the relevant objects. Users and groups represent real world users and their relations. Sessions and flows describe ongoing group communications. Flow templates and certificates provide mechanisms for management and security issues. The architecture presented in this paper is transport-independent, i.e., it can be used within different group communication platforms. A short sketch of the implementation is given in the last section.
  • Erik Wilde, Group Management and Communication Support for Collaborative Applications, Conference on Upper Layer Protocols, Architectures and Applications (ULPAA 95), Sydney, December 1995. (available as abstract, PostScript, and PDF)
    Abstract: In this paper, an architecture for communication support for collaborative applications is described. The motivation for the design of this architecture is the observation that generic support for group communications is an area that has received little attention so far. The design is based on two components, a Group Management System (GMS) and Group Communication Support (GCS). The GMS is responsible for managing the name space of the support platform. Users and groups are the two entities of the name space, and two different relationships between them (membership and manager) can be established. This way it is possible to reflect the structure of collaborative workers inside the GMS. The GCS component is responsible for establishing connections between collaborative applications using the GMS/GCS and for hiding the details of the multicast transport infrastructure from the application. It is possible to bind users and groups to specific applications and multicast transport services. This way any group can be used by different applications using different transport services. The main advantages of GMS/GCS are reduced implementation costs, a shared name space of users and groups, and a simple interface to different multicast transport services.

Reviewed Conference Posters

  • Jöran Beel, Bela Gipp, Stefan Langer, Marcel Genzmehr, Erik Wilde, and Jim Pitman, Introducing Mr. DLib, the Machine-Readable Digital Library, Eleventh ACM/IEEE Joint Conference on Digital Libraries (JCDL 2011), Ottawa, Canada, June 2011. (available as abstract and PDF)
    Abstract: In this demo paper we present Mr. DLib, a machine-readable digital library. Mr. DLib provides access to several million articles in full text and their metadata in XML and JSON format via a RESTful Web Service. In addition, Mr. DLib provides related documents for given academic articles. The service is intended to serve researchers who need bibliographic data and the full text of scholarly literature for their analyses (e.g. impact and trend analysis); providers of academic services who need additional information to enhance their own services (e.g. literature recommendations); and providers who want to build their own services based on data from Mr. DLib.
  • Yiming Liu, Rui Yang and Erik Wilde, Open and Decentralized Access across Location-Based Services, 20th International World Wide Web Conference (WWW2011), Hyderabad, India, March 2011. (available as abstract and PDF)
    Abstract: Users now interact with multiple Location-Based Services (LBS) through a myriad of location-aware devices and interfaces. However, current LBS tend to be centralized silos with ad-hoc APIs, which limits potential for information sharing and reuse. Further, LBS subscriptions and user experiences are not easily portable across devices. We propose a general architecture for providing open and decentralized access to LBS, based on Tiled Feeds — a RESTful protocol for access and interactions with LBS using feeds, and Feed Subscription Management (FSM) — a generalized feed-based service management protocol. We describe two client designs, and demonstrate how they enable standardized access to LBS services, promote information sharing and mashup creation, and offer service management across various types of location-enabled devices.
  • Rosa Alarcòn and Erik Wilde, RESTler: Crawling RESTful Services, 19th International World Wide Web Conference (WWW2010), Raleigh, North Carolina, April 2010. (available as abstract and PDF)
    Abstract: Service descriptions allow designers to document, understand, and use services, creating new useful and complex services with aggregated business value. Unlike RPC-based services, REST characteristics require a different approach to service description. We present the Resource Linking Language (ReLL) that introduces the concepts of media types, resource types, and link types as first class citizens for a service description. A proof of concept, a crawler called RESTler that crawls RESTful services based on ReLL descriptions, is also presented.
  • Alexandros Marinos, Erik Wilde and Jiannan Lu, HTTP Database Connector (HDBC): RESTful Access to Relational Databases, 19th International World Wide Web Conference (WWW2010), Raleigh, North Carolina, April 2010. (available as abstract and PDF)
    Abstract: Relational databases hold a vast quantity of information and making them accessible to the Web is a big challenge. There is a need to make these databases accessible with as little difficulty as possible, opening them up to the power and serendipity of the Web. Our work presents a series of patterns that bridge the relational database model with the architecture of the Web, along with an implementation of some of them. The aim is for relational databases to be made accessible with no intermediate steps and no extra metadata required. This approach can vastly increase the data available on the Web, thereby making the Web itself all the more powerful, while enabling its users to seamlessly perform tasks that previously required bridging multiple domains and paradigms or were not possible.
  • Alissa Cooper, Henning Schulzrinne and Erik Wilde, Challenges for the Location-Aware Web, Web Science Conference 2010 (WebSci 10), Raleigh, North Carolina, April 2010. (available as abstract and PDF)
    Abstract: The Web is on its way to becoming a location-aware information system. This transition causes some technical and policy challenges in terms of both design and coordination with existing approaches in this area. In this paper we propose that managing the transition to location-awareness (and some other aspects) requires a more strategic approach than has been taken thus far.
  • Erik Wilde, Making Sensor Data Available Using Web Feeds, 8th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN 2009), San Francisco, California, April 2009. (available as abstract and PDF)
    Abstract: The setup of and processing within sensor networks often requires sophisticated and specialized system designs and implementations, but the service provided by them should be as accessible and repurposable as possible. If the increasing number of available sensor-based data sources can be accessed in a simple and universal way, the network effect of aggregating, filtering, and republishing data from these sources will significantly increase their value. We propose an architecture where sensor-based data sources publish their data based on feeds, but extended with query capabilities. Using the well-known and widely supported Atom feed format and extending it with query capabilities allows us to lower the barrier-of-entry to sensor-based data sources, opening this data to a wider audience of clients.
  • Erik Wilde and Philippe Cattin, Presenting in HTML, ACM Symposium on Document Engineering (DocEng 2007), Winnipeg, Manitoba, August 2007. (available as abstract and PDF)
    Abstract: The management and publishing of complex presentations is poorly supported by available presentation software. This makes it hard to publish usable and accessible presentation material, and to reuse that material for continuously evolving events. XSLidy provides an XSLT-based approach to generate presentations out of a mix of HTML and structural elements. Using XSLidy, the management and reuse of complex presentations becomes easier, and the results are more user-friendly in terms of usability and accessibility.
  • Erik Wilde and Felix Michel, XML-Based XML Schema Access, 16th International World Wide Web Conference (WWW2007), Banff, Alberta, May 2007. (available as abstract and PDF)
    Abstract: XML Schema's abstract data model consists of components, which are the structures that eventually define a schema as a whole. XML Schema's XML syntax, on the other hand, is not a direct representation of the schema components, and it proves to be surprisingly hard to derive a schema's components from the XML syntax. The Schema Component XML Syntax (SCX) is a representation which attempts to map schema components as faithfully as possible to XML structures. SCX serves as the starting point for applications which need access to schema components and want to do so using standardized and widely available XML technologies.
  • Erik Wilde and Felix Michel, SPath: A Path Language for XML Schema, 16th International World Wide Web Conference (WWW2007), Banff, Alberta, May 2007. (available as abstract and PDF)
    Abstract: XML is increasingly being used as a typed data format, and therefore it becomes more important to gain access to the type system; very often this is an XML Schema. The XML Schema Path Language (SPath) presented in this paper provides access to XML Schema components by extending the well-known XPath language to also include the domain of XML Schemas. Using SPath, XML developers gain access to XML Schemas and thus can more easily develop software which is type- or schema-aware, and thus more robust.
  • Felix Michel and Erik Wilde, Extensible Schema Documentation with XSLT 2.0, 16th International World Wide Web Conference (WWW2007), Banff, Alberta, May 2007. (available as abstract and PDF)
    Abstract: XML Schema documents are defined using an XML syntax, which means that the idea of generating schema documentation through standard XML technologies is intriguing. We present X2Doc, a framework for generating schema documentation solely through XSLT. The framework uses SCX, an XML syntax for XML Schema components, as an intermediate format and produces XML-based output formats. Using a modular set of XSLT stylesheets, X2Doc is highly configurable and carefully crafted towards extensibility. This proves especially useful for composite schemas, where additional schema information such as Schematron rules is embedded into XML Schemas.
  • Erik Wilde, Modulare und Offene Komponenten zur Wissensverwaltung, 11. Europäische Jahrestagung der Gesellschaft für Medien in der Wissenschaft (GMW06), Zürich, Switzerland, September 2006. (available as abstract and PDF)
    Abstract: Knowledge transfer depends to a significant degree not only on the teaching of facts and methods, but also, indispensably, on placing them within the framework defined by the discipline. An ICT strategy for knowledge-transferring organizations should take this broad focus into account and use strategic objectives to prevent the emergence of closed, isolated solutions that are detrimental to the goal of conveying interconnected knowledge. Given suitable strategic and technical conditions, tools can today be developed on the basis of existing technologies whose modular and open design makes them ideally suited to the constantly changing ICT environment of a university. Using a tool for managing bibliographic references as an example, we explain how an open ICT strategy can be implemented in the form of technical solutions.
  • Erik Wilde, Tables and Trees Don't Mix (very well), 15th International World Wide Web Conference (WWW2006), Edinburgh, UK, May 2006. (available as abstract, PDF, and HTML)
    Abstract: There are fundamental differences between the relational model and XML's tree model. This causes problems in all cases where information from these two worlds has to be brought together. Using a few rules for mapping the incompatible aspects of the two models, it becomes easier to process data in systems which need to work with both relational and tree data. The most important requirement for a good mapping is that the conceptual model is available and can thus be used for making mapping decisions.
  • Kaspar Giger and Erik Wilde, XPath Filename Expansion in a Unix Shell, 15th International World Wide Web Conference (WWW2006), Edinburgh, UK, May 2006. (available as abstract, PDF, and HTML)
    Abstract: Locating files based on file system structure, file properties, and maybe even file contents is a core task of the user interface of operating systems. By adapting XPath's power to the environment of a Unix shell, it is possible to greatly increase the expressive power of the command line language. We present a concept for integrating an XPath view of the file system into a shell, which can be used to find files based on file attributes and contents in a very flexible way. The syntax of the command line language is backwards compatible with traditional shells, and the new XPath-based expressions can be easily mastered with a little bit of XPath knowledge.
  • Erik Wilde, Structuring Namespace Descriptions, 15th International World Wide Web Conference (WWW2006), Edinburgh, UK, May 2006. (available as abstract, PDF, and HTML)
    Abstract: Namespaces are a central building block of XML technologies today; they provide the identification mechanism for many XML-related vocabularies. Despite their ubiquity, there is no established mechanism for describing namespaces, and in particular for describing the dependencies of namespaces. We propose a simple model for describing namespaces and their dependencies. Using these descriptions, it is possible to compile directories of namespaces providing searchable and browsable namespace descriptions.
  • Erik Wilde, Merging Trees: File System and Content Integration, 15th International World Wide Web Conference (WWW2006), Edinburgh, UK, May 2006. (available as abstract, PDF, and HTML)
    Abstract: XML is the predominant format for representing structured information inside documents, but it stops at the level of files. This makes it hard to use XML-oriented tools to process information which is scattered over multiple documents within a file system. File System XML (FSX) and its content integration provide a unified view of file system structure and content. FSX's adaptors map file contents to XML, which means that any file format can be integrated with an XML view in the integrated view of the file system.
  • Erik Wilde, Describing Namespaces with GRDDL, 14th International World Wide Web Conference (WWW2005), Chiba, Japan, May 2005. (available as abstract and PDF)
    Abstract: Describing XML Namespaces is an open issue for many users of XML technologies, and even though namespaces are one of the foundations of XML, there is no generally accepted and widely used format for namespace descriptions. We present a framework for describing namespaces based on GRDDL using a controlled vocabulary. Using this framework, namespace descriptions can be easily generated, harvested and published in human- or machine-readable form.
  • Sai Anand and Erik Wilde, Mapping XML Instances, 14th International World Wide Web Conference (WWW2005), Chiba, Japan, May 2005. (available as abstract and PDF)
    Abstract: For XML-based applications in general and B2B applications in particular, mapping between differently structured XML documents, to enable exchange of data, is a basic problem. A generic solution to the problem is of interest and desirable both in an academic and practical sense. We present a case study of the problem that arises in an XML based project, which involves mapping of different XML schemas to each other. We describe our approach to solving the problem, its advantages and limitations. We also compare and contrast our approach with previously known approaches and commercially available software solutions.
  • Erik Wilde, Character Repertoire Validation for XML Documents, Twelfth International World Wide Web Conference (WWW2003), Budapest, Hungary, May 2003. (available as abstract, PDF, and HTML)
    Abstract: XML documents may contain a large diversity of characters. The Character Repertoire Validation for XML (CRVX) language is a simple schema language for specifying character repertoire constraints. These constraints can be specific for syntax- and/or context-based parts of an XML document. The constraints are based on the character classes introduced by XML Schema's regular expressions.
  • Erik Wilde and Kilian Stillhard, Making XML Schema Easier to Read and Write, Twelfth International World Wide Web Conference (WWW2003), Budapest, Hungary, May 2003. (available as abstract, PDF, and HTML)
    Abstract: XML Schema is a rather complex schema language, partly because of its inherent complexity, and partly because of its XML syntax. In an effort to reduce the syntactic verboseness and complexity of XML Schema, we designed the XML Schema Compact Syntax (XSCS), a non-XML syntax for XML Schema. XSCS is designed for human users, and transformations from and to XML Schema XML syntax are implemented using Java-based tools.
  • Erik Wilde, Martin Waldburger and Beat Krähenmann, Conference Time-Table Management, Twelfth International World Wide Web Conference (WWW2003), Budapest, Hungary, May 2003. (available as abstract, PDF, and HTML)
    Abstract: Conference time-tables provide information that is indispensable for all attendees. Since there are a lot of reusable data structures and tasks, we have designed the Conference Time-Table Management (CTTM) system, which is intended to be used as a reusable component in a large diversity of conference Web sites. CTTM features a flexible concept for time-tables and provides users with personalization and notification services.
  • Erik Wilde, Linkbase Access Protocol Design, Eleventh International World Wide Web Conference (WWW2002), Honolulu, Hawaii, May 2002. (available as abstract and PDF)
    Abstract: XML itself does not support hypermedia, but the XLink standard has been defined to make XML usable for hypermedia. One of XLink's most interesting features is its support for external links and linkbases, which makes it possible to create links between resources without having to change the resources. In order to use these links, user agents must access linkbases and query them for relevant links, and we present our approach to create a protocol for linkbase access.
  • Marcel Dasen and Erik Wilde, Keeping Web Indices up-to-date, Tenth International World Wide Web Conference (WWW10), Hong Kong, May 2001. (available as abstract and PDF)
    Abstract: Search engines play a crucial role in the Web. Without search engines, large parts of the Web become inaccessible to the majority of users. Search engines can make new and smaller sites accessible at low cost. Without them, other media, such as television, would be needed to advertise the existence of a new site on the Web, and only large commercial sites can follow this path. The Web would be in danger of becoming dominated by a few well-known sites. A crucial problem for search engines is keeping their index up-to-date. Especially as the index grows, the effort needed to update it increases, since Web documents are dynamic and thus already stored data becomes obsolete. There have been various attempts to monitor the evolution of the Web. However, we believe that prior work overestimates the rate of change due to an inadequate change model. Our change model has been adapted from the information retrieval field to distinguish index-relevant changes from irrelevant modifications in Web documents, e.g. simple spelling corrections or dynamic advertisement links. We have monitored multiple smaller collections of documents over a period of six months to measure how the documents change.
  • Luca Previtali, Brenno Lurati and Erik Wilde, BibTeXML: An XML Representation of BibTeX, Tenth International World Wide Web Conference (WWW10), Hong Kong, May 2001. (available as abstract and PDF)
    Abstract: BibTeXML is an XML representation of BibTeX data. It can be used to represent bibliographic data in XML. The advantage of BibTeXML over BibTeX's native syntax is that it can be easily managed using standard XML tools (in particular, XSLT style sheets), while native BibTeX data can only be manipulated using specialized tools.
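The core idea behind BibTeXML, representing a parsed BibTeX entry as nested XML elements so that standard XML tools such as XSLT can process it, can be sketched in a few lines. The element and attribute names below are illustrative assumptions for this sketch, not the actual BibTeXML schema:

```python
# Illustrative sketch (element names are assumptions, not the BibTeXML
# schema): mapping one parsed BibTeX entry onto an XML element tree so
# that standard XML tools (e.g. XSLT) can process the bibliographic data.
import xml.etree.ElementTree as ET

def bibtex_to_xml(entry_type, key, fields):
    """Turn a parsed BibTeX entry into an XML element tree."""
    entry = ET.Element("entry", {"id": key})
    kind = ET.SubElement(entry, entry_type)      # e.g. <inproceedings>
    for name, value in fields.items():
        ET.SubElement(kind, name).text = value   # e.g. <title>...</title>
    return entry

entry = bibtex_to_xml(
    "inproceedings", "previtali01bibtexml",
    {"title": "BibTeXML: An XML Representation of BibTeX",
     "year": "2001"})
print(ET.tostring(entry, encoding="unicode"))
```

A real converter would additionally need a BibTeX parser and handling for cross-references and special characters; the point here is only how naturally the field/value structure maps onto markup.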

Reviewed Workshop Papers

  • Erik Wilde, Jack Hodges, Mareike Kritzler, Stefan Lüder, and Florian Michahelles, A Web of Wearables, Workshop on the Superorganism of Massive Collective Wearables at 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2014), Seattle, Washington, September 2014. (available as abstract and PDF)
    Abstract: Wearables are becoming the next Big Thing, and it is clear that they will become increasingly integrated into the Web of Things, instead of just being standalone resources that are not linked into the Web. Such a Web of Wearables will make wearables as easily accessible as other Web resources, allowing new classes of applications and systems to use them. This Web of Wearables will establish an ecosystem noticeably different from the current Web with more ties to the real world, more ties to personal information and data, and more ways to interact with the real world. It remains to be seen which applications and systems will emerge, but the designs of today will have an impact on what is possible tomorrow, so we should strive to make sure that the ecosystem we design is open, extensible, and flexible.
  • Rosa Alarcòn, Erik Wilde and Jesus Bellido, Hypermedia-driven RESTful Service Composition, 6th Workshop on Engineering Service-Oriented Applications (WESOA 2010), San Francisco, California, December 2010. (available as abstract and PDF)
    Abstract: Representational State Transfer (REST) services are gaining momentum as a lightweight approach for the provision of services on the Web. Unlike WSDL-based services, REST has a reduced, standardized set of operations with well-known semantics that change a resource's state. The few composition models that have been proposed for REST are mainly operation-centric and fail to acknowledge the hypermedia nature of REST, that is, that clients must inspect the served resource state and choose the link to follow from there. We explore RESTful service composition as it is driven by the hypermedia net that is dynamically created while a client interacts with a server, resulting in a lightweight approach. We base our proposal on a hypermedia-centric REST service description, the Resource Linking Language (ReLL), and on Petri Nets as a mechanism for describing the machine-client navigation.
  • Nick Doty and Erik Wilde, Geolocation Privacy and Application Platforms, 3rd ACM SIGSPATIAL International Workshop on Security and Privacy in GIS and LBS (SPRINGL) 2010, San Jose, California, November 2010. (available as abstract and PDF)
    Abstract: Security and privacy issues for Location-Based Services (LBS) and geolocation-capable applications often revolve around the idea of designing a User Interface (UI) which satisfies certain requirements so that users are informed about what the services or applications are doing, and have the ability to accept or decline. However, in a world where applications increasingly draw on a wide variety of LBS providers as the back-end, and where more and more applications are using small-screen or even screenless devices, UI-centered views of designing security and privacy are no longer sufficient. In this position paper, we describe the increasingly varied landscape of platforms with which users are faced today, and argue that the most important level to look at is the service level, so that security and privacy issues are described and negotiated in a machine-readable way, and can thus be adapted to new platforms and UIs more easily. While matters of UI and User Experience (UX) are important, we argue that they should be derived from a service-oriented view, instead of being designed and built for each platform individually.
  • Nick Doty and Erik Wilde, Simple Policy Negotiation for Location Disclosure, W3C Workshop on Privacy and Data Usage Control, Cambridge, Massachusetts, October 2010. (available as abstract and PDF)
    Abstract: Relying on non-enforceable normative language to persuade Web sites to make their privacy practices clear has proven unsuccessful, and where privacy policies are present, they are notoriously unclear and unread. Various machine-readable techniques have been proposed to address this problem, but many have suffered from practical difficulties. We propose a simple standard for transmitting policy information just-in-time and enabling simple negotiation between the site and the user agent. In particular, we detail how this could improve privacy of the W3C Geolocation API, but also suggest the possibility of extension to other application areas in need of privacy and policy negotiations.
  • Rosa Alarcón and Erik Wilde, Linking Data from RESTful Services, Third Workshop on Linked Data on the Web (LDOW2010), Raleigh, North Carolina, April 2010. (available as abstract, PDF, and paper presentation)
    Abstract: One of the main goals of the Semantic Web is to extend current human-readable Web resources with semantic information encoded in a machine-processable form. One of its most successful approaches is the Web of Data, which, by following the principles of Linked Data, has made available several data sources compliant with Semantic Web technologies, such as RDF triple stores and SPARQL endpoints. On the other hand, the set of architectural principles that underlies the human-readable Web has been conceptualized as the Representational State Transfer (REST) architectural style. In this paper, we distill REST concepts in order to provide a mechanism for describing REST (i.e. human-readable Web) resources and transform them into semantic resources. This strategy allows us to harvest already existing Web resources without requiring changes to the original sources or ad hoc interfaces. We illustrate our approach with an application, and expect that the presented approach may contribute to the availability of more data sources and become a further step towards lowering the entry barrier to publishing semantic resources.
  • Erik Wilde and Michael Hausenblas, RESTful SPARQL? You Name It! — Aligning SPARQL with REST and Resource Orientation, 4th Workshop on Emerging Web Services Technology (WEWST 2009), Eindhoven, Netherlands, November 2009. (available as abstract and PDF)
    Abstract: SPARQL is the standard query language for RDF, but currently is a read-only language defined in a way similar to SQL: Queries are formulated and submitted to a single processing facility, which then returns a result set. In this paper, we examine the shortcomings of this approach with regard to Web architecture, and propose a path towards a language that is more in line with basic principles of Web architecture. While this work has been done in the context of a proposed update extension for SPARQL, our focus is on how to apply the principles of Representational State Transfer (REST) to SPARQL. Our claim is that a RESTful redesign of SPARQL allows the Semantic Web to evolve in a more decentralized and openly accessible way than the current RPC-style design of SPARQL.
  • Erik Wilde, Site Metadata on the Web, Second Workshop on Human-Computer Interaction and Information Retrieval (HCIR 2008), Redmond, Washington, October 2008. (available as abstract and PDF)
    Abstract: The navigation structure of Web sites can be regarded as metadata that can be used for interesting applications in User Interface (UI) design and Human-Computer Interaction (HCI), as well as for Information Retrieval (IR) tasks. However, there currently is no established format for site metadata, which makes it hard for Web sites to publish their structure in a machine-readable way, which could then be used by HCI and/or IR applications. We propose a model and a format for site metadata that is built on top of an existing format and thus could be deployed with little overhead by publishers as well as consumers. Making site metadata available as machine-readable data can be used for improving user interfaces (informing user agents about the context of the page they are displaying) and better information retrieval (allowing search engines to use sitemap information for better ranking and display of the results).
  • Bernt Wahl and Erik Wilde, Mapping the World ... One Neighborhood at a Time, First International Workshop on Trends in Pervasive and Ubiquitous Geotechnology and Geoinformation, Park City, Utah, September 2008. (available as PDF)
  • Erik Wilde, Location Management for Mobile Devices, 3rd IEEE Workshop on Advanced Experimental Activities on Wireless Networks & Systems (EXPONWIRELESS 2008), Newport Beach, California, June 2008. (available as abstract and PDF)
    Abstract: Location-awareness, in the form of location information about clients and location-based services provided by servers, is becoming increasingly important for networked communications in general, and wireless and mobile devices in particular. The current fragmented landscape of location concepts and location-awareness, however, is not suitable for handling location information on a Web scale. Providing users with mechanisms which allow them to control how they want to expose their location information, and thus allow control over how to share location information with others and services, is a crucial step for better location management for mobile devices. This paper presents a concept for representing location vocabularies, matching and mapping them, how these vocabularies can be used to support better privacy for users of location-based services, and better location sharing between users and services. The concept is based on a language for describing place name vocabularies, which we call Place Markup Language (PlaceML), and on various ways how these vocabularies can be used in a location-aware infrastructure of networked devices.
  • Erik Wilde and Martin Kofahl, The Locative Web, First International Workshop on Location and the Web (LocWeb 2008), Beijing, China, April 2008. (available as abstract, PDF, and paper presentation)
    Abstract: The concept of location has become very popular in many applications on the Web, in particular for those which aim at connecting the real world with resources on the Web. However, the Web as it is today has no overall location concept, which means that applications have to introduce their own location concepts and have done so in incompatible ways. By turning the Web into a location-aware Web, which we call the Locative Web, location-oriented applications get better support for their location concepts on the Web, and the Web becomes an information system where location-related information can be more easily shared across different applications and application areas. We describe a location concept for the Web supporting different location types, its embedding into some of the Web's core technologies, and prototype implementations of these concepts in location-enabled Web components.
  • Erik Wilde, The Plain Web, Web Science Workshop (WSW2008) at WWW2008, Beijing, China, April 2008. (available as abstract, PDF, and paper presentation)
    Abstract: The Web has become a very popular starting point for many innovations targeting infrastructure, services, and applications. One of the challenges of today's vast Web landscape is to monitor ongoing developments, put them into context, and assess their chances of success. One of the main virtues of a more scientific approach towards the Web landscape would be a clear differentiation between approaches which build on top of the infrastructure of the Web, with little embedding into the landscape itself, and those that are intended to blend into the Web, becoming a part of the Web itself. One of the main challenges in this area is to understand and classify new developments, and a better understanding of various dimensions of Web technology design would make it easier to assess the chances of success of any given development. This paper presents a preliminary classification and argues how those factors influence the chances of success.
  • Erik Wilde, Metaschema Layering for XML, Workshop on XML Technologies for the Semantic Web (XSW 2004), Berlin, Germany, October 2004. (available as abstract, PDF, and paper presentation)
    Abstract: The Extensible Markup Language (XML) is based on the concept of schema languages, which are used for validation of XML documents. In most cases, the metamodeling view of XML-based applications is rather simple, with XML documents being instances of some schema, which in turn is based on some schema language. In this paper, a metaschema layering approach for XML is presented, which is demonstrated in the context of various application scenarios. This approach is based on two generalizations of the standard XML schema language usage scenario: (1) it is assumed that one or more schema languages are acceptable as foundations for an XML scenario, but these schema languages should be customized by restricting, extending, or combining them; (2) for applications requiring application-specific schema languages, these schema languages can be implemented by reusing existing schema languages, thus introducing an additional metaschema layer. Metaschema layering can be used in a variety of application areas, and this paper shows some possible applications and mentions some more possibilities. XML is increasingly entering the modeling domain, since it is gradually moving from being an exchange format for structured data into the applications as their inherent model. XML modeling is still in its infancy, and the metaschema layering approach presented in this paper is one contribution to leveraging the most important of XML's features, which is the reuse of existing concepts and implementations.
  • Erik Wilde, Pascal Freiburghaus, Daniel Koller and Bernhard Plattner, A Group and Session Management System for Distributed Multimedia Applications, Third COST 237 Workshop on Multimedia Telecommunications and Applications, Barcelona, Spain, November 1996. (available as abstract, PostScript, and PDF)
    Abstract: Distributed multimedia applications are very demanding with respect to the support they require from the underlying group communication platform. In this paper, an approach is described which aims at providing group communication platform designers with a component which can be used for powerful group and session management functionality. This component, which can be integrated into group communication platforms, is part of a system called the group and session management system (GMS). The GMS model consists of GMS user agents, which are the components to be integrated into group communication platforms, and GMS system agents, which are distributed directory agents providing the distributed database which the user agents access. Communication between these two types of agents is defined in two protocols, the GMS access protocol between user agents and system agents, and the GMS system protocol between system agents. GMS also defines a number of objects and relations which can be used to manage users, groups, and sessions on a very abstract level, thus providing both group communication platform designers and programmers of distributed multimedia applications with a high-level description of group communications. This approach enables a truly integrated approach for collaborative applications, where all applications, even when using different group communication platforms, can share the same database about users, groups, and sessions. The paper also contains a short description of the ongoing implementation of GMS's components.
  • Daniel Bauer, Erik Wilde and Bernhard Plattner, Design Considerations for a Multicast Communication Framework, Tenth Annual Workshop on Computer Communications (TCCC 95), Eastsound, Washington, September 1995. (available as abstract, PostScript, and PDF)
    Abstract: In recent years, networked multimedia multipoint applications have been developed in conjunction with emerging broadband networks. Experience has shown that existing transport systems support these applications only insufficiently, since they offer no assistance for real-time multimedia and multipoint applications. In this paper, we propose a Multicast Communication Framework (MCF) which satisfies the needs of multimedia multipoint applications. MCF covers both transportation and presentation of multimedia data. It guarantees quality of service (QoS) for the complete path between multimedia sources and multimedia sinks. Furthermore, it offers a high-level abstraction of multicast communication services that hides the details of the underlying endsystems and networks.
  • Erik Wilde, Multimedia Joint Editing Based on Reservations, 3rd Australian Multi-Media Communications, Applications and Technology Workshop (MCAT 93), Wollongong, Australia, July 1993. (available as abstract, PostScript, and PDF)
    Abstract: Joint editing as opposed to "normal" editing is an activity carried out by several people simultaneously. It raises the problem of coordinating write access to a document. The approach described in this paper uses an editing model of reserved regions and a client/server architecture. Any region of a document may be selected and reserved (provided that it is not reserved already) and may then be changed by the owner. Other users can only read it. The software basis of the editor is the Andrew Toolkit. This allows the use of arbitrary media types within the document.

Technical Reports

  • Erik Wilde, Florian Michahelles, and Stefan Lüder, Leveraging the Web Platform for the Web of Things: Position Paper for W3C's Web of Things Workshop, Berlin, Germany, June 2014. (available as abstract, PDF, and paper presentation)
    Abstract: Web Architecture provides a general-purpose way of establishing an interlinked network of resources, which are interacted with through the exchange of representations of their state. We argue that the "Web of Things" fits well into this general framework, and thus should be built firmly on the foundation provided by Web Architecture. We also argue that in order to allow an evolutionary path towards a "Web of Things", it is important to take small and incremental steps towards the final goal, instead of trying to establish a grand "Web of Things Architecture" in one monolithic step. One interesting first step could be to focus on Activity Streams as one way how streams of resource updates can be represented in a uniform, extensible, and machine-readable way.
  • Erik Wilde and Robert J. Glushko, Bridging the Gap between eBook Readers and Browsers: Position Paper for W3C's eBooks Workshop, New York, New York, February 2013. (available as abstract and PDF)
    Abstract: Using Web technologies as a platform has become a common approach for many IT scenarios. In this position paper, we describe how a structured analysis of current common eBook readers, and the capabilities of the evolving HTML5 platform, can help to identify areas where there are gaps between what a Web as a Platform (WaaP) eBook reader requires, and what HTML5 and its implementation in modern browsers currently deliver. We believe that eBooks and ePublishing of pre-packaged materials in general should be an important enough use case to influence some of the relevant HTML5 standards, and the current landscape of over 50 specs under development makes it non-trivial to match identified eBook-reader use cases against current WaaP capabilities. Our proposal is to work towards a functional description of eBook readers that makes it easy for eBook producers and the creators of eBook readers to decide whether a pure WaaP approach is currently feasible for them or not. Such a functional breakdown can also serve as a guide for identifying the most important areas where HTML5 needs to add or change functionality to become a better eBook implementation platform.
  • Erik Wilde and Yiming Liu, Feed Subscription Management, UCB ISchool Report 2011-042, School of Information, UC Berkeley, May 2011. (available as abstract and PDF)
    Abstract: An increasing number of data sources and services are made available on the Web, and in many cases these information sources are, or easily could be, made available as feeds. However, the more data sources and services are exposed through feed-based services, the more it becomes necessary to manage and share those services, so that users and uses of those services can build on the foundation of an open and decentralized architecture. In this paper we present the Feed Subscription Management (FSM) architecture, a model for managing feed subscriptions that supports structured feed subscriptions. Based on FSM, it is easy to build services that manage feed-based services, so that those services can easily create, change, and delete feed subscriptions, and so that feed subscriptions can easily be shared across users and/or devices. Our main reason for focusing on feeds is that we see feeds as a good foundation for an ecosystem of RESTful services, and thus our architectural approach revolves around the idea of modeling services as interactions with feeds.
  • Rosa Alarcón and Erik Wilde, From RESTful Services to RDF: Connecting the Web and the Semantic Web, UCB ISchool Report 2010-041, School of Information, UC Berkeley, June 2010. (available as abstract and PDF)
    Abstract: RESTful services on the Web expose information through retrievable resource representations, which are self-describing descriptions of resources, and through the way these resources are interlinked via the hyperlinks found in those representations. This basic design of RESTful services means that for extracting the most useful information from a service, it is necessary to understand a service's representations, which means both the semantics in terms of describing a resource, and also the semantics in terms of describing its linkage with other resources. Based on the Resource Linking Language (ReLL), this paper describes a framework for how RESTful services can be described, and how these descriptions can then be used to harvest information from these services. Building on this framework, a layered model of RESTful service semantics allows a service's information to be represented in RDF/OWL. Because REST is based on the linkage between resources, the same model can be used for aggregating and interlinking multiple services for extracting RDF data from sets of RESTful services.
  • Raymond Yee, Eric C. Kansa and Erik Wilde, Improving Federal Spending Transparency: Lessons Drawn from Recovery.gov, UCB ISchool Report 2010-040, School of Information, UC Berkeley, May 2010. (available as abstract and PDF)
    Abstract: Information about federal spending can affect national priorities and government processes, having impacts on society that few other data sources can rival. However, building effective open government and transparency mechanisms holds a host of technical, conceptual, and organizational challenges. To help guide development and deployment of future federal spending transparency systems, this paper explores the effectiveness of accountability measures deployed for the American Recovery and Reinvestment Act of 2009 (Recovery Act or ARRA). The Recovery Act provides an excellent case study to better understand the general requirements for designing and deploying Open Government systems. In this document, we show specific examples of how problems in data quality, service design, and systems architecture limit the effectiveness of ARRA's promised transparency. We also highlight organizational and incentive issues that impede transparency, and point to design processes as well as general architectural principles needed to better realize the goals advanced by open government advocates.
  • Nick Doty, Deirdre Mulligan, and Erik Wilde, Privacy Issues of the W3C Geolocation API, UCB ISchool Report 2010-038, School of Information, UC Berkeley, February 2010. (available as abstract and PDF)
    Abstract: The W3C's Geolocation API may rapidly standardize the transmission of location information on the Web, but, in dealing with such sensitive information, it also raises serious privacy concerns. We analyze the manner and extent to which the current W3C Geolocation API provides mechanisms to support privacy. We propose a privacy framework for the consideration of location information and use it to evaluate the W3C Geolocation API, both the specification and its use in the wild, and recommend some modifications to the API as a result of our analysis.
  • Dominique Guinard, Vlad Trifa and Erik Wilde, Architecting a Mashable Open World Wide Web of Things, Technical Report 663, Institute for Pervasive Computing, ETH Zürich, February 2010. (available as abstract and PDF)
    Abstract: Many efforts are currently directed towards networking smart things from the physical world (e.g. RFID, wireless sensor and actuator networks, embedded devices) on a larger scale. Rather than exposing real-world data and functionality through proprietary and tightly coupled systems, we propose to make them an integral part of the Web. As a result, smart things become easier to build upon. Popular Web languages (e.g. HTML, URI, JavaScript, PHP) can be used to build applications involving smart things, and users can leverage well-known Web mechanisms (e.g. browsing, searching, bookmarking, caching, linking) to interact with and share things. In this paper, we begin by describing a Web of Things architecture and best practices rooted in the RESTful principles that contributed to the popular success, scalability, and evolvability of the traditional Web. We then discuss several prototypes implemented using these principles to connect environmental sensor nodes, energy monitoring systems, and RFID-tagged objects to the World Wide Web. We finally show how Web-enabled things can be used in lightweight ad-hoc applications called physical mashups.
  • Erik Wilde, Eric C. Kansa and Raymond Yee, Web Services for Recovery.gov, UCB ISchool Report 2009-035, School of Information, UC Berkeley, October 2009. (available as abstract and PDF)
    Abstract: One of the main goals of the Recovery.gov Web site is to provide information about how the money for the American Recovery and Reinvestment Act (ARRA) of 2009 is allocated and spent. In this report, we propose a reporting architecture that focuses on the reporting services rather than the Web site and page design, and then uses these Web services to build the user-facing part of ARRA reporting. Our proposed architecture is based on simple and well-established Web technologies, and its main goal is to provide citizens and watchdog groups simple and easy access to machine-readable data. Our architecture uses a more sophisticated approach than simple downloads of data files: it is based on the principles of Representational State Transfer (REST) and uses established and widely supported Web technologies such as feeds and XML. We argue that such an architecture is easy to design and implement, easy to understand for users, and easy to work with for those who want to access ARRA reporting data in a machine-readable way.
  • Jürgen Umbrich, Michael Hausenblas, Phil Archer, Eran Hammer-Lahav and Erik Wilde, Discovering Resources on the Web, DERI Technical Report 2009-08-04, DERI Galway, August 2009. (available as abstract and PDF)
    Abstract: Discovering information on the Web in a scalable and reliable way is an important but often underestimated task. Research on discovery itself is quite a young field; hence, to date not many Web-compliant discovery mechanisms exist. Firstly, we introduce a layered Abstract Discovery Model and discuss its features. Then, driven by use cases and requirements, we review three promising discovery proposals in the context of the Web of Data and the Web of Documents: XRD, POWDER, and voiD.
  • Erik Wilde, Feeds as Query Result Serializations, UCB ISchool Report 2009-030, School of Information, UC Berkeley, April 2009. (available as abstract and PDF)
    Abstract: Many Web-based data sources and services are available as feeds, a model that provides consumers with a loosely coupled way of interacting with providers. The current feed model is limited in its capabilities, however. Though it is simple to implement and scales well, it cannot be transferred to a wider range of application scenarios. This paper conceptualizes feeds as a way to serialize query results, describes the current hardcoded query semantics of such a perspective, and surveys the ways in which extensions of this hardcoded model have been proposed or implemented. Our generalized view of feeds as query result serializations has implications for the applicability of feeds as a generic Web service for any collection that is providing access to individual information items. As one interesting and compelling class of applications, we describe a simple way in which a query-based approach to feeds can be used to support location-based services.
  • Erik Wilde, Eric C. Kansa and Raymond Yee, Proposed Guideline Clarifications for American Recovery and Reinvestment Act of 2009, UCB ISchool Report 2009-029, School of Information, UC Berkeley, March 2009. (available as abstract and PDF)
    Abstract: The Initial Implementing Guidance for the American Recovery and Reinvestment Act of 2009 provides guidance for a feed-based information dissemination architecture. In this report, we suggest some improvements and refinements of the initial guidelines, in the hope of paving the path for a more transparent and useful feed-based architecture. This report is meant as a preliminary guide to how the current guidelines could be made more specific and provide better guidance for providers and consumers of recovery act spending information. It is by no means intended as a complete or final set of recommendations.
  • Erik Wilde and Anuradha Roy, Web Site Metadata, UCB ISchool Report 2009-028, School of Information, UC Berkeley, February 2009. (available as abstract and PDF)
    Abstract: The currently established formats for how a Web site can publish metadata about a site's pages, the robots.txt file and sitemaps, focus on how to provide information to crawlers about where to not go and where to go on a site. This is sufficient as input for crawlers, but does not allow Web sites to publish richer metadata about their site's structure, such as the navigational structure. This paper looks at the availability of Web site metadata on today's Web in terms of available information resources and quantitative aspects of their contents. Such an analysis of the available Web site metadata not only makes it easier to understand what data is available today; it also serves as the foundation for investigating what kind of information retrieval processes could be driven by that data, and what additional data could be provided by Web sites if they had richer data formats to publish metadata.
  • Erik Wilde, Open Location-Oriented Services for the Web, UCB ISchool Report 2008-026, School of Information, UC Berkeley, August 2008. (available as abstract and PDF)
    Abstract: Location concepts are still not part of today's Web architecture, which means that applications must rely on higher-level specifications to use and provide location-oriented services. This problem can be approached in two different ways: the first is a tightly coupled approach for scenarios targeting an integrated system architecture, and the second is a loosely coupled approach centered around cooperating services in the open world of the Web. This paper argues that the current specifications for location-oriented services cater mainly to the tightly coupled approach, whereas the loosely coupled approach is not yet addressed by available specifications. A more lightweight and loosely coupled approach to location-oriented services is the central issue for making the valuable data in geographic information systems better available on the Web. Only if location-oriented services can be used easily and cooperatively can today's rapidly evolving infrastructure of wireless data services and mobile devices take full advantage of these services.
  • Erik Wilde and Igor Pesenson, Feed Feeds: Managing Feeds Using Feeds, UCB ISchool Report 2008-025, School of Information, UC Berkeley, May 2008. (available as abstract and PDF)
    Abstract: Feeds have become an important information channel on the Web, but the management of feed metadata so far has received little attention. It is hard for feed publishers to manage and publish their feed information in a unified format, and for feed consumers to manage and use their feed subscription data across various feed readers, and to share it with other users. We present a system for managing feed metadata using feeds, which we call feed feeds. Because these feeds are Atom feeds, the widely deployed Atom and AtomPub standards can be used to manage feed metadata, making feed management available through an established API.
  • Erik Wilde, Location Management for Mobile Devices, UCB ISchool Report 2008-016, School of Information, UC Berkeley, February 2008. (available as abstract and PDF)
    Abstract: Location-awareness, in the form of location information about clients and location-based services provided by servers, is becoming increasingly important for networked communications in general, and wireless and mobile devices in particular. The current fragmented landscape of location concepts and location-awareness, however, is not suitable for handling location information on a Web scale. Providing users with mechanisms which allow them to control how they want to expose their location information, and thus allow control over how to share location information with others and services, is a crucial step for better location management for mobile devices. This paper presents a concept for representing location vocabularies, matching and mapping them, how these vocabularies can be used to support better privacy for users of location-based services, and better location sharing between users and services. The concept is based on a language for describing place name vocabularies, which we call Place Markup Language (PlaceML), and on various ways how these vocabularies can be used in a location-aware infrastructure of networked devices.
  • Erik Wilde, Putting Things to REST, UCB ISchool Report 2007-015, School of Information, UC Berkeley, November 2007. (available as abstract and PDF)
    Abstract: Integrating resources into the Web is an important aspect of making them accessible as part of this global information system. The integration of physical things into the Web so far has not been done on a large scale, which makes it harder to realize network effects that could emerge by the combination of today's Web content, and the integration of physical things into the Web. This paper presents a path towards a Web where physical objects are made available through RESTful principles. By using this architectural style for pervasive and ubiquitous computing scenarios, they will scale better, integrate better with other applications, and pave the path towards a "Web of Things" that seamlessly integrates conceptual and physical resources.
  • Ryan Shaw and Erik Wilde, Web-Style Multimedia Annotations, UCB ISchool Report 2007-014, School of Information, UC Berkeley, August 2007. (available as abstract and PDF)
    Abstract: Annotation of multimedia resources supports a wide range of applications, ranging from associating metadata with multimedia resources or parts of these resources, to the collaborative use of multimedia resources through distributed authoring and annotation. Most annotation frameworks, however, are based on a closed approach, where the annotation data is confined to the annotation framework and cannot readily be reused in other application scenarios. We present a declarative approach to multimedia annotations, which represents the annotations in an XML format independent of the multimedia resources. Using this declarative approach, multimedia annotations can be used in an easier and more flexible way, enabling application scenarios such as third-party annotations and annotation aggregation and filtering.
  • Erik Wilde, Hilfskomponenten zur Konstruktion von XML Schemas, Technical Report eCH-0050, eCH, 2007. (available as abstract)
    Abstract: This document defines auxiliary components that can be used when defining XML Schemas. These auxiliary components can be employed to map recurring aspects of data models onto existing, shared XML Schema definitions. This way, no new XML Schema components need to be defined for these aspects of a data model, and the use of reusable components makes it easier to understand an XML Schema in which these auxiliary components are used.
  • Erik Wilde, Dokumentation für den XML-orientierten Datenaustausch, Technical Report eCH-0036, eCH, March 2007. (available as abstract and PDF)
    Abstract: This document describes the documentation that must be produced for XML Schemas so that the foundations needed to implement an interface for XML-based data exchange are available. The starting point is a data model of the part of reality about which information is to be exchanged. From this, data models for data exchange (exchange models) are derived for the transactions of interest. An exchange model in turn serves as the basis for one or several schemas (exchange schemas). Only if the models are well-defined and documented, and if the relationships between the models (reference and exchange models) and the schemas (exchange model and schemas) are well-defined and documented, can independent implementers realize the interface correctly.
  • Erik Wilde and Felix Michel, SPath: A Path Language for XML Schema, UCB ISchool Report 2007-001, School of Information, UC Berkeley, February 2007. (available as abstract and PDF)
    Abstract: While the information contained in XML documents can be accessed using numerous standards and technologies, accessing the information in an XML Schema currently is only possible using proprietary technologies. XML is increasingly being used as a typed data format, and therefore it becomes more important to gain access to the type system of an XML document class, which in many cases is an XML Schema. The XML Schema Path Language (SPath) presented in this paper provides access to XML Schema components by extending the well-known XPath language to also include the domain of XML Schemas. Using SPath, XML developers gain better access to XML Schemas and thus can more easily develop software which is type- or schema-aware, and thus more robust.
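Without such a language, schema components can only be reached through XML Schema's XML transfer syntax, which is the situation the abstract describes. A minimal sketch of what that low-level access looks like, using a made-up two-element schema (SPath's own syntax is not shown, since the abstract does not define it):

```python
# Querying an XML Schema through its XML transfer syntax with ElementTree.
# This navigates the syntax tree, not the schema component model -- exactly
# the gap a schema path language like SPath is meant to close.
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"
schema = ET.fromstring(f"""
<xs:schema xmlns:xs="{XS}">
  <xs:element name="book" type="xs:string"/>
  <xs:element name="author" type="xs:string"/>
</xs:schema>
""")

# List the globally declared elements by matching xs:element children.
global_elements = [e.get("name") for e in schema.findall(f"{{{XS}}}element")]
print(global_elements)  # ['book', 'author']
```

This style of access breaks down quickly for derived types, groups, or imported schemas, because those relationships exist between schema components rather than between syntax-tree nodes.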
  • Erik Wilde, Design von XML Schemas, Technical Report eCH-0035, eCH, 2007. (available as abstract and PDF)
    Abstract: This document covers the inner structure of XML Schemas, i.e. the composition and interrelation of the so-called "XML Schema components". This is particularly important when a schema is to be reused, e.g. when parts of it are to be reused in a new context, or when a new version of the schema is to be defined. In both cases, the inner structure of the schema strongly influences how easily this task can be accomplished.
  • Felix Michel and Erik Wilde, XML Schema Editors — A Comparison of Real-World XML Schema Visualizations, TIK Report 265, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, December 2006. (available as abstract and PDF)
    Abstract: XML Schema is a complex language for defining schemas for classes of XML documents, and its inherent complexity as well as its XML transfer syntax make XML Schemas hard for humans to read. This is a problem, because in many cases XML Schemas are not only used for validation purposes, but also as the data model for classes of XML documents, which must be understood by developers working with these documents. This report looks at various visualizations of XML Schemas in existing software tools and compares how well they are suited to represent the data model behind the XML Schema syntax. As of today, no available tool provides a level of abstraction that would be appropriate for a data model perspective; most of them visualize the syntax rather than the model. The tools included in this report are the Eclipse XML editors, XML Spy, oXygen, Turbo XML, and Stylus Studio. This report is not a complete evaluation of these tools; it only looks at their schema visualization and their support of a less syntax-centered view of XML Schema.
  • Erik Wilde, XML Namespace Beschreibungen für eCH Schemas, Technical Report eCH-0033, eCH, November 2006. (available as abstract and PDF)
    Abstract: This document describes the form in which XML Namespaces should be described, so that users of XML-based vocabularies within eCH have a single, simple source through which they can find documentation on an XML Namespace. Starting from a simple XML Schema, descriptions of an XML Namespace can thus easily be generated for the definition of an XML vocabulary, from which both human-readable and machine-readable information can be obtained.
  • Erik Wilde, Model Mapping in XML-Oriented Environments, TIK Report 257, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, July 2006. (available as abstract and PDF)
    Abstract: XML and Service-Oriented Architectures (SOA) make it easier to develop loosely coupled systems, but they do not eliminate the fundamental requirement of any communication: an underlying shared model. Because of the popularity of SOA, it becomes increasingly important to be able to quickly and efficiently integrate information systems, rather than relying on an expensive top-down process. The XML landscape evolved bottom-up, and so far it has not reached a stage where XML is explicitly targeted in conceptual models. Filling this gap with guidelines and best practices thus is the most pragmatic approach. The approach presented in this paper is a structured view, accompanied by guidelines for how interoperability can be achieved on the model level today.
  • Erik Wilde, Knowledge Organization Mashups, TIK Report 245, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, March 2006. (available as abstract and PDF)
    Abstract: Information today is often distributed among many different systems within a complex IT environment. Using this information for creating knowledge organization systems and services thus involves using this distributed information and re-purposing it within new applications. The current trend in Web technologies is to build systems not in a monolithic fashion, but rather as building blocks within a constantly evolving and unplanned landscape of information processing agents. This approach can be used as a foundation for building Knowledge Organization Mashups. We investigate the possibilities and challenges of this type of application, and as a case study describe a service for managing bibliographic metadata.
  • Arijit Sengupta and Erik Wilde, The Case for Conceptual Modeling for XML, TIK Report 244, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 2006. (available as abstract and PDF)
    Abstract: Because of its success, XML is increasingly used in many different application areas, and is moving towards the center of applications, evolving from an exchange format to the native data format of application components. These developments suggest that similar to other core areas of application design, XML should be designed conceptually before the implementation tasks of designing markup and writing schemas are approached. In this paper, we describe why conceptual modeling will become an important part of the XML landscape, what issues need to be addressed, and what the requirements for a conceptual modeling language for XML are.
  • Erik Wilde, XML-Centric Application Development, TIK Report 242, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 2006. (available as abstract and PDF)
    Abstract: XML has become an important standard for exchanging structured data between applications, but XML increasingly penetrates applications and is thus also becoming an important part of application development. The current state of XML specifications and technologies provides support in many aspects of application development, while other aspects are still only poorly supported. As an example, we describe the development of an XML-centric application and identify the areas where today's support for application developers could and should be improved. This case study thus can help developers to focus on the problem areas of today's support for XML-centric application development, and may also serve as an agenda for areas where more research and tools are required to improve the development of XML-centric applications.
  • Erik Wilde, Hanspeter Salvisberg and Alexander Pina, XML Best Practices, Technical Report eCH-0018, eCH, August 2005. (available as abstract and PDF)
    Abstract: This document describes rules that must be observed when using XML and XML Schemas in eCH standards. The emphasis is placed on basic mechanisms and fundamental considerations that users of XML Schemas typically face.
  • Erik Wilde, Shared Bibliographies as Hypertext, TIK Report 224, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, May 2005. (available as abstract and PDF)
    Abstract: The creation, management and dissemination of bibliographic information is a common task for almost all people working in a research environment, and it also is an (often weak) form of knowledge management. Current tools and methods are either centered on the process of document preparation using bibliographic references, or on the aspect of creating annotations and/or relationships describing bibliographic resources. As a result, bibliography management in many cases is still carried out with fairly simple tools and methods, and with little or no support for sharing the information. In the ShaRef project, the areas of document preparation, knowledge management, and information sharing among workgroup members are treated as equally important. As a result, ShaRef enables users to create, manage, and disseminate bibliographic information in a wide variety of use cases.
  • Erik Wilde, Sai Anand and Petra Zimmermann, ShaRef: XML-Centric Software Design, TIK Report 213, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 2005. (available as abstract and PDF)
    Abstract: In this paper, we describe a real-life application which has been designed around an XML data model and various XML technologies. We describe the rationale behind this design, and the benefits from the information system design point of view. We believe that XML-centric design has a lot of benefits, and that future developments in the area of XML technologies will better support this design style and help to make it even more advantageous. XML-centric design allows developers to leverage an ever-increasing number of XML-based technologies. We describe some of the XML technologies that helped us implement some of the core parts of the software, and point out some others that we do not yet use, but are actively investigating for possible future developments.
  • Erik Wilde and Willy Müller, Organizing Federal E-Government Schemas, TIK Report 212, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 2005. (available as abstract and PDF)
    Abstract: In this paper we present an approach to organize e-government schemas in Switzerland. On the political side, Switzerland is a challenging environment for any federation-wide harmonization and cooperation, because many authorities are organized independently. On the technical side, we describe an approach which aims at increasing the federation-wide cooperation through providing interested parties with a low barrier-to-entry, and with clearly visible benefits through the continuous evolution of a directory of e-government schemas. This paper describes a light-weight Semantic Web approach, enabling schema authors to create namespace descriptions that provide a minimal semantic description of the namespace's subject. Using these namespace descriptions, RDF data is extracted and serves as source for a highly interlinked directory of e-government schemas in Switzerland.
  • Erik Wilde, Usage and Management of Collections of References, TIK Report 194, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, June 2004. (available as abstract and PDF)
    Abstract: Collections of references are an important part of scientific and scholarly work. For many people, collections of references are the most advanced form of formal knowledge representation they are using. However, today's standards and tools for collections of references are rather limited, providing closed environments with few or no extension mechanisms. In this paper, we describe our goal to design and implement an advanced system for collections of references. The primary goal of this system is to provide users with a tool that matches their requirements of semantic richness vs. usability, which are competing goals. As a first step towards this goal, we designed and conducted a survey among the employees of a large university, trying to find out how people are managing their references today, and what their expectations are if a new tool became available. The results of the survey are presented, followed by conclusions that are the guiding principles for the upcoming ShaRef project. The goal of this project is to design and implement a system for reference management that runs Web-based as well as stand-alone, is easy to use, supports collaborative collections of references and collection sharing, has an open and extensible data model, covers the majority of user requirements according to the 80/20 principle, and thus provides scientists and scholars with a better way to manage their collections of references.
  • Erik Wilde and Andreas Steiner, Networking Metaphors for E-Commerce, TIK Report 190, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 2004. (available as abstract and PDF)
    Abstract: E-commerce technologies have reached a level of maturity where many businesses are no longer hampered by technological limitations. However, the adoption of e-commerce technologies is slower than anticipated. We argue that one of the limitations is a psychological barrier, which is created by the perception that e-commerce technologies are a whole new set of technologies which are completely different from computer networking. By applying metaphors from basic networking technologies (such as bridges and routers), we try to (1) demonstrate that e-commerce technologies are — in many ways — comparable to computer networking, and (2) show that convincing businesses to adopt e-commerce technologies could be made easier by showing them that e-commerce is basically computer networking taken to another level. We also believe that using these metaphors will make it easier to talk about e-commerce technologies, to reuse existing knowledge about networking architectures on this new level, and to identify the areas where additional work needs to be done.
  • Erik Wilde, Position Paper for the W3C Workshop on Binary Interchange of XML Information Item Sets, Santa Clara, California, September 2003. (available as PDF)
  • Erik Wilde, Character Repertoire Validation for XML (CRVX) Version 1.0, TIK Report 172, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, June 2003.
  • Kilian Stillhard and Erik Wilde, XML Schema Compact Syntax (XSCS) Version 1.0, TIK Report 166, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, March 2003. (available as abstract and PDF)
    Abstract: XML Schema is a schema language for XML, providing advanced features for creating types, deriving types, and a library of built-in simple datatypes. The model behind XML Schema is that of XML Schema components, and XML Schema uses an XML syntax for representing these components. In this report, we present an alternative syntax for XML Schema, which is defined using EBNF productions. Since the new syntax has been designed with the design goals of readability and compactness, it is called XML Schema Compact Syntax (XSCS). XSCS has been created for making XML Schema easier to read and write by humans, while XML Schema's XML syntax is better suited for automated processing of XML Schemas. Consequently, XSCS is not meant as a replacement for the XML syntax, but as a complementary syntax.
  • Erik Wilde, The Extensible XML Information Set, TIK Report 160, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 2003. (available as abstract and PDF)
    Abstract: XML and its data model, the XML Information Set, are used for a large number of applications. These applications have widely varying data models, ranging from very simple regular trees to irregularly structured graphs using many different types of nodes and edges. While some applications are sufficiently supported by the data model provided by the XML Infoset itself, others could benefit from extensions of the data model and assistance for these extensions in supporting XML technologies (such as the DOM API or the XSLT programming language). In this paper, we describe the Extensible XML Information Set (EXIS), which is a reformulation of the XML Infoset targeted at making the Infoset easier to extend and to make these extensions usable in higher-level XML technologies. EXIS provides a framework for defining extensions to the core XML Infoset, and for identifying these extensions (using namespace names). Higher-level XML technologies (such as DOM or XPath) can then support EXIS extensions through additional interfaces, such as a dedicated DOM module, or XPath extension mechanisms (extension axes and/or functions). In order to make EXIS work, additional efforts are required in these areas of higher-level XML technologies, but EXIS itself could be used rather quickly to provide a foundation for well-defined Infoset extensions, such as XML Schema's PSVI contributions, or the reformulation of XLink as being based on a data model (rather than a syntax).
  • Erik Wilde, A Proposal for an XLink Data Model, TIK Report 148, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, August 2002. (available as abstract and PDF)
    Abstract: This report describes a proposal for a data model for XLink. It defines the data model as contributions of XLink to the XML Infoset. The data model is meant as a clarification of the link model implicitly defined by XLink. It is also meant as the foundation for future work on XLink, for example a DOM module for XLink support, a CSS module for styling XLinks, or a protocol for accessing XLink linkbases.
  • Erik Wilde, Protocol Considerations for Web Linkbase Access, TIK Report 143, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, July 2002. (available as abstract and PDF)
    Abstract: We propose the Open Web, which aims at transforming the Web into an Open Hypermedia System. Based on the Extensible Linking Language (XLink), we investigate the possibilities for implementing linkbase access methods. Linkbases are collections of so-called third-party links, which are links that live outside the resources that they are linking, and thus must be found and retrieved somehow when presenting the resources that they are linking. We focus on the protocol issues of accessing linkbases, concentrating on how such a new protocol could and should be designed. In addition to our design goal of specifying a protocol for accessing the linkbase Web service, we believe that our protocol considerations can serve as a blueprint for other areas where Web access to services is required.
  • Erik Wilde, Adobe Advanced Annotations (A³), White Paper, May 2002. (available as abstract and PDF)
    Abstract: PDF in its current form has rather weak support for annotations. This paper describes a usage scenario and how advanced annotations support could make using PDF (and possibly other Adobe applications) more productive. Starting from these observations, different problems are described which could be solved based on different evolutionary steps of the annotation architecture, which has been dubbed "Adobe Advanced Annotations (A³)". Following this scenario, some design approaches and a number of implementation issues are discussed.
  • Erik Wilde and Christian Stillhard, Openly Accessing Linkbases, TIK Report 134, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, January 2002. (available as abstract and PDF)
    Abstract: In this paper, we investigate the requirements for linkbase access on the Web. The recent advancements of Web technologies (XML, XLink, and XPointer) have brought us one step closer to the vision of using the Web as an Open Hypermedia System (OHS). However, some of the pieces to make this work are still missing, and this paper discusses what they are and the status of the current work in these areas. Concentrating on one of these pieces, the access to linkbases, the paper then continues by describing the prerequisites and requirements of such an access mechanism, and closes with a list of requirements and design principles that we will use in a next step to specify and implement a linkbase access protocol.
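The third-party links discussed in this abstract can be illustrated with a small sketch: a standalone linkbase document using XLink 1.0 markup. The xlink: attribute names are from the W3C XLink Recommendation; the surrounding element names and URLs are made up for illustration.

```python
# A minimal linkbase: one XLink extended link with two locators and one arc,
# stored entirely outside the two resources it connects. Element names
# (linkbase, link, resource, go) are hypothetical; the xlink: attributes
# follow XLink 1.0.
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"
linkbase = ET.fromstring(f"""
<linkbase xmlns:xlink="{XLINK}">
  <link xlink:type="extended">
    <resource xlink:type="locator" xlink:label="src"
              xlink:href="http://example.com/paper.html"/>
    <resource xlink:type="locator" xlink:label="dst"
              xlink:href="http://example.com/review.html"/>
    <go xlink:type="arc" xlink:from="src" xlink:to="dst"/>
  </link>
</linkbase>
""")

# Collect the (from, to) label pairs a linkbase access service would
# have to resolve when rendering the linked resources.
arcs = [(a.get(f"{{{XLINK}}}from"), a.get(f"{{{XLINK}}}to"))
        for a in linkbase.iter()
        if a.get(f"{{{XLINK}}}type") == "arc"]
print(arcs)  # [('src', 'dst')]
```

Because neither paper.html nor review.html contains the link, a browser presenting either resource must query the linkbase at view time, which is precisely the access protocol problem the paper takes up.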
  • Glenn Oberholzer and Erik Wilde, Extended Link Visualization with DHTML: The Web as an Open Hypermedia System, TIK Report 125, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, January 2002. (available as abstract and PDF)
    Abstract: The World Wide Web is by far the most successful hypermedia system, its name often being used synonymously for the Internet. However, it is based on a rather restricted hypermedia model with limited linking functionality. Even though underlying systems may provide a richer data model, there is still the question of how to present this information in a Web-based interface in an easily understandable way. Assuming an underlying system similar to Topic Maps, which allows storing, managing, and categorizing metadata and links, we propose a presentation of extended links. We try to provide a usable way for users to handle the additional functionality. The mechanism is based on already available technologies like DHTML. It is one facet of our approach to make the Web more interconnected and to work towards a more richly and openly linked Web.
  • Erik Wilde, Picture Metadata and its Associations: Using Web Technologies for Representing Semantics, TIK Report 124, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, January 2002. (available as abstract and PDF)
    Abstract: Web technologies today go far beyond simply enabling the creation of Web pages. XML and metadata formats based on it make it possible to manage metadata in a powerful and flexible way. In this paper, we describe the concept and the prototype of an application for the management of metadata for a specific domain, metadata associated with pictures. The goal of the paper is to highlight the benefits which result from employing Web technologies instead of proprietary data formats. While we think that both application developers as well as users could benefit from such an approach, we are aware that in many real-world cases other issues (such as the ability to bind users to a certain product) also play an important role. Nevertheless, in this paper we show that open and well-documented technologies not only can make software development easier, but also open up synergies between standards-compliant products. While the prototype we present in this paper is not sophisticated enough to be released to the general public, we hope that software vendors will consider incorporating some of the concepts introduced in this paper.
  • Erik Wilde and Manfred Meyer, Routed Message Driven Beans: A new Abstraction for using EJBs, TIK Report 102, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, December 2001. (available as abstract and PDF)
    Abstract: Asynchronous messaging between cooperating software components proves to be useful in many scenarios. One framework supporting this functionality is Sun's J2EE platform with its Message-Driven Beans (MDB) model. We present a novel way to use MDBs: routing information is added to messages and then used to send each message through a given path of processing components. We call this model Routed Message-Driven Beans (RMDB), and the two main topics that are important for RMDBs are (1) the message format that is used for the routing information, and (2) the API which can be used by programmers to take advantage of the abstraction provided by RMDBs. Performance measurements show that the overhead caused by our RMDB framework is acceptable if messages are routed through several EJBs.
  • Erik Wilde, Specification of GMS System Protocol (GSP) Version 1.0, TIK Report 19, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, September 1996. (available as abstract, PostScript, and PDF)
    Abstract: Group communications require special support for name and address management, QoS support, and connection establishment. The group and session management system (GMS) is a distributed directory system which is specifically designed to support group communication infrastructures. This report briefly introduces the concepts of GMS, the data types available, and then gives the specification of GSP, the GMS system protocol. This protocol is used for communication between GSAs, ie it is used for GMS's internal communication. GSP is specified by state diagrams, time sequence diagrams, textual descriptions, the PDU syntax in ASN.1, and the PDU semantics in comments given for each PDU.
  • Erik Wilde, Specification of GMS Access Protocol (GAP) Version 1.0, TIK Report 15, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, March 1996. (available as abstract, PostScript, and PDF)
    Abstract: Group communications require special support for name and address management, QoS support, and connection establishment. The group and session management system (GMS) is a distributed directory system which is specifically designed to support group communication infrastructures. This report briefly introduces the concepts of GMS, the data types available, and then gives a specification of GAP, the GMS access protocol. GAP is specified by state diagrams describing the behavior of two communication entities, the PDU syntax in ASN.1, and the PDU semantics in comments given for each PDU.
  • Erik Wilde and Christoph Burkhardt, Modelling Groups for Group Communications, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, May 1994. (available as abstract, PostScript, and PDF)
    Abstract: This paper presents a general model of a Group Management Service (GMS) which is designed to support collaborative interactions among groups of distributed users using different applications. There are two main benefits of such a service. Firstly, it would be easier to implement new collaborative applications because of the possibility to use an existing service. Secondly, it would be possible for different applications to share collaboration relevant information because of a common database of information about users and groups maintained by the GMS. One important property of the GMS is its flexibility with respect to the information stored. It is possible to store application-independent as well as application-dependent information. Using an object-oriented approach, applications can share the application-independent information (such as a group's members and administrative information) and can also use the GMS to store application-dependent information which can only be interpreted by a closed set of applications (those who know the syntax and semantics of the application-dependent information). The model of the GMS is very simple and consists mainly of two classes of objects, namely user and group. A small set of operations is provided for querying and modifying GMS information. The possibility to store application-dependent information is realized by allowing applications to create derived classes (ie subclasses) of the classes user and group. Thus it is possible for applications using the GMS to implement their own user and group classes without losing the ability to manage these objects with the GMS. Two applications are presented which may use the GMS to manage their users and groups. Both applications use application-specific derived classes of user and group. However, it is still possible for these applications to share the application-independent information of their users and groups.
  • Erik Wilde, Multi-User Multimedia Editing with the MultimETH System, TIK Report 18, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 1994. (available as abstract, PostScript, and PDF)
    Abstract: Multi-user multimedia editing ought to be supported by different means. Besides the technical means required for editing a given document by several users simultaneously, there is also a demand for communication mechanisms which are able to support the synchronization of the users. The system presented in this paper not only offers multi-user multimedia editing capabilities but also provides a shared workspace and an environment for communication both via terminals and via telephone. The shared workspace is a concept which allows the members of a conference to share documents and other data. Telephone communications enable members of a collaborative editing session to have conference connections and to dynamically form subgroups.
  • Erik Wilde, Supporting CSCW Applications with an Efficient Shared Information Space, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, February 1994. (available as abstract, PostScript, and PDF)
    Abstract: Today, most CSCW systems are built on top of standard operating systems, and only a few frameworks for generic support of CSCW applications exist. These platforms mostly concentrate on the management of workflows and on the layer on top of them, the CSCW applications themselves. Little work has been done exploring the impact of new networks on support for CSCW. The project described in this paper focuses on providing CSCW applications with an efficient shared information space. Efficiency in this context means utilizing network technology which offers much better services than today's networks.
  • Hannes Lubich, Christoph Burkhardt and Erik Wilde, Schlussbericht zum ZBF-Projekt 224 Z (MultimETH): Bericht, Technical Report, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, September 1993. (available as PostScript and PDF)
  • Hannes Lubich, Christoph Burkhardt and Erik Wilde, Schlussbericht zum ZBF-Projekt 224 Z (MultimETH): Benutzerhandbuch, Technical Report, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, September 1993. (available as PostScript and PDF)
  • Hannes Lubich, Christoph Burkhardt and Erik Wilde, Schlussbericht zum ZBF-Projekt 224 Z (MultimETH): Implementationsbeschreibung, Technical Report, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, September 1993. (available as PostScript and PDF)
  • Erik Wilde, Konzept eines mehrbenutzerfähigen Multimedia-Editors, Technical Report, Computer Engineering and Networks Laboratory (TIK), ETH Zürich, May 1992. (available as PostScript and PDF)

Magazine Articles

Newspaper Articles

  • Erik Wilde, XML: It's only the beginning, Australian IT, 2/15/2000.
  • Erik Wilde, Hypermedia-Möglichkeiten des WWW — Neue Perspektiven, aber auch neue Probleme, Neue Zürcher Zeitung, 2/8/2000.

Online Articles

  • Erik Wilde, XInclude Processing in XSLT, xml.com, March 2007. (available as abstract)
    Abstract: Assembling the various parts of a document before processing the assembled whole is a recurring theme in document processing. XML Inclusions (XInclude) is the W3C standard created to support this scenario, but since it is a standalone specification, it must be supported by a piece of software implementing this functionality. The XInclude Processor (XIPr), written in XSLT 2.0, implements XInclude and may thus help to reduce the dependency on additional software packages if XInclude is to be used in an environment where XSLT 2.0 is used anyway. XIPr is implemented as a single XSLT 2.0 stylesheet and can be used standalone in a publishing pipeline, or as an imported module in other XSLT code for integrated XInclude processing.
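    XIPr itself is an XSLT 2.0 stylesheet; as a rough illustration of what any XInclude processor does (resolving xi:include elements into the including document), Python's standard library can perform the same kind of substitution. The in-memory loader and document contents below are invented for the example:

    ```python
    # Illustration of XInclude processing (not XIPr itself): the
    # xi:include element is replaced in place by the included content.
    import xml.etree.ElementTree as ET
    from xml.etree import ElementInclude

    # A document that pulls in another one via xi:include.
    MAIN = """<doc xmlns:xi="http://www.w3.org/2001/XInclude">
      <xi:include href="chapter.xml"/>
    </doc>"""
    CHAPTER = "<chapter>Hello</chapter>"

    def loader(href, parse, encoding=None):
        # Hypothetical loader serving the included part from memory
        # instead of fetching "chapter.xml" from disk.
        return ET.fromstring(CHAPTER)

    root = ET.fromstring(MAIN)
    ElementInclude.include(root, loader=loader)  # resolves xi:include in place
    print(ET.tostring(root, encoding="unicode"))
    ```

    After the call, the document contains the <chapter> element where the xi:include element used to be, which is the assembly step the abstract describes.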
  • Erik Wilde, A Tool for Bibliography Management and Sharing: The ShaRef Project, D-Lib Magazine, 10(9), September 2004. (available as HTML)
  • Erik Wilde, Character Repertoire Validation for XML, xml.com, January 2004. (available as abstract)
    Abstract: In this article, a small schema language for XML is presented which can be used to restrict the use of character repertoires in XML documents. It is called Character Repertoire Validation for XML (CRVX). CRVX restrictions can be based on structural components of an XML document, contexts, or a combination of both.
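    The idea of tying character-repertoire restrictions to structural components can be sketched as follows. This is not the CRVX language itself; the rule (ASCII-only content inside a hypothetical <code> element) and all names are invented for illustration:

    ```python
    # Sketch of structure-dependent repertoire checking: different parts
    # of an XML document are held to different character repertoires.
    import re
    import xml.etree.ElementTree as ET

    # Hypothetical rule: text inside <code> elements must be ASCII;
    # text elsewhere may use any character.
    ASCII = re.compile(r"^[\x00-\x7F]*$")

    def check_repertoire(elem, errors, path=""):
        path = f"{path}/{elem.tag}"
        if elem.tag == "code" and elem.text and not ASCII.match(elem.text):
            errors.append(f"non-ASCII text in {path}")
        for child in elem:
            check_repertoire(child, errors, path)

    doc = ET.fromstring("<doc><code>caf\u00e9</code><p>caf\u00e9</p></doc>")
    errs = []
    check_repertoire(doc, errs)
    print(errs)  # only the <code> content violates the rule; <p> does not
    ```

    The same text is accepted in one structural context and rejected in another, which is the kind of structure-dependent restriction the abstract describes.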
  • Erik Wilde, A Compact Syntax for XML Schema, xml.com, August 2003. (available as abstract)
    Abstract: XML Schema is a very powerful but also rather complex schema language. One of the problems when working with XML Schema is that it uses an XML syntax, which makes schemas verbose and hard to read. In this article, we describe a compact text-based syntax for XML Schema, called XML Schema Compact Syntax (XSCS), which reuses well-known syntactic constructs from DTDs, and we present a Java-based implementation for converting the compact syntax to the XML syntax and vice versa.

Presentations

University Courses Tutorials (Invited) Talks Professional Courses
2014 W3C WoT
2013 XML@ISchoolFa13
2012 DBM@ISchoolFa12
2011 PPOS@ISchoolSp11 WAIM@ISchoolSp11 XML@ISchoolFa11 Oracle Tech Talk Integrative Workshop ITNG 2011
2010 MobApp@ISchoolSp10 WAIM@ISchoolSp10 WWW@ISchoolFa10 XML@ISchoolFa10 WWW2010 ICWE2010 LDOW2010 ICWE 2010 ICSOC 2010
2009 WAIM@ISchoolSp09 WWW@ISchoolFa09 XML@ISchoolFa09 WWW2009 ICWE 2009 WWW 2009 ICWE 2009 W3C 2009
2008 Publishing@ISchoolSp08 Services@ISchoolSp08 ISD@ISchoolSp08 WWW@ISchoolFa08 XML@ISchoolFa08 WSW2008 LocWeb 2008 Web 2.0 E-Courts 2008
2007 InfoSys@ISNMWS06/07 Publishing@ISchoolSp07 SSME@ISchoolSp07 WWW@ISchoolFa07 XML@ISchoolFa07 ISD@ISchoolFa07 ISD IRI 2007 Academic Library 2.0
2006 XML@FHNWSS06 XML@ETHZSS06 XML@ISchoolFa06 Modeling@ISchoolFa06 Services@ISchoolFa06 XML Clearinghouse DBIS ETH ICT TIK4 Modeling Workshop XSLT18
2005 XML@FHASS05 XML@ETHZSS05 WWW2005 ETH World Showcase LMU BXML 2005 EINIRAS 2005 DERI
2004 XML@FHASS04 XML@ETHZSS04 XML@FHAWS04/05 JAX 20041 JAX 20042 ICETE 20041 ICETE 20042 ECOWS'04 XSW 2004 ETH World Info Lunch ETH World Explore! XML24 Schema8 XSLT15 XSLT16 XML25 Schema9 XML26 XSLT17 XML27
2003 XML@ETHZSS03 XML@FHAWS03/04 JAX 20031 JAX 20032 XML Europe 2003 FHF DGB ZGDV IUC24 SINN03 XSLT10 XML20 XSLT11 XSLT12 Schema5 XML21 XML22 Schema6 XSLT13 XML23 XSLT14 Schema7 XDBMS1
2002 XML@ETHZSS02 namics1 namics2 IBM ZRL XML 2002 MAD XML14 XSLT5 Schema1 XML15 Schema2 XSLT6 XML16 XSLT7 Schema3 XSLT8 XSLT9 XML17 XML18 XML19 Schema4
2001 WWW@ETHZSS01 SwissICT XML Europe 2001 TIK3 XSLT2 WWW14 XML8 XML9 XSLT3 XML10 WWW15 XML11 XSLT4 XML12 XML13 WWW16
2000 WWW@ETHZSS00 WWW10 XML1 XML2 WWW11 XML3 XML4 XSLT1 XML5 WWW12 XML6 WWW13 XML7
1999 WWW@ETHZSS99 Erfa-PIM TOPsoft99 WWW5 WWW6 EComm WWW7 WWW8 WWW9
1998 WWW@ETHZWS98/99 WWW3 WWW4
1997 WWW1 WWW2
1996 ECMAST 96 ICSI COST 237 TIK2 Unix7
1995 TIK1 Unix5 Unix6
1994 IPC2 Unix4
1993 MCAT 93 Unix2 Unix3 IPC1
1992 Unix1

University Courses

Invited Talks

Tutorials

Talks

Professional Courses


Last modification:
Monday, 22-Sep-2014 16:00:30 PDT