
FI-WARE Applications/Services Ecosystem and Delivery Framework

From FIWARE Forge Wiki


IMPORTANT NOTE: This page is deprecated. Please refer to the most up-to-date description of the architecture of the Applications and Services Ecosystem and Delivery Framework.



The Applications and Services Ecosystem and Delivery Framework in FI-WARE comprises a set of Generic Enablers (i.e. reusable and commonly shared functional building blocks serving a multiplicity of usage areas across various sectors) for the creation, composition, delivery, monetisation, and usage of applications and services on the Future Internet. It supports the necessary lifecycle management of services and applications from both a technical and a business perspective, the latter including business aspects such as the management of the terms and conditions associated with an offering, accounting, billing and SLAs. This enables the definition of a wide range of new business models in an agile and flexible way, along with their association with the different applications and services available in the ecosystem. A key objective of the business framework infrastructure is the capacity to monetize applications and services based on those business models, adapting the offering to users and their context, and dealing with the fact that services may have been built through the composition of components developed and provided by different parties.

The FI-WARE Apps/Services delivery framework considers the ability to access and handle services linked to processes, 'things' and contents uniformly, enabling them to be mashed up in a natural way. The framework brings the necessary composition and mash-up tools to empower users, from developers to domain experts to citizens without programming skills, to create and share, in a crowd-sourcing environment, new added-value applications and services adapted to their real needs, based on those offered by the available business frameworks. A set of multi-channel/multi-device adaptation enablers is also contributed to allow publication and delivery of the applications through different and configurable channels, including whichever social web networks may be most popular at the moment, and their access from any sort of (mobile) device.

To avoid misunderstandings we want to emphasize that FI-WARE will not build an ecosystem itself. Rather, FI-WARE will provide Generic Enablers for a core business platform, which will offer tradability, monetization, revenue sharing, payment and other important ingredients of a business ecosystem, after customization and domain-specific adaptation of USDL and the GEs, as well as some complementary components. Also, for composition and mash-up, FI-WARE "only" offers technologies brought in by the partners and not a universal answer to all composition and mash-up problems, which is clearly beyond the scope of the project.

The high-level architecture illustrated in Figure 41 is structured according to the key business roles and their relationships within the overall services delivery framework and existing IT landscapes. The technical architectures implementing these business roles and relationships are more complex and will be discussed and illustrated in more detail in the subsequent sections. The applications and services delivery framework comprises the internal key business roles: Aggregator, Broker, Gateway, and Channel Maker. Furthermore, there are the external key roles: Provider, Hoster, Premise, and Consumer.

The Provider role supports partners that hold governance and operational responsibility for services and apps through their business operations and processes. The Provider role allows services and applications to be exposed into a business network and services ecosystem so that they can be accessed without compromising the given delivery mechanisms. For example, a provider should be able to publish a service to a third party while still requiring that it run through its current hosting environment and be securely interacted with. Some providers may have services to be re-hosted onto a third-party cloud, or to be downloaded and run in the users' environment. Given that the service will be accessed by different parties, the provider needs to ensure that the service is comprehensively described so that it is properly used, especially for the benefit of consumers. Thus, for example, the description of a service has to include data about service ownership, functional descriptions, dependencies on other services, pricing, consumer rights, and penalties and obligations in terms of service performance, exceptions, information disclosure and other legal aspects. For services embedded in wider business contexts, domain-specific semantic descriptions, vertical industry standards and other documents containing service-related information (such as policy, legislation and best-practices documents) are also useful to enhance service discovery and consumer comprehension of services. Therefore, they need to be linked to service descriptions and used during service discovery and access through unstructured search techniques.

High-level architecture

The Broker role supports exposing services from diverse providers into new markets, matching consumer requests with the capabilities of services advertised through it. The Broker will provide the Business Framework, consisting of the "front-desk" delivery of services to consumers and a flexible set of "back-office" supports for the secure and trusted execution of the service. The Broker is central to the business network and is used to expose services from providers, to be onboarded and delivered through the Broker's service delivery functionality (e.g. run-time service access environment, billing/invoicing, metering). The Provider interacts with the Broker to publish and register service details so that the services can be accessed through the Broker. Services consumed by end users or applications, as well as services offered in a business network through intermediaries, can alike be published in the Broker. Third parties can extend and repurpose services through the Broker, by using its functionality to discover, contract and make use of services through design-time tooling (see subsequent sections for further details). Ultimately, service delivery is managed through the Broker when services are accessed at run-time by end users, applications, and business processes. In short, the Broker provides a monetization infrastructure including directory and delivery resources for service access.

Although services are accessed through a broker, the execution of their core parts resides elsewhere in a hosted environment. Certain types of services and apps could be re-hosted from their on-premise environments to cloud-based, on-demand environments to attract new users at much lower cost, without the overheads of requiring access to large application backends.

The Hoster role allows representing the different cloud hosting providers involved as part of the provisioning chain of an application in a business network. An application/service can be deployed onto a specific cloud using the Hoster's self-service interface. By providing a generic interface for Hosters to report Usage Accounting Records, FI-WARE opens up the possibility to access several cloud providers and implement interesting revenue share models between Providers and Hosters. By defining a standard self-service interface, in turn, FI-WARE opens up the possibility to migrate among alternative FI-WARE Cloud Instance Providers, which would play the Hoster role, avoiding "lock-in". Work in the Apps and Services Delivery Framework chapter will closely interact with work in the Cloud Hosting chapter in order to address the Hoster interface topics (Usage Accounting Records reporting and self-service interfaces). Cloud services are highly commoditized with slim margins and are subject to business volatility. Cloud services exposed to a business network should be advertised through the Broker. When a service needs to be hosted, the Broker can help to match its hosting needs (e.g. platform services, operating systems) with cloud services (advertised through the Broker). The Broker performs the matching and lists candidate cloud services for a user (a provider requiring hosting as part of exposing a service offer in a business network, or a consumer negotiating hosting options/costs when a service is ordered). Once hosted, the cloud service may be monitored for performance and reliability. FI-WARE Cloud Instance Providers playing the role of Hosters offer business network partners the possibility to shift cloud providers efficiently, avoiding being locked in to a concrete Cloud Hosting Provider.

The Aggregator role supports domain specialists and third-parties in aggregating services and apps for new and unforeseen opportunities and needs. Application services may be integrated into corporate solutions, creating greater value for its users not otherwise available in their applications, or they could be aggregated as value-added, reusable services made available for wider use by business network users. In either case, the Aggregator provides the dedicated tooling for aggregating services at different levels - UI, service operation, business process or business object levels.

Aggregators are similar to Providers; however, there is an important difference. Aggregators do not operate all the services that they aggregate. Rather, they obtain custodial rights from providers to repurpose services, subject to usage context, cost, trust and other factors. Although new aggregated services are created, the constituent services execute within their existing environments and arrangements: they continue to be accessed through the Broker based on a delivery model (as discussed above). The delivery framework offers service-level agreement support through its different roles so that providers and aggregators can understand the constraints and risks when provisioning applications/services in different situations, including the aggregation of services.

The Aggregator provides design-time and run-time tooling functionality. It interacts with the Broker for discovery of services to be aggregated and for publishing aggregated services for use by the business network partners.

The Gateway role supports Providers and Aggregators in selecting a choice of solutions that may provide interoperability, as a service, for their applications. This can include design-time business message mapping as well as run-time message store-forward and message translation from required to proprietary interfaces of applications. This is beneficial when services need to be exposed on a business network so that they can be accessed by different parties. When a provider exposes a service onto a business network, different service versions can be created that have interfaces allowing interactions with different message standards of the partners that are allowed to interact with the service.

The gateway services are advertised through the Broker, allowing providers and aggregators to search for candidate gateway services for interface adaptation to particular message standards. Key differentiators are the different pricing models, different communities engaging in their hubs, and different quality of services. The gateway services may address design time mappings as well as run-time adaptation.

The Channel Maker role provides support for creating outlets through which services are consumed. Channels, in a broad sense, are resources, such as Web sites/portals, social networks, mobile channels and work centres, through which application/services are accessed. The mode of access is governed by technical channels like the Web, mobile, and voice response.

The notion of channelling has obvious resonance with service brokerage. Virtually all the prominent examples of Web-based brokers like iTunes, eBay and Amazon expose services directly for consumption and can also be seen as channels. The service and apps delivery framework’s separate designations of the service channel and the service broker addresses a further level of flexibility for market penetration of services: service channels are purely points of service consumption while brokers are points of accessing services situated between channels and hosting environments where the core parts of service reside. This separation allows different service UIs and channels to be created, outside the capacity of those provided by brokers. In fact, different channels can be created for services registered through the same broker. The delivery framework, in other words, allows for consolidation of service discovery and access through the Broker and independent consumption points through different channels enabled through the Channel Maker. This is especially useful for mainstream verticals like the public sector dedicating whole-of-government resources for service discovery and access, but requiring a variety of channels for its different and emerging audiences.

The creation of a channel involves selection of services consumed through a channel, constraining which service operations are to be used, what business constraints apply (e.g. availability constraints), and how the output of an operation should be displayed (forms template) in the channel. The Channel Maker interacts with the Broker for discovery of services during the process of creating or updating channel specifications as well as for storing channel specifications and channelled service constraints back in the Broker.
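To make this concrete, a channel specification of the kind just described could be captured as a simple data structure. The following Python sketch is purely illustrative; the class and field names are assumptions, not part of any FI-WARE specification:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a channel specification: which services a channel
# exposes, which operations are permitted, what business constraints apply,
# and how output is displayed. All names here are illustrative assumptions.
@dataclass
class ChannelSpec:
    channel_id: str                        # e.g. a portal, mobile app or social network outlet
    services: list                         # services selected for consumption via this channel
    allowed_operations: dict = field(default_factory=dict)    # service -> permitted operations
    business_constraints: dict = field(default_factory=dict)  # e.g. availability windows
    forms_template: str = ""               # how operation output is displayed in the channel

# Example: a mobile channel exposing only the read operations of a look-up service
spec = ChannelSpec(
    channel_id="mobile-demographics",
    services=["demography-lookup"],
    allowed_operations={"demography-lookup": ["query", "list_regions"]},
    business_constraints={"availability": "08:00-20:00"},
    forms_template="mobile-compact",
)
```

A specification like this would be stored back in the Broker, as described above, alongside the channelled service constraints.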

The Consumer role completes the service supply chain, effectively fostered by the delivery framework. Through the Consumer, parties can manage the “last mile” integration where application and services are consumed through different environments. This involves the allocation of resources consuming services to consumer environments in which they operate and the interfacing of consumer environments so that the services required can be accessed at run-time in a secure and reliable way.

As discussed above, the resources that consume services are either explicit channels or applications that have services integrated (“hard-wired” into code) or accessible (through a discover/access interface). The Channel Maker and Aggregator are used for supporting the processes of integration of channels and applications, respectively. Since the allocation of applications in organizations is a specialized consideration controlled through administration systems, the Consumer is used mostly for allocating channels in consumer environments (inside organizations or wider availability on the Web).

The Consumer supports consumer environments so that the services they require are integrated with the Broker through inbound and outbound run-time interactions. Recall that the Broker allows services to be accessed through different delivery models. Interfaces are required on remote consumer environments so that applications running in those environments have well-defined operations that allow interactions with the Broker. Channels can be discovered through the Broker and allocated through the Consumer for usage by designated users/groups in the business network.

In the following sections we describe the architectures realizing the introduced roles in more detail and identify generic enablers. The sections are organized in the following way:

  • The business framework infrastructure realizes the Broker functionality
  • Composition and Mashup infrastructure covers the Aggregator and Gateway functionality
  • The Channel Maker is addressed by Multi-Service Multi-device adaptation

USDL Service Descriptions

The Unified Service Description Language (USDL) is a platform-neutral language for describing services. It was consolidated from SAP Research projects concerning services-related research as an enabler for wide leverage of services on the Internet. With the rise of commoditized, on-demand services, the stage is set for the acceleration of and access to services on an Internet scale. Its development has been supported by major investments in publicly co-funded projects under the Internet of Services theme, in which services from various domains including cloud computing, service marketplaces and business networks have been investigated for access, repurposing and trading in large settings (e.g. FAST, RESERVOIR, MASTER, ServFace, SHAPE, SLA@SOI, SOA4ALL), as well as by the Australian Smart Services CRC.

1- http://fast-fp7project.morfeo-project.org/

2- http://www.reservoir-fp7.eu/

3- http://www.master-fp7.eu/


5- http://www.shape-project.eu/

6- http://sla-at-soi.eu/

7- http://www.soa4all.eu/

8- http://www.smartservicescrc.com.au/

The kinds of services targeted for coverage through USDL include: purely human/professional (e.g. project management and consultancy), transactional (e.g. purchase order requisition), informational (e.g. spatial and demography look-ups), software component (e.g. software widgets for download), digital media (e.g. video & audio clips), platform (e.g. middleware services such as message store-forward), security and infrastructure (e.g. CPU and storage services).

Users of USDL

  • Service providers describe all aspects of the service from their business point of view.
  • Brokers search and use USDL information for filtering, aggregation and bundling of services.
  • Consumers search and read information from USDL descriptions indirectly via the marketplace user interface.
  • Shop owners specify their offerings on the marketplace.
  • The marketplace owner compares service offerings.

A generic service description language for domains as diverse and complex as banking/financials, healthcare, manufacturing and supply chains is difficult to use and therefore not sufficient on its own. First of all, not all aspects of USDL apply to all domains. Rather, USDL needs to be configured for the particular needs of applications, where some concepts are removed or adapted while new and unforeseen ones are introduced. A particular consideration here is allowing specialized, domain-specific classifications, such as those available through vertical industry standards, to be leveraged through USDL. In addition, the way in which USDL is applied for deployment considerations, e.g. the way lifecycle versioning applies, needs to be managed without compromising the fundamental concepts of USDL. In other words, USDL needs to be applied through a framework which allows separation of concerns for how it is applied and tailored to concrete applications. This need has led to the USDL framework, where the concepts of the USDL meta-model as a core are specialized through the USDL Application meta-model. A non-normative specialization of the USDL meta-model within the USDL framework is provided to illustrate how a service directory of a specific Service Delivery Framework (proposed by SAP Research) can be conceptualized through USDL. In this way, insight is available into an application of USDL.

To make FI applications and services more widely available for such composition and consumption, a uniform standardized way of describing and referencing them is required. A variety of service description efforts has been proposed in the past. However, many of these approaches (e.g. UDDI, WSMO, or OWL-S) only prescribe tiny schemata and leave the modelling of service description concepts such as a generic schema for defining a price model or licenses to the service developer.
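As an illustration of why a thin schema is not enough, the following sketch shows the kind of business-level information (ownership, pricing, legal terms, SLAs) a comprehensive description in the spirit of USDL carries beyond a purely technical interface definition. All field names and values are invented for the example and do not reflect the actual USDL vocabulary:

```python
# Hypothetical sketch of a business-level service description. The schema
# below is an illustrative assumption, not the USDL vocabulary itself.
service_description = {
    "ownership": {"provider": "ACME Services Ltd."},
    "functional": {"operations": ["getQuote", "placeOrder"]},
    "dependencies": ["payment-gateway-service"],
    "pricing": {"model": "pay-per-use", "price_per_call": 0.02, "currency": "EUR"},
    "legal": {
        "consumer_rights": "refund on SLA breach",
        "information_disclosure": "no resale of consumer data",
    },
    "sla": {"availability": "99.5%", "penalty": "10% monthly fee credit"},
}

def pricing_summary(desc):
    """Render the price-model portion of a description for a marketplace listing."""
    p = desc["pricing"]
    return f'{p["model"]}: {p["price_per_call"]} {p["currency"]} per call'
```

A marketplace or broker could render listings from such descriptions, while approaches that prescribe only a minimal schema would leave every one of these concepts to the individual service developer.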

Roles of participants in the Internet of Services Ecosystem

Generic Enablers of the Business Framework

The following figure illustrates the high-level architecture of the business framework infrastructure realizing the Broker. The architecture identifies the following core components of the business framework: Marketplace, Shop, Repository and Registry, Business Elements and Models Provisioning System (BE&BM Provisioning), Revenue Sharing Engine, and SLA Management. Furthermore, it shows the core relationships to external parties and systems necessary to make the business framework infrastructure operational.

High-level architecture of the Business Framework*

(*) - The red question marks indicate issues discussed in the question marks section

The set of Generic Enablers (GE) identified are:

  • Repository: The service repository is a place to store service descriptions or parts of them. It provides a location for storage (central, or distributed and replicated), for reference and/or safety. The use of a repository is required in order to prepare service descriptions to appear at a store, marketplace and other components of the business framework.
  • Registry: The registry is a universal directory of information used for the maintenance, administration, deployment and retrieval of services in the service delivery framework environments. Existing (running) service endpoints as well as the information needed to create an actual service instance and endpoint are registered. The Registry links service descriptions in the Repository with technical information about instances available in the platform. Similar to a UDDI registry for web services, it is the place to find information for technical integration. Only if a service endpoint is registered can it actually be used for service composition and coordination by the FI-WARE platform.
  • Marketplace and Store: We differentiate the marketplace from a store in our architecture. While a store is owned by a store owner who has full control over a specific (limited) service/app portfolio and its offerings, a marketplace is a platform for many stores to place their offerings before a broader audience, and for consumers to search and compare services and find the store where to buy. The final business transaction (buying) is done at the store, and the whole back-office process is handled by the store. There are existing Internet sales platforms that combine marketplace and store functionality. Conceptually, however, the distinction is useful in order to simplify the architecture and achieve a better separation of concerns. Due to the large variety of already existing stores and offered functionalities, the store is considered an external component to be integrated with the business framework infrastructure. The main focus will be on developing a secure marketplace as a Generic Enabler and providing interfaces to the store.
  • Business Elements and Models Provisioning System: The aim of the BE&BM Provisioning System is the monetization of services, applications, and their compositions/aggregations. A flexible way is needed to define the manner in which services and applications can be sold and delivered to the final customers; this can be summarized as the business model definition. While the published service description represents the public view of the business model offered to the customer, the business model defines the way in which customers pay for applications and services and the way in which the income is to be split among the involved parties. Once the business model is defined, these details need to be provisioned in the rating/charging/billing systems.
  • Revenue Settlement and Sharing System: In the Future Internet there is a need to manage in a common way how to distribute the revenues produced by a user's charges for the applications and services consumed. When a consumer buys/contracts an application or service, he pays for its usage. This charge can be distributed and split among the different actors involved (for instance, the store or marketplace owner earns money, and mash-ups have to split the money among their constituents). There will be a common pattern for service delivery in service-oriented environments: independent of service type, composite services based on the aggregation of multiple atomic (from the viewpoint of composition) services are expected to play an important role in applications and services ecosystems. Beyond the complexities of the management of composite services (design, provisioning, etc.), there is a complex issue to solve when dealing with the business aspects: both the composite and the atomic services must be accounted, rated and charged according to their business models, and each of the service providers must receive their corresponding payment. The Revenue Settlement and Sharing System serves the purpose of splitting the charged amounts and revenues among the different service providers.
  • SLA Management: The management of Service Level Agreements (SLAs) will be an essential aspect of service delivery in the Future Internet. In a competitive service marketplace, potential customers will not be looking for "a" service, but for "the best" service at the "best price". That is, the quality of services (QoS), such as their performance, economic and security characteristics, is just as important in the marketplace as their functional properties. Providers who can offer hard QoS guarantees will have the competitive edge over those who promote services as mere 'functional units'. SLAs provide these hard guarantees: they are legally binding contracts which specify not just that the provider will deliver some service, but that this service will also, say, be delivered on time, at a given price, and with money back if the pledge is broken. The cost of this increased quality assurance, however, is increased complexity. A comprehensive and systematic approach to SLA management is required to ensure this complexity is handled effectively, in a cohesive fashion, throughout the SLA life-cycle.

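The revenue settlement described above can be sketched minimally as splitting a charged amount among the involved parties according to agreed shares. This is an illustrative sketch only; the split policy and the percentages are invented for the example:

```python
# Minimal sketch of revenue settlement for a composite service: a consumer's
# charge is split among the involved parties according to agreed shares.
# The shares below are invented for illustration, not a FI-WARE policy.
def settle_revenue(charged_amount, shares):
    """Split a charged amount among parties; shares must sum to 1.0."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {party: round(charged_amount * share, 2)
            for party, share in shares.items()}

# Example: a 10 EUR charge for a composite service is split among the
# marketplace owner and the providers of the two atomic services.
payout = settle_revenue(10.00, {
    "marketplace_owner": 0.20,
    "atomic_service_a": 0.50,
    "atomic_service_b": 0.30,
})
```

In the actual Business Framework, the shares would come from the business model provisioned through the BE&BM Provisioning System, and the charges from the rating/charging/billing systems.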

Target usage

The Repository is a place to store service models, especially service descriptions but also other models required by components of the overall delivery framework (e.g. technical models for service composition and mashup). The repository provides a common location for storage (centrally or distributed and replicated), reference and/or safety.

The use of a repository is required in order for service descriptions to appear at the marketplace or in other tools that refer to a number of central repositories for information relevant to the interoperation of the enablers and roles within the FI-WARE platform. The repository contains published descriptions which can be utilized by any component, subject to the privacy and authorization constraints imposed by the business models. Usually a repository is under the control of an authority and keeps track of versions, authenticity and publication dates.

User roles

  • The Provider creates services and has an original description covering basic service information as well as technical information. The Provider needs to upload and publish service descriptions in the repository in order to make them available to other components of the platform, such as Shops/Stores, Aggregators, etc.
  • The Aggregator can use, for example, technical and service-level information existing in the repository for the purpose of creating composite services or mashups from existing services. The Aggregator needs information about the functional and technical interfaces of a service in order to provide an implementation. Service descriptions for a newly created composite service can in turn be uploaded and published to the repository.
  • The Broker needs all kinds of business-relevant descriptions of services, such as general descriptions, business partners, service levels, and pricing, to be presented in the shop/store. Technical information can also be required, at a level sufficient to compare services on behalf of the consumer.
  • The Channel Maker needs detailed information about the channel to ensure proper channel creation or selection. Furthermore, a channel may require embedding or wrapping the service so that it can be accessed by the user through the specific channel. Various channels and devices such as the Web (browser), Android and iOS, but also global as well as local social networking and community platforms such as Facebook, LinkedIn, MySpace, Xing and KWICK!, might be supported.
  • The Hoster requires information on service-level descriptions, deployment and hosting platform requirements to provide the necessary infrastructure in a reliable and scalable way.
  • The Gateway will use information about technical interfaces to provide data, protocol and process mediation services. The gateway also provides services for mediation towards premise systems outside of the FI-Ware platform.

The repository also provides a shared storage for all metadata related to application components, that is, information related to the description of an application component, its associated business model, user feedback, technical information and other relevant declarative/descriptive information. To make this possible, the USDL model will be used and extended in order to specify the initial meta-information, both technical and business/market related, empowering the acquisition and integration of new application components from external sources.

The repository will allow managing and sharing all information relevant to the applications and services ecosystem for the whole platform. This includes the provision of relevant semantic information to the FI-WARE Data/Context Management, in order to be exploitable by the whole FI-WARE platform to improve and enrich the platform's recommendation abilities.

There can be many repositories. A repository can be operated by the service provider (his own web presence), the market place owner as well as any other stakeholder.

Users of the repository have to cope with multiple distributed instances based on different implementation technologies. Therefore the API and format specifications need to be defined clearly.

Hence, the developers, domain experts, and users can use different composition tools interacting with one or multiple repositories to create compositions while at the same time taking into account a multitude of different (business) aspects, such as participating parties, ownership and licenses, pricing, service level constraints, and technical implementation.

GE description

A suitable API for reading, filtering, and aggregating service information from the repository, as well as for maintaining service descriptions, will be made available for the interaction of other tools and components with the repository, thus allowing a tight integration. Figure 45 shows the interaction of some components and tools with the service repository. One important source of service descriptions in the repository is the USDL authoring tool. In its different versions it allows the various stakeholders to create service descriptions, or parts of a description, with respect to certain aspects. The Authoring Tool can use the Repository API built into the tool to retrieve, write and publish service descriptions on the repository. The USDL Crawler can find service descriptions (in serialized form such as XML/RDF or RDFa) on the Web and import them into the repository by using the writing functionality of the Repository API. A special Web-based Repository Management application, for example, can be used to organize and maintain information in the repository.

Repository for service descriptions

Other components using the repository are components from the Aggregator, Channel Maker, Broker, Gateway, Provider, and Hoster (see Figure 41).
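To illustrate the kind of read, filter and write interactions such a Repository API could offer to these components, here is a hypothetical client sketch. The endpoint layout (`/descriptions/<id>`) and method names are assumptions for illustration, and an in-memory dictionary stands in for the remote repository:

```python
# Hypothetical sketch of a Repository API client. The endpoint layout and
# method names are illustrative assumptions, not the actual FI-WARE API.
class RepositoryClient:
    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")
        self._store = {}   # in-memory stand-in for the remote repository

    def publish(self, service_id, description):
        """Write/publish a service description (as the Authoring Tool or Crawler would)."""
        self._store[service_id] = description
        return f"{self.base_url}/descriptions/{service_id}"

    def retrieve(self, service_id):
        """Read a single service description back."""
        return self._store.get(service_id)

    def filter(self, predicate):
        """Filter descriptions, e.g. by pricing model or provider."""
        return {sid: d for sid, d in self._store.items() if predicate(d)}

# Example: publish one description, then filter by pricing model.
repo = RepositoryClient("https://repository.example.org")
uri = repo.publish("svc-42", {"provider": "ACME", "pricing": "pay-per-use"})
pay_per_use = repo.filter(lambda d: d.get("pricing") == "pay-per-use")
```

Because there can be many distributed repository instances on different implementation technologies, a clearly defined API and format specification of this kind is what lets the Aggregator, Broker and other roles interoperate with any of them.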


  • Store service model descriptions, e.g. expressed in USDL.
  • Search and filter to access context-specific information.
  • Retrieve specific service description elements.
  • Import and export service descriptions for transport from and into other systems.
  • Organizational management.
  • Maintain consistency and constraints within the repository.
  • Provide version management and version tracking.
  • Authorization and access control, realized by the FI-WARE framework services.
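The capabilities listed above (storing versioned descriptions, filtering, retrieval) can be sketched as a minimal in-memory repository. This is an illustrative assumption only; the class and method names below are invented for the example and are not the actual FI-WARE Repository API, and descriptions are plain dicts standing in for serialized USDL documents.

```python
import copy

class ServiceRepository:
    """Minimal in-memory sketch of a service-description repository
    (hypothetical names; not the actual FI-WARE Repository API)."""

    def __init__(self):
        self._store = {}  # service id -> list of versions (dicts)

    def put(self, service_id, description):
        """Store a new version of a USDL-like description (a plain dict here)."""
        self._store.setdefault(service_id, []).append(copy.deepcopy(description))

    def get(self, service_id, version=-1):
        """Retrieve a specific version (default: the latest)."""
        return self._store[service_id][version]

    def versions(self, service_id):
        """Version tracking: how many revisions exist."""
        return len(self._store[service_id])

    def filter(self, **criteria):
        """Search latest versions whose fields match all given criteria."""
        return [sid for sid, revs in self._store.items()
                if all(revs[-1].get(k) == v for k, v in criteria.items())]

repo = ServiceRepository()
repo.put("weather", {"provider": "ACME", "interface": "REST"})
repo.put("weather", {"provider": "ACME", "interface": "SOAP"})  # new revision
repo.put("traffic", {"provider": "CityCo", "interface": "REST"})

print(repo.versions("weather"))       # 2
print(repo.filter(interface="REST"))  # ['traffic'] (weather's latest is SOAP)
```

A real implementation would sit on top of an XML or RDF database, but the contract toward composition tools would be the same: write, retrieve by version, and filter.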

Critical product attributes

  • Flexible object model covering various models.
  • Allows for extensions and variants of existing models.
  • Scalability: the repository must be able to store a huge number of models.
  • Optionally distributed architecture.
  • Based on Internet standards.
  • Easy on-boarding (e.g. through Web-based access and simple registration).
  • Web-API for integration into other tools/applications.

Existing products

There are various approaches to metadata repositories depending on the underlying information model. Most prominent are the relational data model utilized by relational databases, the XML information model with XML databases like eXist, and the RDF graph model with RDF repositories like iServe. However, the databases are only the technical foundation for the model repository. One important issue for a repository of service descriptions is to ensure the consistency of metadata. Data stored in the repository and referenced by applications and platform components needs to be kept consistent in order to ensure reliable operation of the platform. Within the previous projects TEXO and Premium Services, various implementations of a service repository based on an XML serialization of the USDL eCore model were developed and used. Within FI-WARE we will consider Linked Data representations of USDL in order to cope with various extensions and variants of USDL. The semantic metadata repository enablers provided by FI-WARE can be the basis of a Linked Data version of the model repository.


Target usage

The Registry acts as a universal directory of information used for the maintenance, administration, deployment and retrieval of services. Existing (running) service endpoints as well as information to create an actual service instance and endpoint are registered.

User roles

  • Provider uses the registry to discover actual service endpoints at runtime.
  • Platform operator provides deployment information for services.
  • Hoster updates the actual address and access information of service endpoints.
  • Service provider provides deployment options.

GE description

The Registry links service descriptions (e.g. USDL descriptions in the repository) with technical runtime information. Similar to a UDDI registry for web services, it is the place to find information for technical integration. Only if a service endpoint is registered can it actually be used for service composition and coordination by the FI-WARE platform.

The registry maintains the master data that is needed to ensure the proper (inter-)operation of the platform and its components.


  • Create/read/update/delete entries
  • Searching and querying entries
  • Management (authorization, logging, …)
  • Locating services (find service endpoints)
  • Description of deployment according to the technical models
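The registry functions above reduce to a small contract: register an endpoint, keep it up to date, and locate it at runtime. The sketch below is an illustrative assumption with invented names, not the FI-WARE Registry API; it only shows the shape of the create/update/locate/query operations.

```python
class ServiceRegistry:
    """Illustrative sketch linking service-description ids to technical
    runtime endpoints (hypothetical names, not the FI-WARE Registry API)."""

    def __init__(self):
        self._entries = {}  # service id -> {"endpoint": ..., "deployment": ...}

    def register(self, service_id, endpoint, deployment=None):
        """Create an entry with endpoint and deployment description."""
        self._entries[service_id] = {"endpoint": endpoint,
                                     "deployment": deployment or {}}

    def update_endpoint(self, service_id, endpoint):
        """E.g. the Hoster role updates the actual address of an endpoint."""
        self._entries[service_id]["endpoint"] = endpoint

    def locate(self, service_id):
        """Runtime lookup; unregistered services cannot be composed."""
        entry = self._entries.get(service_id)
        return entry["endpoint"] if entry else None

    def query(self, predicate):
        """Search entries, e.g. by deployment properties."""
        return [sid for sid, e in self._entries.items() if predicate(e)]

reg = ServiceRegistry()
reg.register("weather", "https://api.example.org/weather", {"protocol": "REST"})
reg.update_endpoint("weather", "https://api2.example.org/weather")
print(reg.locate("weather"))  # the updated endpoint
print(reg.locate("unknown"))  # None -> not usable in a composition
```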

Critical product attributes

  • High scalability to support a large number of active services and users.
  • High availability to ensure mission-critical real-time business processes.
  • Easy on-boarding of new members and services.
  • Web-API for integration into other tools/applications

Existing products

Today the UDDI registry covers some of the functionality of the Model Repository, but it has a limited scope and is not used in a larger Web context. LDAP is used as a yellow-pages service for maintaining organizational data or technical directories.


Target usage

Internet-based business networks require a marketplace and stores where people can offer and trade services like goods, and finally combine them into value-added services. On the marketplace one can quickly find and compare services, enabling participants to engage in an industry ecosystem better than before. Services become tradable goods that can be offered and acquired on Internet-based marketplaces. Besides automated Internet services, this also applies to services that are provided by individuals. Partner companies can combine existing services into new services, whereby new business models emerge and the value-added chain is extended.

Given the multitude of apps and services that will be available on the Future Internet, providing efficient and seamless capabilities to locate those services and their providers will become key to establishing service and app stores. Besides well-known existing commercial application stores like the Apple App Store, Google Android Market, and Nokia Ovi, there are first efforts to establish open service and app marketplaces, e.g. the U.S. Government's Apps.Gov repository and Computer Associates' Cloud Commons Marketplace. While these marketplaces already contain a considerable number of services, they are currently at a premature stage, offering little more than a directory service. FI-WARE will fill this gap by defining generic enablers for marketplaces and providing reference implementations for them.

User roles

  • Service provider will place offers on the marketplace or in a service/app store.
  • Consumer can search, browse and compare offers.
  • Repository will be used to obtain service descriptions.
  • Registry will be used to register stores, providers, marketplaces, …
  • Service store will participate in a marketplace and publish offerings.
  • Channel Maker will give consumers access to the marketplace.

GE description

We differentiate the service marketplace from a service store in our architecture. While a store is owned by a store owner who has full control over its specific (limited) service portfolio and offerings, a marketplace is a platform for many stores to present their offerings to a broader audience, and for consumers to search and compare services and find the store where to buy. The final business transaction (buying) is done at the store, and the whole back-office process is handled by the store. There are existing Internet sales platforms that actually combine marketplace and store functionality. However, the distinction is conceptually useful in order to simplify the architecture and achieve a better separation of concerns.

Service Marketplace and Store to Consumer

The figure above depicts the interaction of the marketplace and store to bring their services to the consumer via different channels. The marketplace, for instance, can use a Web channel accessible with a standard Web browser, whereas the store delivers services to an Android device using native applications. The marketplace generic enabler does not have a single user interface; rather, it enables marketplace functionality (services) to be offered through different channels.

The following figure shows the interaction of the marketplace with the repository, registry and store. There might be multiple instances of all components. A marketplace for instance can use multiple repositories and registries as a source and can have a large number of stores offering their services. Both, marketplace and store are using the repository and registry to retrieve and maintain service descriptions.

As a special value-added tool for providers and aggregators, a pricing simulator can be offered on the marketplace. The pricing simulator is a decision support system for strategic pricing management. The aim is to support complex pricing decisions that take both inter-temporal and strategic dependencies into consideration by providing a comprehensive market model representation. A tool to tackle complex strategic pricing decisions has to be capable of taking into account the competitive landscape and its development over time. The cornerstone of such a tool is the realization of a stochastic multi-attribute utility model (probably within an agent-based simulation environment), which can subsequently be fed by either the fitted part-worths of a conjoint study or the relative quality and price scores of a customer value analysis. The result of the tool is a forecast of how different initial price strategies may unfold in the market.
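The idea of a stochastic multi-attribute utility model can be illustrated with a deliberately small simulation. Everything below is an assumption for illustration: the attribute set (quality, price), the part-worth distributions, the noise term, and the offer data are invented, and the sketch ignores the inter-temporal dimension of the real simulator.

```python
import random

def simulate_shares(offers, n_customers=2000, seed=42):
    """Toy stochastic multi-attribute utility simulation (illustrative
    sketch, not the actual pricing-simulator design). Each simulated
    customer draws random part-worths for quality and price sensitivity,
    plus a per-offer noise term, and picks the offer with maximal utility."""
    rng = random.Random(seed)
    wins = dict.fromkeys(offers, 0)
    for _ in range(n_customers):
        w_quality = rng.uniform(0.5, 1.5)  # taste heterogeneity
        w_price = rng.uniform(0.5, 1.5)    # price sensitivity
        best = max(offers, key=lambda name:
                   w_quality * offers[name]["quality"]
                   - w_price * offers[name]["price"]
                   + rng.gauss(0, 1.5))    # unobserved preference noise
        wins[best] += 1
    return {name: wins[name] / n_customers for name in offers}

# Forecast two initial price strategies for one service against a competitor:
offers = {"penetration": {"quality": 7, "price": 3},
          "skimming":    {"quality": 7, "price": 6},
          "competitor":  {"quality": 6, "price": 4}}
shares = simulate_shares(offers)
print(shares)  # the low-price entry strategy captures the largest share
```

In a fuller model, the part-worths would come from a fitted conjoint study rather than uniform draws, and the simulation would be iterated over time periods with competitors reacting to each other's prices.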

Marketplace and Store to Registry

The store is considered to be an important part of the business framework. However, there are many stores already in commercial use, so an implementation of a store within the business framework would provide no real value. Hence, a store is considered to be an external system that can be integrated into the business framework through the interface to the marketplace. A reference integration with an existing store is envisaged, depending on the availability of resources. The challenge is to provide an interface for marketplaces and stores to share descriptions of applications and services without the need to adapt to store-specific data formats and APIs.

The functionality listed here contains a number of mandatory features as well as a number of nice-to-have features. While searching for services, comparing services, and managing connections and interactions with shops are absolutely necessary, the other features are nice-to-have for a marketplace. In the FI-WARE context, requests for quotation, ratings, and strategic pricing support seem to offer added value.


  • Search and browse offers from different service stores.
  • Compare offers from different stores.
  • Check availability of an offering in the store.
  • Request for quotation processing / negotiation for a certain need (optional).
  • Independent trustee and clearing house.
  • Auction, bidding (optional).
  • Advertisement, campaigns (optional).
  • Rating, feedback, recommendation of stores and offerings.
  • Pricing support, price monitoring, sales cycles in the market across different stores.
  • Manage connections and interactions with service shops.
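The mandatory features (search, browse, compare, check availability across stores) can be sketched with a toy marketplace that only lists and compares offers while leaving the transaction to the store, as the architecture above prescribes. Class names, fields, and offer data are illustrative assumptions.

```python
class Store:
    """A store owns its (limited) portfolio; the marketplace only lists
    offers and refers the buyer back to the store (hypothetical sketch)."""
    def __init__(self, name, offers):
        self.name = name
        self.offers = offers  # service name -> price

    def available(self, service):
        """Availability check answered by the store itself."""
        return service in self.offers

class Marketplace:
    def __init__(self, stores):
        self.stores = stores

    def search(self, service):
        """Find and compare offers for one service across all stores."""
        hits = [(s.name, s.offers[service])
                for s in self.stores if s.available(service)]
        return sorted(hits, key=lambda h: h[1])  # cheapest first

shops = [Store("ShopA", {"geocoding": 10, "weather": 4}),
         Store("ShopB", {"geocoding": 8}),
         Store("ShopC", {"weather": 5, "traffic": 7})]
mp = Marketplace(shops)
print(mp.search("geocoding"))  # [('ShopB', 8), ('ShopA', 10)]
print(mp.search("weather"))    # [('ShopA', 4), ('ShopC', 5)]
```

Note that `Marketplace` holds no prices of its own and performs no purchase: buying would be a redirect to the winning store, keeping the separation of concerns described above.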

Critical product attributes

  • Customizable for different application domains/sectors.
  • Multi-channel access (Web-based access, mobile, …).
  • Easy on-boarding for store owners and consumers

Existing products

There is a plethora of marketplaces for various domains, so an extended list is of little use here. Prominent examples are ebay.com, craigslist, pricefalls and Amazon (Amazon is actually a mix of a store and a marketplace: on the one hand it sells products on its own, but it also lists product offers from external suppliers). In the area of services there are markets for craftsmen such as myhammer.de, as well as marketplaces for regional players. Since there is no standard with respect to offerings, services and products, and no common business or technical framework, it is quite difficult for shop owners to be present on multiple marketplaces. In the area of software applications we find a number of so-called app stores (Apple, Google, and Amazon) that are somewhat closed environments controlled by a single owner. The USDL Marketplace was developed within the THESEUS/TEXO project.

Business Models & Elements Provisioning System

Target usage

The most important question for an application or service, once it is ready to be launched in the market, is how to define the business model and the offers and prices available to customers. The aim is the monetization of those new services and applications. It is therefore necessary to have a flexible way to define the manner in which services and applications can be sold and delivered to the final customers; this can be summarized as the business model definition. The business model will define the way in which customers pay for applications and services and the way in which the income will be split among the parties (single-party and multi-party models). Once the business model is defined, it is necessary to provision these details in the rating/charging/billing systems.

User roles

  • Managers will set up the available business models and the parts of them that will be available for applications, services, parties and users.
  • Parties/providers have to set up the business model elements of their applications and services.

GE description

There is a complex issue to solve when dealing with the aggregation and composition of new services based on other ones, which affects the business aspects.

Business models & elements description:

  • Offers and price descriptions
  • Policy rules to manage prices
  • Promotions description about the offer and prices
  • Business SLA (violations and penalties)
  • Techniques regarding aggregation, composition, bundling, mash-ups, settlement and revenue sharing business models

The conceptual architecture of the BMEPS is shown in Figure 48.

Business Models & Elements Provisioning System


  • To define all the business model elements to be available for applications and services.
  • To associate business model elements with applications and services.
  • To define revenue settlement and sharing models.
  • To link revenue sharing models to applications, users and/or user groups.
  • To provision business model elements in external rating/charging systems and the RSSS.
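The definition/association/provisioning steps above can be sketched as a tiny data model. All names here (the class, the element ids, the record layout) are hypothetical; real provisioning would push these records to external rating/charging systems and the RSSS rather than return them.

```python
class BusinessModelProvisioning:
    """Illustrative sketch of a BMEPS: define business-model elements,
    associate them with services, and produce the records that would be
    provisioned to external rating/charging systems and the RSSS."""

    def __init__(self):
        self.elements = {}     # element id -> element definition
        self.assignments = {}  # service id -> list of element ids

    def define_element(self, elem_id, definition):
        """Manager role: make a business-model element available."""
        self.elements[elem_id] = definition

    def associate(self, service_id, elem_id):
        """Provider role: attach an element to an application/service."""
        self.assignments.setdefault(service_id, []).append(elem_id)

    def provision(self, service_id):
        """Records that would be pushed to the charging/RSSS systems."""
        return [{"service": service_id, **self.elements[e]}
                for e in self.assignments.get(service_id, [])]

bmeps = BusinessModelProvisioning()
bmeps.define_element("flat-9", {"type": "price", "model": "flat", "amount": 9.0})
bmeps.define_element("rs-70-30", {"type": "revenue-share",
                                  "provider": 0.7, "platform": 0.3})
bmeps.associate("weather", "flat-9")
bmeps.associate("weather", "rs-70-30")
print(len(bmeps.provision("weather")))  # 2 records to provision
```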

Relations to other components

  • Settlement and revenue sharing system
  • Developers portal

Critical product attributes

  • It must be possible to support different kinds of business models.
  • Business models and elements must be customizable.

Existing products

This kind of functionality likely exists in an ad-hoc way inside app stores (Apple, Google, and Amazon) and in open APIs from Telco initiatives, where it would be attached to rating and billing systems. Similar tools inside those systems may provide this functionality.

Revenue Settlement & Sharing System

Target usage

In the Future Internet there is a need to manage in a common way how the revenues produced by a user's charges for the applications and services consumed are distributed. When a customer buys an application or service, he pays for it. But this charge can be distributed and split among the different actors involved (for instance, the marketplace owner earns money, and for mash-ups the money has to be split among the component providers).

User roles

  • Managers will set up the available business models, of which revenue sharing models are a part.
  • Service provider will set up the revenue sharing models associated with applications and services; these models have to be loaded into the Revenue Settlement & Sharing System.
  • Developers have to know about the revenues of their applications and services.
  • Involved service/applications providers have to know about the revenues of their applications and services.

GE description

In service-oriented environments there will be a common pattern for service delivery: regardless of the type of service, there will be ever more composite services based on the aggregation of multiple atomic services. Beyond the complexities of the management of composite services (design, provision, etc.), there is a complex issue to solve when dealing with the business aspects. Both the composite and the atomic services must be accounted, rated and charged according to their business model, and each of the service providers must receive their corresponding payment.

The service composition process will end up in a value network of services (a directed tree) in which the price model of each service and its share of participation in the overall service is represented. On the other hand, depending on its business model, the business framework may play different roles in relation to the service providers. These realities lead to different scenarios in which the revenues generated by the services must be settled between the service providers:

  • If the business framework charges the user for the composite service, a settlement process must be executed in order to redistribute the incomes as in a clearing house.
  • If the business framework charges for all the services in the value network, then besides the settlement function, there could be a revenue sharing process by which a service provider might decide to share a part of its incomes with the service provider that is generating the income.

Payments, Settlement and Revenue Sharing in a Services Value Network

In this context, a service must be understood in a broad sense, that is, not only as a remote decoupled execution of some functionality, but also considering other types of services: the business framework itself, content services, advertisement services, etc.

Nowadays, there are some examples in which revenue distribution is needed. The best-known example is the Apple App Store (1), which pays a percentage of the income from an application download to its developer. Another example is Telco API usage: there are initiatives like Telefónica's BlueVia (2) or Orange Partner (3), in which application developers receive a revenue share for the usage of Telco APIs by the final users. There are also examples of this in cloud computing services, like dbFlex (4) and Rollbase (5).

1- http://store.apple.com

2- http://www.bluevia.com/

3- http://www.orangepartner.com

4- http://www.dbflex.net/

5- http://www.rollbase.com/

The following figure shows a conceptual architecture of a system for settling and sharing revenues. There are a number of different sources of revenue for a given service that will be integrated and processed according to the business model of each service and the revenue sharing policies specified for each partner. The final revenue balance will be transferred to a payment broker, which delivers the payments to each provider/developer.

Revenue Settlement & Sharing System


  • Receive business models regarding sharing and settlement, or interact with other external systems to load them.
  • Define and store the different revenue sharing models to be applied, taking into account the Applications and Services Ecosystem business models.
  • Receive, store and load call data records or charging logs covering the different sources of charges of the application and services ecosystem to the customers.
  • Create aggregated information and data to be used to distribute the revenues.
  • Store the information about the developers or users to be paid.
  • Execute and generate the daily revenue share.
  • Generate the payment file and send it to the payment broker.
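The core of the settlement pipeline above (aggregate charge records, apply each service's sharing model, produce payments for the broker) can be sketched in a few lines. The record layout and the sharing percentages below are invented for illustration, not prescribed by the architecture.

```python
from collections import defaultdict

def settle(cdrs, sharing_models):
    """Illustrative sketch of a daily settlement run: aggregate charge
    records (CDRs) per service, then split each total according to that
    service's revenue sharing model (hypothetical data layout)."""
    totals = defaultdict(float)
    for cdr in cdrs:                # one record per customer charge
        totals[cdr["service"]] += cdr["amount"]
    payments = defaultdict(float)   # payee -> amount, for the payment broker
    for service, total in totals.items():
        for payee, share in sharing_models[service].items():
            payments[payee] += total * share
    return dict(payments)

cdrs = [{"service": "weather", "amount": 10.0},
        {"service": "weather", "amount": 5.0},
        {"service": "mashup", "amount": 20.0}]
models = {"weather": {"provider-A": 0.7, "platform": 0.3},
          "mashup":  {"aggregator": 0.5, "provider-A": 0.3, "platform": 0.2}}
print(settle(cdrs, models))
# provider-A is paid both for its atomic service and for its share of the mash-up
```

Note how the same provider accumulates revenue from several nodes of the value network, which is exactly the settlement scenario described for composite services.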

Relations to other components

  • Business Models Provision System
  • Revenue sources systems (application stores, service shops, advertising, etc)
  • Payment broker
  • Developers portal

Critical product attributes

  • Revenue sharing models must be customizable.
  • Real-time simulations of revenue sharing models must be available.
  • High scalability and support for a high volume of CDRs.
  • It must be possible to process different revenue sources.
  • A report generation API to extract information.

Existing products

There exist various application stores and service ecosystems from different domains. Well-known examples are the app stores (Apple, Google, and Amazon) and the BlueVia and Orange Partner initiatives from the Telco world. Within these commercial products there exist similar systems that provide this functionality.

SLA Management

Target usage

The management of Service Level Agreements (SLAs) will be an essential aspect of service delivery in the future internet. In a competitive service market place, potential customers will not be looking for “a” service, but for “the best” service at the “best price”. That is, the quality of services (QoS) – such as their performance, economic and security characteristics - are just as important, in the market place, as their functional properties. Providers who can offer hard QoS guarantees will have the competitive edge over those who promote services as mere ‘functional units’. SLAs provide these hard guarantees: they are legally binding contracts which specify not just that the provider will deliver some service, but that this service will also, say, be delivered on time, at a given price, and with money back if the pledge is broken. The cost of this increased quality assurance, however, is increased complexity. A comprehensive and systematic approach to SLA management is required to ensure this complexity is handled effectively, in a cohesive fashion, throughout the SLA life-cycle.

User roles

Specifications will cover the following use cases:

  • Service providers design & publish SLA templates as rich descriptions of their service offers.
  • Consumers search SLA template repositories for service offers matching their functional & non-functional (QoS, economic, security) requirements.
  • Consumers and providers negotiate SLAs.
  • Brokers/Hosters (possibly third party) observe the state of service delivery mechanisms in order to detect & report (potential) violations of SLA guarantees.
  • Autonomic controllers (managers) respond to changing contingencies (e.g. violation notifications) - by making appropriate modifications to SLA assets and/or service delivery systems - to ensure business value is optimised.

GE description

SLAs have implications for the entire enterprise context (Figure 51). In particular, SLAs have:

  • Legal Impact: an SLA represents a binding legal agreement. By entering an agreement, the agreement parties commit themselves to satisfying the terms of the agreement and fulfilling its obligations. To have any significance at all, these obligations must be enforceable – by one means or another - such that failure to abide by the agreement carries real valued penalties.
  • Systems Impact: providers must ensure they have the means & resources to deliver the agreed functional capabilities within the guaranteed QoS limits. Customers must ensure that restrictions or requirements on service usage are observed. Monitors must ensure that relevant system state properties are observed, and that timely, accurate warnings and/or notifications of violation are posted.
  • Business Impact: SLAs represent revenue, investment and risk. Customers pay for the services they consume. Providers pay for the resources they exploit to deliver services. SLA guarantee violations incur penalties. The goal of SLA management is to manage SLA assets in order to optimise business value.

A prerequisite of effective SLA management is the systematic integration of knowledge at all these levels.

The impact of SLAs on the enterprise context

The SLA life-cycle (Figure 52) begins with an understanding of the service provider’s business objectives & models and their relation to available resources and service delivery capabilities. This combined knowledge informs the design of the provider’s service offer, encoded and published in the form of an SLA “template” (an SLA with open, customisable “slots” for customer specific information) to support enriched, QoS-based service discovery. Having located a suitable or merely promising template, the customer initiates negotiation with the provider, which proceeds in rounds of SLA proposals and counter-proposals until agreement is reached. If agreement is reached, the SLA is signed; resources allocated, and service delivery and (possibly third party) monitoring mechanisms commissioned. Once in effect, the SLA guarantees must be monitored for violation, and any penalties paid. If possible, steps may also be taken to assure optimal business value: service delivery and monitoring systems can be reconfigured (cf. internal service level management) and SLAs renegotiated or even terminated. For the future internet, we expect - and so wish to support – increasingly autonomic control of negotiation, service-delivery and monitoring. Finally, there are various management issues (not indicated in Figure 52) relating to the versioning of templates (offers) and archiving of SLAs, monitored data, SLA state & negotiation histories, and any other information that may prove useful to the design of future service offers.

Process view of the SLA life-cycle

To reiterate, the key point is that management of the SLA life-cycle critically, and fundamentally, depends on understanding the significance of SLAs to the whole enterprise. SLAs have legal impact, systems impact and business impact. SLA management is all about controlling this impact, throughout the entire SLA life-cycle, in order to optimise business value. To tackle this highly complex problem space, we first need to understand it. Current standards and technologies offer at best only partial solutions: tackling either one aspect in isolation, or looking at the whole but with overly restrictive assumptions. What is missing is the big picture: a comprehensive and highly integrated set of generic information & process models detailing SLA management over the entire SLA life-cycle. The FI-WARE generic enabler for SLA Management will look to develop this integrated view.

Specifically, the generic enabler will consist of a comprehensive “SLA Model” comprising three mutually specified sub-models (Figure 53):

  • SLA content model: formal conceptual, syntactic and semantic specifications of SLA content. In particular, SLA content must be clear and precise. It should be possible to get a clear indication of the terms of the SLA from only a superficial reading. But these same terms must also have a precise significance at the technical systems level – and in particular they must translate to unambiguous monitoring requirements.
  • SLA life-cycle model: formal specifications of SLA state-transitions and state semantics. For example, the precise conditions under which an SLA can be formally described as “agreed”, “terminated” or “violated”.
  • SLA management model: formal specifications of processes & mechanisms impacting SLA state, e.g. functional capabilities & abstract machines.
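The life-cycle sub-model, with its formal state transitions, can be illustrated as an explicit state machine. The state names and allowed transitions below are assumptions chosen to echo the life-cycle narrative (template, negotiation, agreement, violation, termination); they are not the formal FI-WARE specification.

```python
class SLA:
    """Sketch of an SLA life-cycle model as an explicit state machine
    (illustrative state names, not the formal specification)."""

    TRANSITIONS = {
        "template":    {"negotiating"},
        "negotiating": {"agreed", "rejected"},
        "agreed":      {"active"},
        "active":      {"violated", "terminated", "expired"},
        "violated":    {"active", "terminated"},  # penalty paid, or cancelled
    }

    def __init__(self):
        self.state = "template"
        self.history = [self.state]

    def transition(self, new_state):
        """Only formally specified transitions are legal."""
        if new_state not in self.TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

sla = SLA()
for step in ("negotiating", "agreed", "active", "violated", "active"):
    sla.transition(step)
print(sla.state)  # back to 'active' after the penalty was settled
```

The point of making the transitions explicit is exactly the one made above: precise conditions for when an SLA counts as "agreed", "violated" or "terminated", so monitors and autonomic controllers can act on unambiguous state.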

The SLA Model

The SLA Model will not be designed from scratch, but will instead be consolidated from existing solutions. In particular, we will look to extend and generalize work undertaken as part of the FP7 ICT Integrated Project SLA@SOI(1), and to consolidate these results with other notable efforts such as: WS-Agreement(2) (re: negotiation), WSLA(3) (re: content & monitoring) and SLAng(4) (re: monitoring).

1- SLA@SOI, see: http://sla-at-soi.eu

2- WS-Agreement, see: http://forge.gridforum.org/projects/graap-wg

3- WSLA, see: http://www.research.ibm.com/wsla/

4- SLAng, see: http://uclslang.sourceforge.net/index.php

In terms of the high-level business framework component architecture, there is no need for a dedicated SLA Manager component. All the SLA management processes described above can be viewed as either advanced functions of existing components or as external service dependencies: discovery is properly the province of the Marketplace; negotiation the province of the Shop; monitoring is ideally left to independent, trusted, third-parties; and the remaining information requirements are covered by USDL and Business Elements & Model Provisioning. This said, existing tools and services (e.g. from SLA@SOI) will be employed if and where applicable, and proof-of-concept and/or prototype demonstrators will be constructed for the more advanced features of SLA management. The goal of the FI-WARE generic enabler for SLA Management, however, is to develop an integrated SLA Model that can serve as the foundation for the development of robust SLA-aware applications in the future internet.


Specifications supporting:

  • SLA content authoring.
  • SLA life-cycle management.
  • SLA-based service level management (SLM).

Relations to other components

  • USDL: e.g. for SLM support
  • Business Models and Elements: for encoding business value
  • Revenue sharing
  • External (third party) services: e.g. for third-party monitoring and secure penalty payments.
  • Marketplace: for QoS-based discovery mechanisms
  • Shop: for SLA negotiation.

Critical product attributes

  • Extensibility and customizability: a modular design supporting custom application to (unforeseen) domain-specific requirements.
  • Clarity and precision of SLA content: the significance of SLA guarantees at system, business & legal levels must be immediate & intuitive and technically unambiguous.
  • Harmonised, comprehensive and integrated SLA Model specifications.

Existing products

  • WS-Agreement
  • WSLA
  • SLAng

Generic Enablers for Composition and Mashup

Recent social, economic, and technological developments have led to a new phenomenon often called servification, which is expected to become the dominating principle in the economy of the future. Wikipedia, Amazon, YouTube, the Apple App Store, Facebook and many others show the unprecedented success of Internet-based platforms in many areas, including knowledge and content delivery, social networking, and services and apps marketplaces. FI-WARE is expected to play a key role as the main technological driver bringing together cloud computing, mobile networks, Web 2.0, apps, services, data sources, and things on a broadband Internet, and enabling multi-channel consumption and mobile multi-device access. Application and Services Ecosystems able to exploit the innovative value proposition of servification to its full potential, from the technology as well as from the business perspective, are envisioned as one of the main pillars of the Future Internet.

Few applications can really become killer applications alone, but many of them could have better chances in combination with others. Support of cross-selling through composition would therefore become a highly desirable feature in Application and Services ecosystems. However, most relevant ecosystems today do not incorporate these features or do not incorporate them at the right level. FI-WARE strives to exploit the composable nature of the application and services technologies in order to support cross-selling and achieve the derived network scaling effects in multiple ways. It will enable composition either from the front-end perspective – mash-ups or the back-end perspective – composite services.

Phenomena like Wikipedia and YouTube have taught us how end consumers may become major drivers of innovation whenever suitable authoring tools, complemented by social tools that maximize their ability to exchange knowledge and gain recognition, are provided. However, while crowd-sourcing and social web technologies have developed significantly in the area of information and multimedia content, they are still immature in the application and services space. In FI-WARE, the mash-up and composition capabilities offered by the different types of supported components are expected to leverage their reusability as well as the creation of value-added apps/services not only by application and service providers but also by intermediaries and end users acting as composers or prosumers. The framework will rely on a defined set of user, provider and intermediary roles capturing the skills, capabilities and responsibilities of the actors and the relationships among them. The value network spanned by these roles and relationships defines the creation and distribution of the value-added applications from the technical perspective. As the capabilities and skills of actors are expected to range from technical experts with programming skills to domain experts without technical expertise, or even simple end users with no programming or technical skills, all kinds of usability aspects, conceptual simplification, recommendation, autonomous provisioning (incl. composition, description and deployment), as well as procurement and user guidance will be taken into consideration.

There are two main functional components that can be identified in service composition and application mashups: the aggregator and the mediator roles (Figure 54). The aggregator allows the creation, exposition and execution of composed (or mashed up) services and applications. Whenever a composed service or application is used the need might arise to use a mediator for components to properly communicate and interact.

Figure 54: High-level architecture of Aggregator and Mediator

Future convergent composition techniques require a smart integration of process know-how, heterogeneous data sources, management of things, communication services, context information, social elements and business aspects. Communication services are by nature event-driven rather than process-driven. Thus, a composition paradigm for the Future Internet needs to enable composition of business logic that is also driven by asynchronous events. The framework will support the run-time selection of Mashable Application Components (MACs) and the creation/modification of orchestration workflows based on composition logic defined at design time, to adapt to the state of the communication and the environment at run-time. The integration of Things into the composition requires that the special characteristics of IoT services are taken into account, like lower granularity, locality of execution, quality-of-information aspects, etc. Moreover, the framework allows the transparent usage of applications over many networks and protocols (HTTP/REST, Web Services, SIP/IMS) and corresponding execution platforms (both Web and native apps) via a multi-protocol/multi-device layer that adapts communication and presentation functionality.

The aggregator can be further split into a composition editing tool used for the creation of design-time compositions (described by a specific composition language), an execution environment to expose and execute the composed services, and a repository for storing the relevant information in the meantime. Before presenting the functional components of an aggregator, we briefly introduce some relevant concepts and technologies.

Front-end vs. back-end composition

Roughly speaking, an application represents a software construct that exposes a set of functions directly to the consumer via a user interface, without exposing any functionality to other software constructs. A service, on the other hand, is a software construct that provides functions for other services or applications and is accessible via a set of well-defined programming interfaces.

From the compositional perspective we can differentiate two broad categories: service composition and application mashup. The two represent composition from either the front-end perspective – composed applications (or mashups) – or the back-end perspective – composite services. The main difference is the interaction with the human end user. In the case of back-end composition, the composed service is yet another back-end service, the end user being oblivious to the composition process. In the case of front-end composition, every component will interface both other components and the end user through some kind of user interface. Thus the front-end composition (or mashup) will have a direct influence on the application's look and feel; every component will add a new user interaction feature. This, of course, will heavily influence the functionality of the components. While back-end components are created by atomizing information processing, front-end components (also referred to as widgets or gadgets) are created by atomizing information presentation and user interaction. Another difference is that the creation and execution of the front-end components will heavily depend on the available runtime environment (e.g. web browser, device OS, 2D/3D presentation platforms) and the different presentation channels they are exposed through (e.g. Facebook, Yahoo Widgets).

Composition vs. mashup

The capabilities and skills of composite service creators are expected to range from technical experts with programming skills to domain experts without technical expertise, or even simple end users with no programming or technical skills. Indeed, one of the main advantages of a component-based architecture is the democratization of the service creation process: you don't need to be a technical expert and know how a component is built in order to use it. Of course, the expressivity of the composition language will determine the amount of technical knowledge required to use it. A smart composition editor could cater for different user expertise and roles (from service creators, to resellers and finally to prosumers) by hiding complexity behind different types of building blocks, trading off flexibility for simplicity. For example, a technical expert could write the composition in a text-based format with full access to the constructs of the composition language, while an end user could use a graphical building-block construction employing only the most basic features of the language. Nevertheless, the complexity and expressivity of the composition language could be pragmatically restricted by the application area or by the support offered in the execution environment.

For the reasons previously explained, we expect back-end compositions to employ a more complex composition language and environment (notwithstanding the fact that users such as domain experts with no technical background may use only simple composition language subsets). Front-end compositions, on the other hand, will most likely use simple constructs and be suitable for manipulation by end users and presentation designers, and are more expressively referred to as mashups. In the next sections we describe common characteristics of both the front-end mashup and the back-end composition.

Data vs. service composition

Another differentiation that is sometimes made regards data vs. service composition. Data composition is used to denote a composition process where only data flows are transformed and aggregated (e.g. Yahoo Pipes, JackBe), in contrast to service composition, which entails more complex dataflow, workflow, and synchronisation mechanisms (e.g. BPEL). Nevertheless we can regard “service composition” as being a superset of “data composition”.
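The data-composition style described above can be sketched as a simple pipeline of pure transformation steps, in the spirit of tools like Yahoo Pipes. All names here are illustrative assumptions, not part of any FI-WARE API:

```python
# Hypothetical sketch of "data composition": only data flows are
# transformed and aggregated, with no workflow or synchronisation logic.

def fetch(items):
    """Stand-in for a feed source: returns raw data items."""
    return list(items)

def filter_step(items, predicate):
    return [i for i in items if predicate(i)]

def sort_step(items, key):
    return sorted(items, key=key)

def pipeline(source, *steps):
    """Apply each step to the output of the previous one (pure dataflow)."""
    data = source
    for step in steps:
        data = step(data)
    return data

feeds = fetch([{"title": "b", "score": 2}, {"title": "a", "score": 5}])
result = pipeline(
    feeds,
    lambda d: filter_step(d, lambda i: i["score"] > 1),
    lambda d: sort_step(d, lambda i: i["title"]),
)
```

A full service composition would add control flow and synchronisation on top of such a dataflow, which is why it can be seen as the superset.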

Workflow-based composition

Composite services consist of a set of component services and a definition of the control and data flow among them. The actual exchange between elements in a composition can be understood as a workflow with specific actors (the services) and a flow of actions (e.g. a data dependency – one operation needs data produced by another operation) to be executed by them towards achieving the specified goals. Typically, such a composition and the related workflow have to be created before the composite service can be executed; thereafter, the created compositions and workflows are executed unchanged, repeatedly, in the same or in different contexts.

Composition creation is the first logical phase in the service composition process. Composition creation functionality aims to support the creation phase by enabling the automated checking of dependencies between services, so that created compositions are valid and useful. General practice in workflow definition and description languages is the definition of fallback services to compensate for faulty execution. Such fallback services can clearly only be defined during the creation phase.

The explicit representation of the execution order of the composition steps in a workflow provides a clear and easy way to express chronological control dependencies. Typically, workflows can express sequential and parallel flows as well as use conditional statements. More advanced approaches can support multiple entry points, error and exception handling, and invocations of external services. Most often, workflows are defined implicitly by implementations of applications written using typical imperative programming languages. To overcome the disadvantages of such implicit workflows and to offer a more formal and programming-language-independent way of expressing them, several workflow definition languages have been proposed and standardized (BPEL4WS and BPMN 2.0 are technologies of choice for XML Web Services based workflows). Workflow scripts are executed by an orchestration engine (e.g. a BPEL execution engine executes BPEL4WS workflows). The term "orchestration" is the execution-time counterpart of "composition": composition is employed at creation time, while orchestration supervises the execution.
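The basic workflow constructs named above (sequence, parallel flow, conditional) can be sketched as a tiny interpreter over a declarative workflow structure. This is illustrative only, borrowing the ideas rather than the syntax of BPEL:

```python
# Minimal sketch: a workflow is nested tuples, executed by an
# "orchestration engine" that understands a few control constructs.
from concurrent.futures import ThreadPoolExecutor

def run(step, ctx):
    kind = step[0]
    if kind == "invoke":                      # call one service
        _, service = step
        service(ctx)
    elif kind == "sequence":                  # chronological order
        for s in step[1]:
            run(s, ctx)
    elif kind == "flow":                      # parallel branches
        with ThreadPoolExecutor() as pool:
            list(pool.map(lambda s: run(s, ctx), step[1]))
    elif kind == "if":                        # conditional statement
        _, cond, then_s, else_s = step
        run(then_s if cond(ctx) else else_s, ctx)

ctx = {"log": []}
workflow = ("sequence", [
    ("invoke", lambda c: c["log"].append("check-stock")),
    ("flow", [
        ("invoke", lambda c: c["log"].append("bill")),
        ("invoke", lambda c: c["log"].append("ship")),
    ]),
    ("if", lambda c: "bill" in c["log"],
        ("invoke", lambda c: c["log"].append("confirm")),
        ("invoke", lambda c: c["log"].append("cancel"))),
])
run(workflow, ctx)
```

The workflow structure is explicit data, so it can be stored, validated and executed unchanged repeatedly, which is exactly the property the text attributes to workflow definition languages.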

Event-based dynamic composition

Alternatively, workflows can be dynamically created or adapted during composition execution. Suitable services are identified and executed as needed based on the current context (depending on external and internal events and results from previous service invocations).

Such workflows are valid only during execution time and under particular circumstances. The dynamic creation of compositions can be achieved through systems with the capacity to solve problems using inference procedures with a traceable line of reasoning based on application-domain-specific knowledge (e.g. model-driven systems). In contrast to rules-based and case-based systems that rely on experience and observations gathered by human users, model-driven systems rely exclusively on formalised knowledge stored within the system itself.

Model-driven systems are enhanced by modelling the constraints that influence the behaviour and functionality of a system and its components. A constraint is a construct that connects two unknown or variable components and their respective attributes, defines the values the variables are allowed to have, and defines the relationship between the two values. In other words, constraints can ensure that specific components are put together in a correct fashion without having to specify any component-related rules or calculations.

The model-driven, template-based approach to composition creation is based on composition templates (composition skeletons). The skeleton includes the main parts of the business logic of the composed service, but is not complete with regard to some implementation parts. For example, certain implementation parts should invoke some kind of service (SIP, WS, EJB, etc.) or should require some data from a particular source, but neither the concrete service nor the data source is known at the moment of service development. Instead, such points in a template are marked in a special way, so that they can be found at runtime.

At runtime (and/or even at assembly or deployment time), the composition engine is invoked at those places and dynamically decides which services to invoke or which data source to use, based on constraints evaluated at that particular time. Essentially, the composition engine creates the workflow step by step during runtime, and different composition decisions can be taken depending on external events or on the return values of previously executed services. It should be noted that the selected service can itself be another skeleton built in the same way.
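The skeleton/placeholder idea can be sketched as follows: the business logic is fixed, but placeholders are bound to concrete services only at runtime by evaluating constraints against the current context. All names below are invented for illustration:

```python
# Hedged sketch of template-based late binding: a skeleton step carries
# a placeholder, resolved at runtime against a registry of services.

registry = [
    {"name": "sms_notify",   "type": "notify", "channel": "sms"},
    {"name": "email_notify", "type": "notify", "channel": "email"},
]

def resolve(placeholder, context):
    """Pick the first registered service satisfying all constraints."""
    for svc in registry:
        if svc["type"] == placeholder["type"] and \
           svc["channel"] == context["preferred_channel"]:
            return svc
    raise LookupError("no service satisfies the constraints")

skeleton = {"steps": [{"placeholder": {"type": "notify"}}]}

def execute(skeleton, context):
    bound = []
    for step in skeleton["steps"]:
        svc = resolve(step["placeholder"], context)   # late binding
        bound.append(svc["name"])
    return bound
```

Different contexts (e.g. a different preferred channel) bind the same skeleton to different concrete services, without changing the skeleton itself.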

As mentioned above, the composition engine creates workflows on the fly at runtime. Ideally, it should not deal with the protocol implementation of the invocation of different service technologies (e.g. SIP, WS, REST, RMI, etc.). These are left to the Composition Execution Agents (CEAs), which are responsible for enforcing composition decisions in a technology- and protocol-specific way. In effect, the CEAs are orchestration engines for the different service technologies, and they receive the workflow from the composition engine step by step via a uniform API. Note that there can be considerable differences between these CEAs. For example, WS entails request-response invocations in a hierarchical tree structure, while SIP deals with asynchronous events and persistent services in a chain structure.
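The uniform step API between the composition engine and the protocol-specific CEAs can be sketched like this. The engine decides *what* to invoke; each CEA knows *how* for its protocol. All class and method names are illustrative assumptions:

```python
# Sketch of the engine/CEA split: one uniform execute(step) API,
# protocol-specific enactment behind it.

class HttpCEA:
    protocol = "REST"
    def execute(self, step):                 # REST-specific enactment
        return f"GET {step['endpoint']}"

class SipCEA:
    protocol = "SIP"
    def execute(self, step):                 # SIP-specific enactment
        return f"INVITE {step['endpoint']}"

class CompositionEngine:
    def __init__(self, agents):
        self.agents = {a.protocol: a for a in agents}
    def run(self, steps):
        """Serve the workflow to CEAs step by step via one uniform API."""
        return [self.agents[s["protocol"]].execute(s) for s in steps]

engine = CompositionEngine([HttpCEA(), SipCEA()])
trace = engine.run([
    {"protocol": "REST", "endpoint": "/orders/42"},
    {"protocol": "SIP",  "endpoint": "sip:alice@example.org"},
])
```

Adding support for a new technology means registering a new CEA, with no change to the composition engine itself.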

Composition editors

Target usage

The Composition editors are Generic Enablers that help the service provider create application mashups and composed services. Editors should provide an environment to combine and configure applications and services in a graphical way.

Different editors could cater for different user expertise (from technical experts skilled in the composition language to domain experts without technical expertise, or even simple end users with no programming or technical skills) and roles (from composed service creators, to resellers and finally to prosumers) by hiding complexity behind different types of building blocks, trading off flexibility for simplicity. By prosumer we denote a consumer-side end user who cannot find a service which fits her needs and therefore modifies/creates services in an ad-hoc manner for her own consumption.

Editors should support facilities for consistency checking of interfaces and simple debugging. They should connect to the Execution Engines to allow testing, debugging, installing, executing, controlling and post-execution analysis of the composed applications.

Composition descriptions and technical service descriptions should be edited/created in the editor and stored/fetched to/from the Repository.

When creating compositions/mashups, editors might connect to the business infrastructure:

  • Marketplace to search for services
  • Shops to purchase component services for testing/deployment, and to expose composed services for purchase
  • USDL Registry to browse business information related to services

Editors could be connected to a user and identity management service for controlling access to the applications.

Descriptions of GEs

As presented in Figure 54 we have identified three different types of GEs that pertain to specific aggregation needs. These are the Application mashup editor, the Dataflow-oriented service composition editor and the Event- and constraint-based composition editor, which are detailed next.

Application mashup editor

Regarding application mashups, the FI-WARE reference architecture should offer an editor to create applications built from discrete front-end components (e.g. gadgets/widgets, apps) connected at the front-end layer. These components rely on a series of either plain or composed back-end data/services. For testing, debugging and presentation of logged runs, the editor will connect to the Mashup Execution Engine.

The editor should offer functionality for creating an application front-end as a mashup built from gadgets/widgets that rely on a series of either plain or composed back-end services. Client-side inter-gadget communication facilities should be supported, including support for optional filters (wiring). Design-time semi-assisted modelling aids, such as suggestions on relevant relationships amongst gadgets and mashups (e.g. consumed data vs. produced data), together with dynamic discovery facilities from the gadget/widget and mashup repository, will ease the mashup creator's work. For persistence/sharing purposes, an MDL (MAC description language) model of the mashup needs to be generated.
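The inter-gadget "wiring" concept above can be sketched as a small publish/subscribe broker: gadgets publish events on named channels, and wired gadgets (optionally through a filter) receive them. This mirrors the concept only, not any concrete FI-WARE API:

```python
# Illustrative sketch of client-side inter-gadget wiring with
# optional per-wire filters.

class Wiring:
    def __init__(self):
        self.channels = {}
    def wire(self, channel, handler, event_filter=None):
        self.channels.setdefault(channel, []).append((handler, event_filter))
    def publish(self, channel, event):
        for handler, flt in self.channels.get(channel, []):
            if flt is None or flt(event):
                handler(event)

received = []
wiring = Wiring()
# a "map gadget" listens for locations selected in a "list gadget",
# but only for Spanish locations (the optional filter)
wiring.wire("location-selected", received.append,
            event_filter=lambda e: e["country"] == "ES")
wiring.publish("location-selected", {"city": "Madrid", "country": "ES"})
wiring.publish("location-selected", {"city": "Nice", "country": "FR"})
```

The wiring itself (which channel feeds which gadget, with which filter) is exactly the kind of information an MDL model would persist.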

The application consumer, acting as a prosumer (be they domain experts, business professionals or end users), uses the application mashup component within the composition editor to develop mashups. The application mashup provider develops their own valuable gadgets or even mashups and offers them as building blocks. These gadgets/mashups are added to and provided via the repository.

Dataflow-oriented service composition editor

We characterize an editor for dataflow-oriented, design-time compositions, intended to support subject matter experts without programming competence. Service providers in different roles (subject matter experts, business professionals or end-user prosumers) use the editor to describe and operate services. The services are added to the repository and provided via the library. The operator of the application composer manages and controls access to the application composer and its functionality.

The dataflow-oriented application composer is modularized, with its major functionalities described next.

  • Composition studio: provides a graphical user interface to combine appropriate services, connect services and data flows, configure services, and check the consistency of the composition. It supports file transactions (open, store, rename) and display options (e.g. expand, link types, descriptions).
  • Debug/Simulation: allows step-by-step application execution. It can display a data flow and the change of the data for every single service.
  • Service Library: provides an interface to the repository to browse information about service categories, services, descriptions, data inputs and data outputs.
  • Deployment and governance: allows configuring, deploying and managing an application. Aspects to be controlled are the start and end time for service execution, the users and user groups authorized to execute the application, and the removal and renaming of an application. The deployment feature supports translation of the composition model into executable languages such as BPEL. The execution is supported by the service orchestration engine (especially BPEL features).

Important features are: (a) Extensions for design-time service composition that can be provided in separate libraries, (b) Modelling workflow, (c) Supporting complex behavioural patterns, using modelling elements such as gateways (exclusive, parallel, loops, etc.), (d) Modelling data flow, including support for complex data flow mapping, transformation and consistency checks, (e) Modelling of orthogonal (independent) composition work and data flow, (f) Design-time semi-assisted (as opposed to purely manual) modelling aids, such as dynamic binding, data flow generation, data flow mapping and data flow consistency, (g) Task expansion with matching compositions (sub-processes).

Event- and constraint-based composition editor

Next we describe the functionality of an editor that supports convergent composition techniques that include asynchronous event-driven communication services in a SOA manner.

The editor allows the creation of composed service skeletons. The skeletons provide the business logic, the data and control flow, and service placeholders. While looking similar to workflows, they are not, as the skeletons only provide placeholders for services that are going to be resolved at runtime. Moreover they may not provide a clear ordering of service execution (explicit parallelism). The order can be chosen by the execution engine at runtime depending on the specified constraints that trigger the execution (data availability, external events, etc.).

Specifications of global (valid throughout the composition) and local (valid only for that template) constraints are used to decide runtime service selection and event filtering. While the choice of relevant services can differ at runtime compared to design time, in many cases the set available at runtime is a subset of the one available at design time. Thus the editor should apply smart constraint resolution also at design time, to help the designer get an idea of which services might resolve at runtime, and to help her prepare all the relevant inputs and outputs. This includes smart automatic editing suggestions to the user (e.g. tab-completion).

Many communication-type services depend heavily on events, and first class support for events needs to be provided. External and internal events may start actions, events can be filtered, events can be thrown, and scopes can be defined on parts of the skeletons and subsequently used in event filtering.

Several services might be suitable to implement a placeholder template. These services can have different input and output parameters and the editor needs to offer the possibility to correctly map the different dataflow types, while providing an easy way to use the unified interface.
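The parameter-mapping problem just described can be sketched as follows: two candidate services implement the same placeholder but name their inputs differently, so a mapping from the unified interface to each service's own parameters is stored alongside each candidate. Everything here is invented for illustration:

```python
# Sketch of dataflow mapping for a placeholder with two candidates
# that expose different signatures behind one unified interface.

def geocode_a(address):            # candidate 1
    return {"lat": 40.4, "lon": -3.7} if address else None

def geocode_b(query_string):       # candidate 2, different parameter name
    return {"lat": 40.4, "lon": -3.7} if query_string else None

CANDIDATES = [
    {"impl": geocode_a, "map": {"location": "address"}},
    {"impl": geocode_b, "map": {"location": "query_string"}},
]

def invoke(candidate, unified_inputs):
    """Translate unified parameter names into the service's own names."""
    kwargs = {candidate["map"][k]: v for k, v in unified_inputs.items()}
    return candidate["impl"](**kwargs)
```

The caller always supplies the unified name (`location` here); whichever candidate is resolved at runtime, the mapping makes the call well-formed.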

Critical product attributes

Application mashup editor

  • Visually compose (mashup), configure, deploy, and test composite (mashup-based) applications in an easy-to-use environment.
  • Support for innovation at the service front-end, adapting it to users' actual needs.
  • Availability of a Web2.0-based (crowd-sourced) shared catalogue of combinable gadgets/widgets, mashups and services. Find easy-to-understand building blocks, descriptions and examples.
  • Easy configuration, modification and arrangement of gadgets/widgets and mashups.
  • Rely on Web standards including standards on mashup description languages.
  • Openness and extensibility
  • Ability of integration in multiple channels (e.g. social network, portal, widget)

Dataflow-oriented service composition editor

  • Compose, configure, deploy, and test applications in an easy-to-use environment for tech-savvy end users without programming competencies.
  • Easy-to-understand descriptions of atomic services and discovery in repository.
  • Easy configuration, modification and arrangement of services and applications.
  • Providing easy-to-use control and monitoring features of application execution and management.
  • Use of open web standards, which supports extensibility to integrate other standardised services (with REST or SOAP interfaces).

Event- and constraint-based composition editor

  • Use Web and Web Service standards and SOA and EDA principles.
  • Ability to interface many service technologies, especially communication-type services.
  • Designer should find easy-to-understand descriptions and examples of atomic and composed services.
  • Support for asynchronous event-driven services.
  • Support for constraint-based composition with late binding.
  • Compose, configure, deploy, and test composed applications in an easy-to-use environment.

Existing products

Regarding mashup and gadget specification, there is a draft Widgets specification published by the W3C. Software vendors (like Microsoft or Google) have defined their own widget models. Mashup platforms such as Netvibes use the Universal Widget Architecture (UWA), whilst others, such as OpenAjax, have no component model per se but rather strategies for fitting Web components together in the same mashup.

There are a large number of FLOSS and commercial (including free or community editions) service composition editors for BPMN, BPEL, etc., such as Oryx, Intalio, ActiveBPEL, Eclipse BPEL, Eclipse BPMN, JBoss jBPM, the Activiti BPMN Eclipse Plugin and Oracle BPM Studio.

An event- and constraint-based editor is implemented by the Ericsson Composition Editor.

Composition execution engines

Target usage

The Execution engine exposes and executes the composed services. The service provider/operator deploys services/mashups by fetching technical service descriptions and composition descriptions from the repository; this will most likely happen through a graphical user interface in the Composition Editor. The service provider/operator controls execution modes (start, stop, debug), and can fetch logs and tracing data, most likely through the Composition Editor GUI.

Descriptions of GEs

We can differentiate three generic enablers for execution engines presented next: front-end mashup execution engines, service orchestration engines, and event-based late-binding composition engines.

Mashup execution engine

The FI-WARE reference architecture should offer a mashup container able to execute applications built from discrete front-end components (e.g. gadgets/widgets, apps) connected at the front-end layer. At an architectural level, the concept of a mashup container relying on a well-defined platform API is the backbone of the reference architecture. This API will offer inter-gadget communication and mashup state persistence facilities. The decentralized nature of mashups demands that the Mashup execution engine coordinate gadget execution and communication within the mashup. The availability of a standardized mashup description language will help decouple the mashup engine from the registry and repository.

The functionality should ensure coordination of gadget execution and communication within the mashup, creating the communication channels (data flow) between gadgets. It should also handle deployment and execution of mashups, guarantee the persistence of mashup state, and finally generate an executable mashup from an MDL (Mashup Description Language) model.

Service orchestration engine

Orchestration describes the automated arrangement, coordination, and management of complex services. Orchestration provides an executable business process, in which multiple internal and external web services can be combined. The process flow is controlled in the execution environment. WS-BPEL (Web Services Business Process Execution Language) and BPMN 2.0 (Business Process Model and Notation) are examples of languages for the orchestration of web services.

Orchestrations based on the WS-BPEL and BPMN 2.0 languages have a) facilities to enable sending and receiving messages, b) a property-based message correlation mechanism, c) XML- and WSDL-typed variables, d) an extensible language plug-in model to allow writing expressions and queries in multiple languages (BPEL and BPMN 2.0 support XPath 1.0 by default), e) structured programming constructs including if-then-else, while, sequence (to enable executing commands in order) and flow (to enable executing commands in parallel), f) a scoping system to allow the encapsulation of logic with local variables, fault handlers and compensation handlers, g) serialized scopes to control concurrent access to variables.

Alternatively, an orchestration engine could execute sequential workflow steps delivered from a composition engine through a uniform API. Different engines would execute only the steps suitable to the protocol/technology they implement (e.g. SIP, WS, REST, WARP, RMI, JBI).

Common functionality includes a) configuration and enforcement of runtime behaviour of a process using standard policies, b) performing server-based runtime message correlation and handling service communication retries, c) endpoint management to make it easy to deploy an orchestration from one environment to another, or deal with a change in topology, d) suspension of a running process using process exception management capabilities to handle bad data which would otherwise have unnecessarily failed a transaction, and e) a management console to monitor server activity and set performance thresholds for notification.

Service composition engine

For dynamic late-binding composition, the composition engine creates a workflow on the fly from a matching skeleton. The process is triggered by a composition execution agent (CEA) that receives a triggering event and requests the next step from the composition engine. Based on what the triggering event was, the composition engine selects the matching skeleton and creates a new session. Then, at each step, it selects a suitable service that matches all the global and local constraints and serves it to the agent to execute. Execution results from the previous steps, together with potential external events, can influence the constraint-based decision process selecting the service for the new step. If several services are suitable to implement a certain step, one of them is chosen. If a component service fails during execution, the next compatible one might be executed instead.
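The step-wise selection with constraint filtering and fallback on failure can be sketched as follows. The names are invented; a real engine would consult much richer context and event information:

```python
# Sketch of constraint-based service selection for one step, with
# fallback to the next compatible service on failure.

def select(candidates, context):
    """Return the candidates satisfying every constraint, in order."""
    return [c for c in candidates
            if all(con(c, context) for con in context["constraints"])]

def execute_step(candidates, context):
    for svc in select(candidates, context):
        try:
            return svc["call"]()
        except RuntimeError:
            continue                      # try the next compatible service
    raise RuntimeError("no compatible service succeeded")

def flaky():
    raise RuntimeError("backend down")

candidates = [
    {"name": "primary",  "region": "eu", "call": flaky},
    {"name": "fallback", "region": "eu", "call": lambda: "ok-fallback"},
    {"name": "other",    "region": "us", "call": lambda: "ok-other"},
]
context = {"constraints": [lambda s, c: s["region"] == "eu"]}
result = execute_step(candidates, context)
```

Here the constraint excludes the `us` service entirely; the `eu` primary fails at execution time, so the engine falls back to the next compatible candidate.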

The engine starts by fetching service descriptions and composition descriptions (skeletons) from the USDL Repository; then it executes the business logic and manages the dataflow transformations and the control flow elements specified in the skeleton step by step. At each step it chooses the relevant services for the skeleton execution by applying constraint resolution on context information. The composition engine uses orchestration engine(s) to deliver the on-the-fly created workflow step by step via a uniform API. It is also responsible for maintaining an up-to-date structured shared state across sessions, and provides execution traces for debugging and statistics.

Critical product attributes

Mashup execution engine

  • Rely on Web standards including standards on mashup description languages.
  • Openness, Extensibility
  • Ability of integration in multiple channels (e.g. social network, portal, widget)
  • Users executing their mashups from their favourite browser.
  • Persistence of the state of the mashup.

Service orchestration engine

  • High scalability
  • High availability
  • High configurability
  • High robustness

Service composition engine

  • Flexible and robust service composition (including event-based semantic, late binding of services, and constraint resolution).
  • Ability to integrate multiple composition execution agents (CEA) orchestrating different protocols via a common API.
  • High scalability and high performance

Existing products

Regarding mashup execution engines, there is a plethora of products, including EzWeb, JackBe, Google IG, Yahoo! Pipes, Netvibes and Open Kapow. Each of them tackles the problem with a different approach.

Examples of orchestration engines are the following. For BPEL: IBM WebSphere Business Integration Server Foundation, Oracle BPEL-PM and CapeClear, with open-source implementations including Apache ODE, OW2 Bonita, ActiveBPEL, Bexee and PXE BPEL. For BPMN 2.0: JBoss jBPM and Alfresco Activiti. The Ericsson Composition Execution Agents cover SIP, WS, REST, WARP, RMI and JBI.

The Ericsson Composition Engine implements a dynamic event- and constraint-based execution engine.

Generic Enablers for Mediation

Providing interoperability solutions is the main functionality of the Mediator. Heterogeneity arises in FI-WARE in the ways data is represented (i.e. the syntax and semantics of the information items that are requested or provided by an application or a service), and in the communication patterns, protocols or public processes needed to request a functionality (executing a composition in a different execution environment, or implementing dynamic run-time changes, might require a process mediation function). Acknowledging the necessity to deal with these heterogeneities, mediation solutions are required in FI-WARE.

The mediation GEs are divided into three main parts, as detailed below: Data Mediation, Protocol Mediation, and Process Mediation. In all cases, the mediation component should act as a broker between service consumers and providers, and it should be as transparent as possible.

Moreover, mediation should satisfy some non-functional requirements, like built-in monitoring and management capabilities, in order to be automatically re-configurable and to track mediation steps.

Data Mediation

Various mechanisms can be utilized to perform Data Transformation between different data models employed by the sender and receiver:

  • Using a library of built-in transformers in case of well-known formats
  • Using templates for more flexible data transformation capabilities
  • Using DSL (Domain Specific Language) or code components in order to fulfil more complex data transformation requirements
  • Using LILO schema (Lifting and Lowering) and ontology mappings
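The first mechanism above, a library of built-in transformers for well-known formats, can be sketched as a registry keyed by (source, target) format. The formats and fields are invented for illustration:

```python
# Minimal sketch of a data-mediation transformer library bridging the
# data models of sender and receiver.
import json, csv, io

def json_to_csv(text):
    rows = json.loads(text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=sorted(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

TRANSFORMERS = {("json", "csv"): json_to_csv}

def mediate(payload, source_fmt, target_fmt):
    """Look up and apply a built-in transformer; pass through if same."""
    if source_fmt == target_fmt:
        return payload
    return TRANSFORMERS[(source_fmt, target_fmt)](payload)

csv_text = mediate('[{"id": 1, "name": "pump"}]', "json", "csv")
```

Templates, DSLs and LILO-style ontology mappings would slot into the same registry as more flexible transformer implementations.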

Data mediation can be used at design time during service composition for data flow generation, including data flow mapping and consistency checking.

For highly dynamic cases (when services are discovered at runtime, also called late binding of services), data mediation provides the glue between heterogeneous systems by:

  • Semantic matching, at runtime (also supported at design time), of requested and provided data types. It is based on semantic annotations/meta-data over service descriptions (SAWSDL, USDL), extracted from ontologies (RDF/S, OWL, WSMO, etc).
  • Providing semantic matching algorithms that can deal, at runtime, with potential syntactic discrepancies in exchanged data types (from parameters and return values of service operations).
  • Increasing, as a result, the agility of systems.
  • Integrating with legacy systems (non-intrusive, annotation based).
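The semantic matching of requested and provided data types can be sketched with a toy subsumption ("is-a") hierarchy standing in for real ontologies (RDF/S, OWL, WSMO). The class names are invented:

```python
# Toy sketch of semantic type matching via subsumption over an
# ontology-like "is-a" hierarchy.

IS_A = {                      # child -> parent
    "MobilePhoneNumber": "PhoneNumber",
    "PhoneNumber": "ContactPoint",
    "EmailAddress": "ContactPoint",
}

def subsumes(general, specific):
    """True if `specific` is the same as, or a descendant of, `general`."""
    while specific is not None:
        if specific == general:
            return True
        specific = IS_A.get(specific)
    return False

# A provider offering a MobilePhoneNumber satisfies a request for any
# PhoneNumber, but not a request for an EmailAddress, even though the
# parameter names on the wire may differ.
```

Annotations over service descriptions (SAWSDL, USDL) supply the type labels; the matcher then reconciles syntactically different but semantically compatible parameters at runtime.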

Service providers describe and operate web services. Semantic annotations can be put on service descriptions and data types. Additionally, service providers can issue pure semantic service descriptions using schemas such as OWL-S, WSMO and its lighter counterparts, WSMO-Lite and MicroWSMO. Service consumers describe their needs in a similar fashion.

Protocol Mediation

  • Support for hybrid service compositions that invoke WSDL-based, REST-based, or other protocol-based external services (for example, binary protocols like Hessian or vertical industry standards like FIX and HL7)
  • Support, at runtime, for mediation of heterogeneous communication protocols used by service providers and consumers (each integrated system may use its own communication channels/means).
  • Provides a combined JBI/ESB environment on which all data will be exchanged and heterogeneous protocols “translated” to the internal protocol of the bus.
  • Provides service virtualization and routing capabilities in order to transparently add protocol mediation strategy to a specific service.
  • Provides a modular OSGi Environment that enables an easy composition of built-in and custom developed capabilities
  • Support for communication protocol extensions such as extending a protocol that allows intra-web browser communication to inter-browser communication.

Process Mediation

  • Design-time service composition task resolution. Composition tasks are resolved at design time with the best-matching sub-process (service composition) according to the task description (for instance, based on its lightweight semantic description), following a modelling-by-reuse paradigm.
  • Deployment time executable service composition generation from a design time service composition model. Support for translation into executable languages, such as BPEL and BPMN 2.0.
  • Semantic matching of service capabilities. Capabilities are high-level business features offered by a Web service that are defined as a valid sequence of calls (e.g. a BPEL business process) to several distinct operations of an individual service interface.
  • Provides routing capabilities through an implementation of Enterprise Integration Patterns (e.g. message splitting, aggregation and iteration) in order to mediate between different processes
  • Provides an execution runtime where composite applications can be deployed in order to provide more complex process mediation capabilities
  • Provides an execution environment where BPMN 2.0-based processes can be executed

Critical product attributes

  • Seamlessly providing and requesting services without having to worry about mediation problems.
  • Design time and runtime mediation support.
  • Process adaptation at design and runtime including dynamic binding, data flow generation and semantic consistency checking.
  • Template based generation of processes by reuse.
  • Adaptive process deployment, supporting different execution engines.

Existing products

The SETHA2 framework from THALES is a software framework that deals with dynamicity and heterogeneity concerns in the SOA context. It is composed of several components/tools that can be deployed independently, depending on the targeted needs of FI-WARE. A major part of SETHA2 is about providing libraries/facilities dedicated to data mediation.

MEDIATOR-TI from TI (a customization of the WSO2 SOA suite) includes some data transformation capabilities provided by libraries based on the open source project Apache Camel. Currently it does not provide semantic annotation management. It also provides a modular execution environment where a composite application runtime and a BPMN 2.0 runtime can be plugged in; moreover, it provides some protocol mediation capabilities and a modular OSGi environment that enables easy composition of built-in and custom-developed capabilities.

SOA4All Design Time Composer provides some design-time, semi-assisted modelling features for data mediation and hybrid REST/WSDL-based service compositions.

PetalsESB is an extensible ESB that supports JBI. It has been successfully deployed in the SemEUsE ANR project and is currently being deployed in the CHOReOS European/FP7 project.

Ericsson Composition Execution Agents (SIP, WS, REST, WARP, RMI, JBI) perform the automatic translation from/to such services and the core composition engine's input/output. Moreover, Ericsson provides the WebCommunication mediator (WARP) to compose browser gadgets, also across different device/browser boundaries.

Sample FI-WARE Scenario for the Mediator

In the following, a short example use case gives an overview of how a Mediator GE could be used in FI-WARE, both in the platform and in Use Case projects.

Basically, the Mediator GE could play the role of centralizing and abstracting access to a heterogeneous set of devices or target applications that expose different capabilities, or the same capabilities through different interfaces and data models.

In this scenario the mediator GE can be configured to implement different functionalities:

  • exposing a uniform interface and data model towards a plethora of heterogeneous devices and sensors
  • routing incoming messages towards the right target device or service, based on the content of the message or other criteria
  • splitting a coarse-grained task into many specific items to be executed by different actors (devices or applications), dispatching the split tasks, and aggregating all results in order to return the whole result to the caller
  • acting as an event broker. A client application interested in events from a particular set of devices or other applications can subscribe to them using a particular service exposed by the mediator, where all relevant event sources are configured.
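The routing functionality above can be sketched as a content-based router (a toy illustration; the class and message shapes are hypothetical, not the actual Mediator GE API):

```python
from typing import Callable

class Mediator:
    """Content-based router: consumers never see the concrete targets."""

    def __init__(self):
        # Each route pairs a predicate over the message with a target handler.
        self._routes: list[tuple[Callable, Callable]] = []

    def add_route(self, predicate, target):
        self._routes.append((predicate, target))

    def dispatch(self, message: dict) -> str:
        # Inspect the message content and forward to the first matching target.
        for predicate, target in self._routes:
            if predicate(message):
                return target(message)
        raise LookupError("no route for message")

mediator = Mediator()
mediator.add_route(lambda m: m["type"] == "temperature",
                   lambda m: f"sensor-service handled {m['value']}")
mediator.add_route(lambda m: m["type"] == "actuate",
                   lambda m: f"actuator-service handled {m['value']}")

print(mediator.dispatch({"type": "temperature", "value": 21}))
```

Registering a new device only adds a route; existing consumers keep sending messages to the same mediator endpoint, which is the transparency the text asks for.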

Generic Enablers for Multi-channel and Multi-device Access

Multi-channel/Multi-device Access System

Target usage

Nowadays, the huge spread of mobile devices (smart phones, tablets, and connected devices of all sorts) and the broad spectrum of social networking platforms have prompted the need to deliver FI applications and services through multiple channels (such as social and content platforms, application marketplaces, and so on), while ensuring the best user experience at any time by allowing access from any kind of device (multi-device) and adapting usability and presentation when necessary. It is also important to manage the user's contextual information in order to support service composition adaptation corresponding to the user's preferences and profile. The more detailed and relevant the information at hand, and the smarter the ability to reach the end user, the greater the chances to accelerate time to market, close a sale, or improve customer satisfaction.

User roles

  • Application Developer will provide channel specific user interfaces, including the corresponding workflow definition.
  • Managers will provide both the content of the knowledge base and access mechanisms to get data from it.

GE description

In order to support the ideas behind this stage, FI applications must be able to give up control over their user interfaces and take advantage of an external multi-channel and multi-device access enabler. Applications must provide device-independent abstract user interfaces by means of a standardized user interface definition authoring language, so that the interface can be properly rendered according to the channel's and device's properties and features, which are publicly available in a shared and standardized knowledge base. Moreover, giving up control over the user interface also implies that the adapter is in charge of the interface workflow execution; it will be able to call back the application backend through service access points and control the selection of rendered views.

Apart from solving rendering aspects, which is mandatory for enabling both multi-channel and multi-device access, multi-channel adaptation also requires dealing with the diversity of APIs and capabilities provided by the different channels. More specifically, each channel requires its own specific workflow, and thus there must be support for describing the application workflow in a generic enough abstract workflow language that can be concretized on demand to the target channel.

Multi-device adaptation within a channel can be tackled by a workflow engine and a number of renderers (at least one per channel) that leverage the device's properties and the delivery context.
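As a rough sketch of this architecture (the renderers, knowledge-base entries and property names are all hypothetical), a channel-specific renderer is selected first and then parameterized with device properties looked up in the knowledge base:

```python
# Toy Device & Channel Knowledge Base: device id -> properties.
DEVICE_KB = {
    "phone-x": {"screen_width": 360, "supports_touch": True},
    "desktop": {"screen_width": 1920, "supports_touch": False},
}

def render_web(ui: dict, props: dict) -> str:
    # Multi-device adaptation within the "web" channel: pick a layout
    # based on the device's screen width.
    layout = "single-column" if props["screen_width"] < 600 else "two-column"
    return f"<web:{layout}>{ui['title']}</web>"

def render_sms(ui: dict, props: dict) -> str:
    # A constrained channel gets a degraded but usable rendering.
    return f"SMS: {ui['title'][:160]}"

RENDERERS = {"web": render_web, "sms": render_sms}  # one renderer per channel

def adapt(ui: dict, channel: str, device_id: str) -> str:
    props = DEVICE_KB[device_id]  # the channel must supply a device id
    return RENDERERS[channel](ui, props)

print(adapt({"title": "Book a truck"}, "web", "phone-x"))
```

The `ui` dict stands in for the abstract, device-independent interface definition the application hands over to the enabler.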

Multi-channel / multi-device Access System


  • Creating a channel/device-specific user interface from a device/channel-independent definition language.
  • Storing and delivering specific data regarding capabilities, features, and constraints of all targeted devices and channels.
  • Using data from device description knowledge bases to adapt the user interface.
  • Handling the channel-specific workflow and performing the backend invocations when necessary, redirecting to the adapted interface.
  • Rendering the specific channel user interface according to the Device & Channel Knowledge Base.
  • Rendering the specific device user interface according to the Device & Channel Knowledge Base and the targeted channel.

Relations to other components

  • Components resulting from chapter “Data/Context Management Services”. Specifically user’s contextual information managed by the Service Delivery Framework such as user’s preferences and profile, user’s location, etc.
  • Service Delivery Framework

Critical product attributes

  • Access to services and applications from a diversity of devices and distribution channels, including social web networks, content platforms, dedicated web portals, iOS, etc.
  • Multi-channel/multi-device access to the FI-WARE platform itself (business framework, repository and registry, and to the application and services themselves from any available delivery context).
  • Channels that can be used in a multi-device fashion must obtain some kind of device id in order to be able to discover the device's properties in the knowledge base.

Existing products

The W3C DIAL initiative aimed at standardizing an authoring language for device independence.

Products such as MyMobileWeb, HAWHAW, Volantis Mobility Server, Mobile Aware’s Mobile Interaction Server, Sevanval’s Multichannel Server, Netbiscuits provide a rendering engine based on their own proprietary authoring language.

Cyntelix provides a social media distribution and aggregation platform that addresses multi-channel deployment.

Monetizing applications and services

Describing services

In order to publish service offerings in the FI-WARE Business Framework, these services have to be described in a uniform way (using Linked USDL). In the following, an example Use Case application domain is presented and used to illustrate the use of Linked USDL to describe domain-specific services. Although Linked USDL already provides vocabularies for describing all generic business-relevant information for a service offering, many Use Cases require additional/supplemental vocabularies to describe their domain-specific properties, which are necessary for domain-specific discovery of services.

Transport and logistics (T&L) is an application domain which is characterized by a high degree of division of tasks and interaction between many business partners such as

  • shipper (also called consignor)
  • consignee (receiver of the cargo)
  • carrier (truck, vessel, plane, rail, …)
  • ports, port terminal operator, airports
  • freight forwarder
  • warehouse owner
  • local authorities (port control, food), customs

Nevertheless, we recognize that in daily practice much of the work for executing the core processes of T&L is still done manually:

  • Marketing, sales and alignment processes for matching users' needs and transport service providers' offers
  • Planning processes related to resource utilization, capacities, time schedules
  • Execution processes covering movement and handling of goods and documents as well as monitoring
  • Completion processes, finalization of the contract, payment and claims

The main challenges of the business partners in T&L are rooted in time pressure and a lack of accurate, timely information and overall transparency. People often have to make fast decisions, especially in the case of a deviation in the process, a delay, or a late cancellation of transport orders. However, due to the lack of information, these decisions cannot be optimal. Furthermore, suitable service offerings are not easy to find. Most often, companies rely on their network and on the offerings of business partners that proved to be reliable in the past. Phone calls or e-mails are the predominant means of communication. Reservation, booking, and confirmation are not easy tasks either, since they are often not supported by an online system. Because of the many different and incompatible software systems, the people in T&L suffer from a relatively high rate of errors in information transfer.

T&L is a service-oriented business. Describing services in a uniform format could help to improve many of the processes through automation and IT support. In the following chapter we outline how Linked USDL can be supplemented by additional domain-specific vocabularies suitable for T&L, allowing service descriptions of arbitrary detail that can be used to address many of the challenges mentioned above. Increased transparency through uniform service descriptions will enable greater flexibility in networking and pooling of resources. An open marketplace for T&L services will provide knowledge that can be used for better predictability of market demand (utilizing statistics, forecasts, and a marketplace portal). Deviations could be handled better, because required workarounds and replacements could easily be found among the real-time offerings on the market, with immediate treatment of the information. A marketplace could also facilitate collaboration based on the service descriptions and the related services, business models and processes.

All stakeholders operate in a network of services (a service ecosystem), where services are combined/composed into a larger service network (see the picture of a sample service network from the FINEST fishery T&L use case).

All stakeholders offer services and information according to their business, which need to be described in an overall harmonized way: the output of one service should seamlessly serve as the input of another service.

Complex multi-party, multiple services business process

(Courtesy FINEST Project)

Using Linked USDL for T&L service descriptions

In the following we will analyse specific requirements of the T&L domain with respect to the description of services, which will be used in collaborative businesses.

Business partners

As we outlined above, we find several stakeholders in the T&L business scenarios. All of these stakeholder roles can be described with the Linked USDL core vocabulary or supplemental domain-specific SKOS taxonomies. For example, the service provider role is already covered by the usdl:Provider type. Business partners in general are described using gr:BusinessEntity (from the GoodRelations vocabulary). This type contains a number of attributes that are quite common in business, such as names, descriptions, the legal name, identification codes (NAICS, ISICv4, DUNS), addresses, communication information, and the Point of Sales (POS) location. It is also possible to define new domain-specific roles in the T&L domain.

Domain-specific T&L non-functional properties

The most interesting part of the description deals with the domain-specific non-functional properties, for which additional logistics vocabularies will be necessary. However, we do not need to reinvent the wheel: many suitable schemas and datasets already exist, and relying on them is a best practice in the Linked Open Data community. It is important to understand that Linked USDL, as well as the underlying GoodRelations vocabulary, already provides a mechanism and an extension point where domain-specific vocabularies come into place.


There are basically two options to extend the core vocabularies for service properties:

  1. Define non-functional properties using GoodRelations: gr:qualitativeProductOrServiceProperty and gr:quantitativeProductOrServiceProperty (with gr:QualitativeValue and gr:QuantitativeValue, respectively)
  2. Use Linked USDL SLA variables, in case the property is a matter of a concrete service level agreement.

Some links to reusable resources regarding T&L:

Shipment types

The shipment type must be a domain-specific classification scheme expressed with the SKOS vocabulary or a common standard encoding:

  • Rail, Road, Sea, Air
  • Vehicle: Truck, Car, Motorcycle, …
Mode of transport

We introduce a new property as a subproperty of qualitativeProductOrServiceProperty from the GoodRelations vocabulary:

   logistics:modeOfTransportation a owl:ObjectProperty ;
      rdfs:subPropertyOf gr:qualitativeProductOrServiceProperty ;
      rdfs:label "Mode of Transportation" ;
      rdfs:comment "Defines which mode of transportation is used by the service" ;
      rdfs:domain gr:ProductOrService ;
      rdfs:range gr:QualitativeValue .

The different transportation modes are defined as SKOS concepts within the logistics:TransportationMode concept scheme:

   logistics:TransportationMode a skos:ConceptScheme;
      rdfs:label "Transportation Mode Taxonomy";
      rdfs:comment "The taxonomy scheme of transportation modes" .

Besides making our property a subproperty of gr:qualitativeProductOrServiceProperty and typing its values as gr:QualitativeValue, we use the SKOS vocabulary to define a taxonomy of transportation modes.

   logistics:SeaTransport a skos:Concept, gr:QualitativeValue ;
      skos:topConceptOf logistics:TransportationMode ;
      rdfs:label "Sea Transport"@en ;
      rdfs:comment "Transport over Sea" .
   logistics:AirTransport a skos:Concept, gr:QualitativeValue ;
      skos:topConceptOf logistics:TransportationMode ;
      rdfs:label "Air Transport"@en ;
      rdfs:comment "Transport over Air" .
   logistics:LandTransport a skos:Concept, gr:QualitativeValue ;
      skos:topConceptOf logistics:TransportationMode ;
      rdfs:label "Land Transport"@en ;
      rdfs:comment "Transport over Land" .
   logistics:RoadTransport a skos:Concept, gr:QualitativeValue ;
      skos:broader logistics:LandTransport ;
      skos:inScheme logistics:TransportationMode ;
      rdfs:label "Road Transport"@en ;
      rdfs:comment "Transport on Road" .
   logistics:RailTransport a skos:Concept, gr:QualitativeValue ;
      skos:broader logistics:LandTransport ;
      skos:inScheme logistics:TransportationMode ;
      rdfs:label "Rail Transport"@en ;
      rdfs:comment "Transport on Rail" .

Similarly, the means of transportation (the kind of vehicle) can be defined by a vehicle taxonomy:

   logistics:meansOfTransportation a owl:ObjectProperty ;
      rdfs:subPropertyOf gr:qualitativeProductOrServiceProperty ;
      rdfs:label "Means of Transportation" ;
      rdfs:comment "Defines which vehicles are used for the transport" ;
      dcterms:subject logistics:Vehicle ;
      rdfs:domain gr:ProductOrService ;
      rdfs:range gr:QualitativeValue .

Similar taxonomies could be used for vehicle and cargo types, the nature of the cargo, container sizes, routes, etc. The following standards could be a starting point for these definitions:

Publishing the service description

Once the service description is available, it must be accessible to the different components of the Business Framework. FI-WARE provides the concept of a Repository for making service descriptions accessible to the environment via the interfaces in the Repository Open Specification. In a real Web-scale installation there will be many repositories, hosted either by the providers themselves or by independent platform providers. Also, due to its linked data nature, a service description can be distributed over multiple repositories.

FI-WARE supports doing business with services through the Marketplace Generic Enabler, which provides typical functionality such as publishing service offers and demands, matching offerings with demand, discovery, comparison, price simulation, and user ratings and reviews. A service is thus offered through a Marketplace after its description has been made accessible in the Repository.

Sustainable Services & Apps Ecosystem

FI-WARE provides the concept of a marketplace to foster a sustainable ecosystem around the service business. Various stakeholders, such as service providers, hosters, value-added resellers, brokers, customers and end users, come together via the marketplace in order to do business. The Marketplace foresees functionality like offering and demand matching, discovery, rating & review, comparison, market analytics, and more. All this functionality can be accessed through interfaces that are defined by Open Specifications in order to ensure compatibility and a high degree of integration of components from different partners. Marketplaces can be vertical (domain-specific) or horizontal (general) and can be instantiated multiple times. Through the open specifications it is possible for offerings from one marketplace (e.g. Transport and Logistics services) to appear and be handled properly in another marketplace (e.g. Event Management).

The following picture shows how the FI-WARE Marketplace could be used in the domain-specific architecture of the FINEST Transport and Logistics use case. In this case the user interface to the Marketplace is realized through the Transport&Logistics Portal (e.g. by using WireCloud mashup widgets) and provides a role-specific work environment, instead of a general marketplace application.

Integration of Apps GE into the FINEST Transport&Logistics Architecture

Monetizing data

Monetizing data can be done with the FI-WARE Business Framework, provided that service enablement has been carried out. Service enablement means that functionality such as access to resources (sensor and actuator networks, computing and network infrastructure, or data) is made available through a service interface.

FI-WARE provides several Data Generic Enablers in its Data chapter. These enablers could form a standardized service interface for accessing data resources over the Internet. In order to make business with data, it is possible, and also recommended, to describe these services (especially their business aspects) in Linked USDL, make the descriptions accessible via Repositories, and publish concrete data service offerings, with their service levels and prices, on the Marketplace. In this way, data services are no different from any other service.


Revenue Sharing in collaborative business models

The FI-WARE Business Framework is not only for monetizing single services from one provider. It is also intended to enable collaborative businesses (value chains), where an overall customer service solution is composed of multiple smaller services from different providers, connected according to a joint business model to which all partners contribute and from which all benefit. From the business model, a revenue sharing model is derived, which can be used by the Revenue Sharing Enabler to automatically distribute revenues at runtime.

The T&L scenario described is quite complex and involves a high number of physical goods and processes. Some of these processes can be further supported by IT applications and services. Let’s take as an example the management of cargo fleets. Companies operating a cargo fleet may use an IT application that allows managing trucks, keeping track of their location, communicating with drivers, etc. Besides, the application can be integrated with other back office applications dealing with supplies, payroll management, etc. Moreover, it could also be integrated with the IT systems of other stakeholders in the value chain to enable better coordination.

This truck fleet management application, as a composite service, would in turn be made up of different atomic IT services, most likely provided by different providers. For instance, it will have to use location information from a context provider and telecom services from a Telco company. Moreover, additional IT services like data storage, connectivity middleware, etc. may also be offered as a service in the FI-WARE marketplace and used accordingly.

Each of these IT services needs to have a price plan assigned, together with its technical description. In addition, revenue sharing models have to be agreed between the different stakeholders, in particular between the marketplace and store owners and the providers offering their services in FI-WARE.

Once the fleet management application is contracted and deployed on FI-WARE’s Cloud Hosting infrastructure, its usage must be monitored and accounted for. Usage information would then be used, together with the corresponding price models, to rate the service and charge its customers accordingly. Finally, the revenues generated by the truck fleet management application can be distributed, according to revenue sharing models, amongst the different stakeholders contributing to its delivery. For instance, the context service provider would receive a certain amount of money for each location call served.
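A simple sketch of such a distribution (the stakeholder names, shares and per-call fee are invented for illustration; the actual Revenue Sharing Enabler defines its own models):

```python
# Hypothetical sketch: distribute the revenue of a composite service among
# its contributors per an agreed revenue sharing model, plus a per-call fee
# for the context provider, as in the fleet-management example.
def share_revenue(total, model, location_calls=0, fee_per_call=0.0):
    """model maps stakeholder -> fraction of the revenue left after fees."""
    assert abs(sum(model.values()) - 1.0) < 1e-9, "shares must sum to 1"
    context_fee = location_calls * fee_per_call   # pay-per-use component
    remaining = total - context_fee               # split the rest by shares
    payout = {who: round(remaining * frac, 2) for who, frac in model.items()}
    payout["context-provider"] = payout.get("context-provider", 0.0) + context_fee
    return payout

result = share_revenue(
    1000.0,
    {"app-provider": 0.6, "store-owner": 0.3, "telco": 0.1},
    location_calls=200, fee_per_call=0.05,
)
print(result)
```

Here the context provider earns 200 × 0.05 = 10 for the location calls served, and the remaining 990 is split 60/30/10 among the other stakeholders.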

Supporting service compositions and crowd sourcing

In this scenario, the mediator can play the roles of a task dispatcher (using the splitter/aggregator pattern) and of an event broker.

1. As a task dispatcher, the mediator receives a coarse-grained task from the task manager and, based on some criteria related to the content of the task, splits it into specific items and dispatches them to the right targets (humans or applications).

2. As an event broker, the mediator exposes a service for subscribing to events from devices or applications and allows various event sources to be configured. In this way it collects events from devices, optionally filters them, and sends them to the current listeners.
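The task-dispatcher role can be sketched as follows (a toy splitter/aggregator; the task structure and function names are hypothetical):

```python
# Hypothetical sketch of the splitter/aggregator pattern: a coarse-grained
# task is split into items, each item is dispatched to a worker (human or
# application), and the results are aggregated into one reply for the caller.
def split(task: dict) -> list:
    return [{"parent": task["id"], "item": i} for i in task["items"]]

def dispatch(item: dict) -> str:
    # A real mediator would route each item to a different actor; this stub
    # just marks the item as handled.
    return f"done:{item['item']}"

def aggregate(results: list) -> dict:
    return {"status": "complete", "results": results}

task = {"id": "T1", "items": ["inspect", "load", "ship"]}
print(aggregate([dispatch(i) for i in split(task)]))
```

The event-broker role is the complementary flow: instead of pushing split work out, the mediator fans collected events in to whichever listeners have subscribed.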

Mediator in crowd sourcing scenario

FI-WARE strives to exploit the composable nature of application and services technologies in order to support cross-selling and achieve the derived network scaling effects in multiple ways. The platform enables composition either from the front-end perspective (application mashups) or from the back-end perspective (composite services). Specifically, the Application Mashup GE targets composition from the front-end perspective and is expected to foster the creation and execution of value-added web applications not only by application providers but also by intermediaries and end users acting as composers, a.k.a. prosumers. By prosumer we denote a consumer-side end user who cannot find an application that fits his/her needs and therefore modifies/creates (and possibly shares) an application mashup in an ad-hoc manner for his/her own consumption. As the capabilities and skills of the target users are expected to be very diverse, all kinds of usability aspects, conceptual simplification, recommendation and guidance, etc. are taken into consideration.

Web application mashups integrate heterogeneous data, application logic, and UI components (widgets) sourced from the Web to create new, coherent and value-adding composite applications. They are targeted at leveraging the "long tail" of the Web of Services (e.g. the so-called Web APIs, which have proliferated in recent years and doubled in number during 2012; see programmableweb.com) by exploiting rapid development, the Do-It-Yourself (DIY) metaphor, and shareability. They typically serve a specific situational (i.e. immediate, short-lived, customized, specific) need, frequently with high potential for reuse. It is this "situational" character that precludes them from being offered as 'off-the-shelf' functionality by solution providers. And it is their high potential for reuse and the fact that they target end users (as prosumers) that put web application mashups and their constituent mashable application components (MACs) in a unique position to enable the crowd-sourcing of applications.

Nothing precludes Web application mashups from being developed manually using conventional web programming technologies, but doing so fails to take full advantage of the approach. Application mashup tools and platforms, such as the one specified by FIWARE's Application Mashup GE, aim at development paradigms that do not require programming skills and hence target end users (be they business staff, customers or citizens). They also help to leverage innovation through experimentation and rapid prototyping by allowing their users (a) to discover the best-suited mashable components (widgets, operators and prefab mashup-lets) for their devised mashup from a vast, ever-growing distributed catalogue, (b) to visually mash them up to compose the application, and (c) to share them with other users.

To illustrate how the Application Mashup Generic Enabler can be used in real use cases, we have borrowed the following example scenario from the FInest Use Case Project (http://www.finest-ppp.eu/). The scenario is part of its Fish Transport from Ålesund to Europe use case:

"A fish producer needs to ship frozen/dried fish from Norway to a customer overseas. The scenario covers the feedering phase, i.e. the shipping from Ålesund to Northern Europe. The fish cargo is first delivered at the Port of Ålesund (ÅRH) and stored and stuffed in container at the terminal (Tyrholm & Farstad: TF). The shipping line NCL covers the North Sea voyage (feedering) from Ålesund to Hamburg/Rotterdam, and further shipped overseas by a deep-sea container shipping line (e.g. APL). The process involves customs and food health declarations. The transport set-up is mostly fixed."

As is: The Port updates its website with information on the port's services, capacity, resources, and weather (in practice, port call info is updated systematically). This serves as an information source for customers (ship agents, terminal operators) and all other stakeholders.

Challenges: A lot of manual information registration and much duplicated work.

Looking ahead, the port envisions the following improvements:

  • A marketing portal, like a resource hub accessible from the website, enabling online management of bookings, resources and services as well as communication and coordination with third party service provider systems.
  • Automatic update of web pages (“ship calling”, “at port”, “departure”, etc.) based on information from SafeSeaNet and actual data from AIS.
  • Online registration of booking directly by the ship / ship agent.

In order to realize these improvements, FInest demands from FI-WARE the following EPIC, which will be covered by the Application Mashup GE:

"FInest.Epic.IoS.WidgetPlatformInfrastructure: A visual portal website is needed where each user can add, remove and use widgets. Therefore, also a widget repository is needed where a user can select widgets from. An infrastructure should be provided to deploy new widgets to the portal. It should be easy to use by an end-user."

What follows is a description of how the Application Mashup GE can be used to address the envisioned improvements:

  • The functionality and information sources are split in a set of widgets: one for each resource that would be made accessible from the original website: management of bookings and registrations, management of resources, management of services. Widgets from third party service provider capable of communicate and coordinate with their systems, event-driven widgets connected to SafeSeaNet and actual data from AIS, e.g. "ship calling", "at port", "departure", etc. are also added to the catalogue of available widgets
  • These widgets are shared and offered through a repository/store/marketplace from which the different stakeholders and customers involved (ship agents, terminal operators, etc.) can search, select and retrieve the offerings of their interest. Different users demand different configurations for their “information/operation cockpit/dashboard”, and the Application Mashup GE allows them to customize it by selecting and placing widgets in the dashboard at their convenience. This contrasts with a generic website intended to serve as a common information source for all users regardless of their role, which commonly leads to a great deal of manual work and duplication when searching for and linking information from separate web pages/apps to perform a task.
  • Each customer and stakeholder involved in this scenario, regardless of their level of technical or programming skills, can leverage the application mashup editor to visually build a customized cockpit/dashboard with the most valuable data and operations for their work by adding, removing and using available widgets and mashup-lets (prefab mashups that can be customized by adding widgets to and removing widgets from them).
  • The Application Mashup GE also provides a mechanism to visually compose a full-fledged web application starting from widgets that can interact with each other via events and data sharing. The visual mechanism used for this purpose is called wiring. It also supports the use of operators (filters, aggregators, mediators, data bounds, etc.) and a piping mechanism that allows them to be visually connected to each other and to the target widget. Moreover, users can even share the resulting application mashup for future use by other customers or stakeholders (who can further customize it).
  • A widget platform (or application mashup container) will serve as the envisioned visual portal website where these customers and stakeholders can easily deploy and use the widgets that make up the application mashup (i.e. the customized cockpit or information/operations dashboard that best fits their interests).
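The wiring and piping idea described above can be illustrated with a small sketch: widgets publish events, operators transform them along the pipe, and target widgets consume the result. Note that the Application Mashup GE exposes this visually rather than through code, and all class, event and widget names below are purely hypothetical, not part of any actual GE API.

```python
# Minimal sketch of the wiring concept (hypothetical names throughout):
# widget outputs are wired to widget inputs, optionally through an operator
# that filters or aggregates the event payload on its way to the target.

from typing import Callable, Dict, List, Optional

class WiringBus:
    """Connects widget outputs to widget inputs, optionally via operators."""

    def __init__(self) -> None:
        self._wires: Dict[str, List[Callable[[dict], None]]] = {}

    def wire(self, event: str, sink: Callable[[dict], None],
             operator: Optional[Callable[[dict], dict]] = None) -> None:
        # An operator transforms the payload before it reaches the sink widget.
        handler = (lambda data: sink(operator(data))) if operator else sink
        self._wires.setdefault(event, []).append(handler)

    def emit(self, event: str, data: dict) -> None:
        # Deliver the event to every widget wired to it.
        for handler in self._wires.get(event, []):
            handler(data)

# Example: an AIS-connected widget emits "ship_calling" events; an operator
# filters the payload down to the fields the dashboard widget displays.
received = []
bus = WiringBus()
bus.wire("ship_calling",
         sink=received.append,
         operator=lambda d: {"ship": d["ship"], "status": d["status"]})
bus.emit("ship_calling", {"ship": "MV Nordfjord", "status": "at port",
                          "raw_nmea": "..."})
print(received)  # [{'ship': 'MV Nordfjord', 'status': 'at port'}]
```

The point of the sketch is the separation of concerns the GE offers visually: the emitting widget knows nothing about its consumers, and the operator (filter) is an independent, reusable piece placed on the pipe.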

Question Marks

Relationship to other generic chapters

The six generic domains Security, Internet-of-Things service enablement, Interface to networks and devices, Data/context management, Cloud hosting, and Applications/Services ecosystem and delivery framework should also offer their functionality through the business framework and the composition & mashup platform. Indeed, this question was discussed during various meetings. Offering the functionality of many GEs as a service, described in USDL together with the business elements and models so that it can be sold on the marketplace/store, seems a desirable option. However, this cannot be the task of the Apps Chapter alone. The Apps Chapter will support enabling all kinds of GEs in FI-WARE for the business framework, but it is the task of WP2 to foster this kind of collaboration and coherence between the different chapters. In the sense of “eat your own dog food”, this would be an interesting proposition for FI-WARE. Nevertheless, we should focus on the Use Case projects, which are the real users of the FI-WARE platform, and exploit internal synergies as much as possible without neglecting them.

Security Aspects

This section identifies potential security issues raised by the proposed high-level architecture. The first analysis has shown that the currently available USDL security extension needs conceptual rework. The usefulness of the security enablers to applications and services ecosystems needs further analysis. The architectures, protocols, and description languages used to realize composition and mashup GEs have to be identified and collected to enable in-depth security analysis.

Service registry and repository

The following topics require further analysis:

  • Management of identities and authorization for the publication/management of service descriptions in the repository

Only the owner of a service should be able to define, modify and delete its service description in the registry.

  • Access control / authentication for discovery and service search (who can access a service description)

Private or corporate services should not be visible to every user.

Users should have a guarantee of the authenticity of published services; for example, an SAP service must be certified, and during the search process only certified services should appear to the user if so required.

  • Protection against replays

Replaying discovery requests/responses should be prevented to protect against phishing attacks.

  • Protection against misuse and coalition attacks on the user feedback system

If the reputation of a service depends on feedback from unauthenticated users, exaggerated positive or negative feedback becomes possible. Coalition attacks that promote or discredit a service must be prevented.

  • Message Confidentiality between registries, servers, users


Store

The following topics require further analysis:

  • Virus and malware scanning for the applications deployed in the “store”

Services that are exposed to users should be scanned for viruses, malware and adware to protect users during consumption.

  • Service signature to authenticate services
  • Revocation list maintenance

If a service behaved maliciously and was removed from the marketplace, clones of it should not be re-published.

  • Authentication for the services

Revenue Settlement and Sharing System

  • Confidentiality and integrity during the payment process
  • Strong authentication to verify the payment origin and destination
  • Privacy protection of sensitive payment information (credit cards, traces, logs, etc.)
  • Accountability to keep a trace of payments
  • Secure payment (virtual money?)

Composition and Mashup

  • Cross domain authentication and access control (federation)

If services use different identity providers and role attributions, a federation mechanism should be put in place to harmonize the translation between the domains and preserve the effects of the security policies.

  • Avoid conflicts when composing security policies related to services

A composite service composed of two services, one encrypting messages and another passing them in the clear, will create a conflict.

  • Keep the coherence of composed security policies

Avoid redundancy when enforcing security functionality, for example double authentication.
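The kind of conflict check implied above can be sketched with a toy policy model. The `ServicePolicy` fields and the example service names are purely illustrative assumptions, not a FI-WARE API; the sketch simply flags a composition in which one service encrypts messages while the next forwards them in the clear:

```python
# Hypothetical policy model: each service in a composition chain declares
# whether it encrypts its output and whether it requires encrypted input.

from dataclasses import dataclass
from typing import List

@dataclass
class ServicePolicy:
    name: str
    output_encrypted: bool        # does this service encrypt what it forwards?
    requires_encrypted_input: bool

def find_conflicts(chain: List[ServicePolicy]) -> List[str]:
    """Describe each policy conflict between adjacent services in the chain."""
    conflicts = []
    for upstream, downstream in zip(chain, chain[1:]):
        if upstream.output_encrypted and not downstream.output_encrypted:
            conflicts.append(f"{downstream.name} forwards in the clear what "
                             f"{upstream.name} encrypted")
        if downstream.requires_encrypted_input and not upstream.output_encrypted:
            conflicts.append(f"{downstream.name} requires encrypted input but "
                             f"{upstream.name} sends plaintext")
    return conflicts

# Example composition: a booking service that encrypts, followed by a
# notification service that does not.
chain = [ServicePolicy("booking", output_encrypted=True,
                       requires_encrypted_input=False),
         ServicePolicy("notifier", output_encrypted=False,
                       requires_encrypted_input=False)]
print(find_conflicts(chain))
# ['notifier forwards in the clear what booking encrypted']
```

A real policy composition check would operate on the declared security policies of the composed services (e.g. expressed in a policy language) rather than on booleans, but the pairwise-consistency idea is the same.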

Data Mediation

  • Data privacy: the mediator should not have access to private data

Domain specific extensions for aggregation

In order to augment the orchestration engine towards applicability for IoT-aware services there is a need to also provide extensions to the composition language used in the orchestration engine.

There are idiosyncrasies of IoT services that impose significant differences between current business processes and future, IoT-aware ones. These include, among others:

  • Inherently distributed execution: The automated and semi-automated execution of a modelled business process is one of the key benefits of process modelling. In contrast to having a central process engine, as in a WoS process, the execution of the process steps in an IoT-enabled process is usually distributed over the devices. The orchestration of these distributed execution activities must be possible with an IoT-aware process modelling language.
  • Distributed data: When business processes are realized in the standard enterprise world, a central data store, e.g. a database server, is normally the only data storage. In the IoT, the data can be distributed over the resources of several devices, potentially even eliminating central storage. A modelling language must allow this distribution of data to be arranged.
  • Scalability: In standard business processes there is generally only one central device and resource repository, but in IoT processes multiple devices and resources (e.g. the sensors and actuators of a fridge) can appear. The complexity of the modelled process should be independent of the number of devices, resources, and services. Additionally, the growing number of devices should not have an impact on the performance of the process execution. Therefore, the modelling language must provide concepts to describe the expected performance even for many devices.

There are even more IoT-specific aspects to business process modelling, such as availability/mobility, fault tolerance, or the uncertainty of information that is often an issue with data gathered from IoT sensors. Within FI-WARE we will provide IoT notation extensions in a similar way to the USDL extensions.

Relationship to I2ND chapter

Interface to Network and Devices: A number of composition and mashup GEs are expected to consume Telco services. The relationship to the Interface to Network and Devices chapter requires in-depth analysis.

Terms and Definitions

This section comprises a summary of terms and definitions introduced in the previous sections. It is intended to establish a vocabulary that will help to carry out discussions internally and with third parties (e.g., Use Case projects in the EU FP7 Future Internet PPP).

  • Aggregator (Role): A Role that supports domain specialists and third-parties in aggregating services and apps for new and unforeseen opportunities and needs. It does so by providing the dedicated tooling for aggregating services at different levels: UI, service operation, business process or business object levels.
  • Application: Applications in FIWARE are composite services that have an IT-supported interaction interface (user interface). In most cases consumers do not buy the application; instead, they buy the right to use the application (a user license).
  • Broker (Role): The business network’s central point of service access, being used to expose services from providers that are delivered through the Broker’s service delivery functionality. The broker is the central instance for enabling monetization.
  • Business Element: Core element of a business model, such as pricing models, revenue sharing models, promotions, SLAs, etc.
  • Business Framework: Set of concepts and assets responsible for supporting the implementation of innovative business models in a flexible way.
  • Business Model: Strategy and approach that defines how a particular service/application is supposed to generate revenue and profit. Therefore, a Business Model can be implemented as a set of business elements which can be combined and customized in a flexible way and in accordance to business and market requirements and other characteristics.
  • Business Process: Set of related and structured activities producing a specific service or product, thereby achieving one or more business objectives. An operational business process clearly defines the roles and tasks of all involved parties inside an organization to achieve one specific goal.
  • Business Role: Set of responsibilities and tasks that can be assigned to concrete business role owners, such as a human being or a software component.
  • Channel: Resources through which services are accessed by end users. Examples for well-known channels are Web sites/portals, web-based brokers (like iTunes, eBay and Amazon), social networks (like Facebook, LinkedIn and MySpace), mobile channels (Android, iOS) and work centers. The access mode to these channels is governed by technical channels like the Web, mobile devices and voice response, where each of these channels requires its own specific workflow.
  • Channel Maker (Role): Supports parties in creating outlets (the Channels) through which services are consumed, i.e. Web sites, social networks or mobile platforms. The Channel Maker interacts with the Broker for discovery of services during the process of creating or updating channel specifications as well as for storing channel specifications and channeled service constraints in the Broker.
  • Composite Service (composition): Executable composition of business back-end MACs (see MAC definition later in this list). Common composite services are either orchestrated or choreographed. Orchestrated compositions are defined by a centralized control flow managed by a unique process that orchestrates all the interactions (according to the control flow) between the external services participating in the composition. Choreographed compositions do not have a centralized process; instead, the services participating in the composition autonomously coordinate with each other according to some specified coordination rules. Back-end compositions are executed in dedicated process execution engines. Target users of tools for creating Composite Services are technical users with algorithmic and process management skills.
  • Consumer (Role): Actor who searches for and consumes particular business functionality exposed on the Web as a service/application that satisfies her own needs.
  • Desktop Environment: Multi-channel client platform enabling users to access and use their applications and services.
  • Front-end/Back-end Composition: Front-end compositions define a front-end application as an aggregation of visual mashable application pieces (named as widgets, gadgets, portlets, etc.) and back-end services. Front-end compositions interact with end-users, in the sense that front-end compositions consume data provided by the end-users and provide data to them. Thus the front-end composition (or mashup) will have a direct influence on the application look and feel; every component will add a new user interaction feature. Back-end compositions define a back-end business service (also known as process) as an aggregation of backend services as defined for service composition term, the end-user being oblivious to the composition process. While back-end components represent atomization of business logic and information processing, front-end components represent atomization of information presentation and user interaction.
  • Gateway (Role): The Gateway role enables linking between separate systems and services, allowing them to exchange information in a controlled way despite different technologies and authoritative realms. A Gateway provides interoperability solutions for other applications, including data mapping as well as run-time data store-forward and message translation. Gateway services are advertised through the Broker, allowing providers and aggregators to search for candidate gateway services for interface adaptation to particular message standards. The Mediation is the central generic enabler. Other important functionalities are eventing, dispatching, security, connectors and integration adaptors, configuration, and change propagation.
  • Hoster (Role): Allows the various infrastructure services in cloud environments to be leveraged as part of provisioning an application in a business network. A service can be deployed onto a specific cloud using the Hoster’s interface. This enables service providers to re-host services and applications from their on-premise environments to cloud-based, on-demand environments to attract new users at much lower cost.
  • Marketplace: Part of the business framework providing means for service providers, to publish their service offerings, and means for service consumers, to compare and select a specific service implementation. A marketplace can offer services from different stores and thus different service providers. The actual buying of a specific service is handled by the related service store.
  • Mashup: Executable composition of front-end MACs. There are several kinds of mashups, depending on the technique of composition (spatial rearrangement, wiring, piping, etc.) and the MACs used. They are called application mashups when applications are composed to build new applications and services/data mash-ups if services are composed to generate new services. While composite service is a common term in backend services implementing business processes, the term ‘mashup’ is widely adopted when referring to Web resources (data, services and applications). Front-end compositions heavily depend on the available device environment (including the chosen presentation channels). Target users of mashup platforms are typically users without technical or programming expertise.
  • Mashable Application Component (MAC): Functional entity able to be consumed, executed or combined. Usually this applies to components that offer not only their main behaviour but also the necessary functionality to allow further compositions with other components. It is envisioned that MACs will offer access, through applications and/or services, to any available FIWARE resource or functionality, including gadgets, services, data sources, content, and things. Alternatively, it can be denoted as ‘service component’ or ‘application component’.
  • Monetization: Process or activity to provide a product (in this context: a service) in exchange for money. The Provider publishes certain functionality and makes it available through the Broker. The service access by the Consumer is being accounted, according to the underlying business model, and the resulting revenue is shared across the involved service providers.
  • Premise (Role): On-Premise operators provide in-house or on-site solutions, which are used within a company (such as ERP) or are offered to business partners under specific terms and conditions. These systems and services are to be regarded as external and legacy to the FIWARE platform, because they do not conform to the architecture and API specifications of FIWARE. They will only be accessible to FIWARE services and applications through the Gateway.
  • Prosumer: A user role able to produce, share and consume their own products and modify/adapt products made by others.
  • Provider (Role): Actor who publishes and offers (provides) certain business functionality on the Web through a service/application endpoint. This role also takes care of maintaining this business functionality.
  • Registry and Repository: Generic enablers that are able to store models and configuration information along with all the necessary meta-information to enable searching, social search, recommendation and browsing, so that end users as well as services are able to easily find what they need.
  • Revenue Settlement: Process of transferring the actual charges for specific service consumption from the consumer to the service provider.
  • Revenue Sharing: Process of splitting the charges of particular service consumption between the parties providing the specific service (composition) according to a specified revenue sharing model.
  • Service: We use the term service in a very general sense. A service is a means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs and risks. Services could be supported by IT. In this case we say that the interaction with the service provider is through a technical interface (for instance a mobile app user interface or a Web service). Applications could be seen as such IT supported Services that often are also composite services.
  • Service Composition: In the SOA domain, a service composition is an added-value service created by the aggregation of existing third-party services according to some predefined work and data flow. The aggregated services provide the specialized business functionality into which the overall functionality of the service composition has been broken down.
  • Service Delivery Framework: Service Delivery Framework (or Service Delivery Platform (SDP)) refers to a set of components that provide service delivery functionality (such as service creation, session control & protocols) for a type of service. In the context of FIWARE, it is defined as a set of functional building blocks and tools for (1) managing the lifecycle of software services, (2) creating new services through service compositions and mashups, (3) providing means for publishing services through different channels on different platforms, (4) offering marketplaces and stores for monetizing available services and (5) sharing the service revenues between the involved service providers.
  • Service Level Agreement (SLA): A service level agreement is a legally binding and formally defined service contract, between a service provider and a service consumer, specifying the contracted qualitative aspects of a specific service (e.g. performance, security, privacy, availability or redundancy). In other words, SLAs not only specify that the provider will just deliver some service, but that this service will also be delivered on time, at a given price, and with money back if the pledge is broken.
  • Store: Part of the Business Framework, offering a set of services that are published to a selected set of marketplaces. The store thereby holds the service portfolio of a specific service provider. In case a specific service is purchased on a service marketplace, the service store handles the actual buying of a specific service (as a financial business transaction).
  • Unified Service Description Language (USDL): USDL is a platform-neutral language for describing services, covering a variety of service types, such as purely human services, transactional services, informational services, software components, digital media, platform services and infrastructure services. The core set of language modules offers the specification of functional and technical service properties, legal and financial aspects, service levels, interaction information and corresponding participants. USDL is offering extension points for the derivation of domain-specific service description languages by extending or changing the available language modules.
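The Revenue Settlement and Revenue Sharing terms above can be made concrete with a small calculation. The percentage-based sharing model, the party names and the amounts below are purely illustrative assumptions; a real revenue sharing model in the Business Framework may be considerably richer (tiers, promotions, minimum fees, etc.):

```python
# Illustrative revenue sharing: the charge settled for one service consumption
# is split among the parties behind a composite service according to a
# (hypothetical) revenue sharing model expressed as percentages.

def share_revenue(settled_amount: float, model: dict) -> dict:
    """Split a settled charge according to a percentage-based sharing model."""
    # Sanity check: the shares must cover exactly 100% of the settled amount.
    assert abs(sum(model.values()) - 100.0) < 1e-9, "shares must total 100%"
    return {party: round(settled_amount * pct / 100.0, 2)
            for party, pct in model.items()}

# E.g. a composite service where the broker keeps 30% and two providers
# split the remainder.
sharing_model = {"broker": 30.0, "data_provider": 45.0, "widget_provider": 25.0}
print(share_revenue(10.0, sharing_model))
# {'broker': 3.0, 'data_provider': 4.5, 'widget_provider': 2.5}
```

Settlement (transferring the charge from the consumer) happens first; the sharing step above then distributes that settled amount across the providers of the composition.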