Get Ready for X-Road 7

The long-awaited X-Road 7 “Unicorn” is almost here. Building the new version started in October 2020, so it has been more of a marathon than a sprint. The first public beta version of the Security Server was released in December 2020, and it gave a first look at the new major version. The first public release introduced a new visual style, but it didn't offer any functional changes yet. Since the initial release, many things have changed under the hood, and the new visual style has been polished. Now is an excellent moment to look the Unicorn in the mouth and get a comprehensive overview of X-Road 7.

X-Road 6 was initially released in 2015, and it has experienced several major changes during its lifecycle, but the core principles have remained the same. Over the years, X-Road 6 has proven to be secure, reliable, and scalable. Therefore, it was decided to use it as a basis for the next major version. Thanks to the solid foundation, there’s no need to reinvent the wheel. The aim is to keep all the good in X-Road 6 and get rid of the bad and the ugly. In other words, X-Road 7 will have all the strengths of X-Road 6 with numerous improvements in various areas. All in all, X-Road 7 is an evolutionary version of X-Road 6.

What happens to X-Road 6 when X-Road 7 is released?

X-Road’s current lifecycle policy is to support the latest version plus two previous versions. The supported versions are defined on the MAJOR.MINOR level, so the release of patch versions (MAJOR.MINOR.PATCH) does not affect support. The supported versions receive patches in case of bugs and vulnerabilities. Also, version upgrades between the supported versions are supported and tested. However, with X-Road 7.0.0, there's one exception – an upgrade to version 7.0.0 is supported from version 6.26.0 only. If you're running an older version of X-Road, you must first upgrade to version 6.26.0 before upgrading to version 7.0.0.

Version 6.26.0 is the last planned release for X-Road 6. According to X-Road’s lifecycle policy, it will be supported until version 7.2.0 is released. Currently, the planned release date for version 7.2.0 is November 2022. However, version 6.26.0 or any other older version doesn't stop working when a new X-Road version is released. Instead, older versions continue to work without any immediate effects.

Nevertheless, it’s strongly recommended to use a supported version of X-Road to be safe if bugs or security vulnerabilities are detected. In addition, X-Road member organisations should follow X-Road ecosystem-specific guidelines and policies regarding supported versions set by the X-Road operator. For example, a common practice is that the operator first upgrades the Central Server to the new version, and the member organisations are allowed to upgrade their Security Servers only after that. 

Is X-Road 7 backward compatible with X-Road 6?

The last major version upgrade, from X-Road 5 to X-Road 6, required a lot of work from X-Road member organisations because the message protocol between information systems and the Security Server changed. Upgrading the Security Server wasn't enough; the information systems required changes too. The good news is that X-Road 7 is backward compatible with X-Road 6 on the interface level since there are no changes in the Message Protocol for SOAP and the Message Protocol for REST. Upgrading the Security Server software is enough, and no changes to the connected information systems are required. However, the Service Metadata Protocol for REST and the Security Server REST management API have some changes that might not be fully backward compatible depending on the client application.

However, some changes in the Security Server's configuration are not backward compatible, and manual actions may be required depending on the current configuration. Overall, the estimated effort for the activities is low, and the version upgrade from version 6 to version 7 is more like a version upgrade between two X-Road 6 minor versions, e.g., version upgrade from version 6.23.0 to version 6.24.0.

How is X-Road 7 developed?

X-Road 7 will be implemented iteratively using agile software development methods. Not all the changes will be included in the first production version; instead, they will be introduced one by one over time in various X-Road 7 minor versions.

The development of X-Road 7 is divided into multiple high-level focus areas. Each focus area consists of several topics that will be turned into actual features. The high-level focus areas and their main topics are:

  • messaging patterns

  • message logs

  • onboarding process

  • architecture

  • operational insights

  • sustainability.

More information on the focus areas is available on the X-Road website.

What’s new in X-Road 7.0.0? 

In version 7.0.0, the focus is on the Security Server, and there are only a few minor changes on the Central Server side. The Security Server will have a new visual style that implements the X-Road 7 visual style guide, while the Central Server still has the version 6 visual style. Here’s a summary of changes included in version 7.0.0:

  • New X-Road 7 look and feel for the Security Server UI.

  • Security improvements on the Security Server:

    • Encrypt backup files (opt-in)

    • Verify integrity of backup files on restore.

  • Improvements in Security Server message logging:

    • Encrypt message payload in message log database (opt-in)

    • Encrypt message log archives (opt-in)

    • Group message log archives by member or subsystem (opt-in)

    • Support for fully disabling message logging.

  • Change PIN code on the Security Server.

  • Return REST API type (OPENAPI3 / REST) and API endpoints in REST metaservice responses.

  • Run the Security Server on Java 11 by default.

  • Make Security Server more modular by enabling installation without a local Postgres server.

  • Version compatibility check for version upgrades - it is no longer possible to upgrade from an unsupported version.

  • Official Docker support for the Security Server with the Security Server Sidecar images.

  • Other enhancements and bug fixes.

The development focus will shift to the Central Server in 2022 when it gets a new user interface and a management REST API. Also, the new Central Server will include several architectural changes, which will make X-Road more extensible and easier to operate.

What is the release schedule?

Version 7.0.0 will be released in Q4 / 2021. Before the final release version, there will be a public beta version available in October 2021. The beta version will include all the main features available in the final release, and it's targeted at anyone interested in testing the new version in advance. The official release notes that provide detailed information about all the changes included in the release will be published together with the beta. In addition, a separate migration guide that offers detailed information about any manual actions required in the upgrade process will be published.

This writing is the first part in a series of X-Road 7 related blog posts. More writings providing insights on X-Road 7 will be published on the NIIS blog during the following months. Stay tuned!

Reducing the environmental impact of X-Road

Introduction 

Nordic Institute for Interoperability Solutions (NIIS) has set a long-term goal to make X-Road the most sustainable data exchange solution in the world. To better understand the direct and tangible environmental impacts of the use of X-Road software, NIIS commissioned research to assess the current emissions profile across X-Road’s operations and services. In doing so, it delivers targeted recommendations for emissions reduction and sustainable business practices that may be integrated into future decision making.

The study was performed by Gofore and Stockholm Environment Institute Tallinn (SEI) in close collaboration with NIIS and the X-Road Governing Authorities in Estonia and Finland. Together, the experts built an emissions calculator to assess the emissions profile of an X-Road instance, subject to the emissions boundary defined in our first blog post. The calculations used in this process have been derived from peer-reviewed literature and tailored to X-Road use cases in Estonia and Finland, as described in our previous blog post. However, the emissions calculator is comprehensive enough to perform calculations for any instance residing in any region.

Results

The results of the calculator give a clear picture of the total carbon footprint of X-Road, as well as its individual components. The key findings demonstrate: 

  1. Around 96 % of total emissions are related to the operations of X-Road Security Servers. 

  2. Data transmission and storage provide marginal contributions to the total carbon footprint (around 1 % and 3 %, respectively).

  3. The annual carbon footprint for Estonia and Finland was approximated as 45,685 kg CO2e and 22,593 kg CO2e, respectively.

  4. The discrepancy between the two countries relates mostly to the difference in average electricity grid emission factor – this reflects the larger percentage of low carbon electricity sources in the Finnish grid and the relative reliance in Estonia on shale oil.

Based on these results, we have derived recommendations to manage or mitigate emissions, relevant to different stakeholder groups. These groups have been determined based on their different levels of access to and influence over the way X-Road is used: X-Road user organisations (X-Road members), the X-Road governing authorities overseeing the regulations guiding the service's application, NIIS as the product owner and developer of X-Road, and the broader public. The recommendations can briefly be summarised as follows:

  1. The future development of X-Road could ensure flexibility for stakeholders to disable or reduce the emissions of components, such as message logging and timestamping, subject to potential performance and security requirements.

  2. Wherever possible, NIIS can recommend an energy tracking application to enable emissions modelling of X-Road in real-time. This will integrate transparent monitoring of emissions in reporting and encourage users to evaluate ways to reduce emissions as they will have a reference (Business As Usual) data set to experiment with and understand their emissions footprint.

  3. X-Road governing authorities can provide clear information and support emissions reduction strategies, such as granting permission to host servers on the public cloud and reducing mandatory requirements for message logs and/or timestamping.

  4. X-Road members should implement X-Road services efficiently and in a manner optimised for emissions reduction from the outset. This could include ensuring that equipment is efficient and server utilisation is maximised. Alternatively, servers could be hosted on the public cloud – as it is beginning to be allowed for organisations requiring the highest level of security, such as UK Defence – if permitted by local regulations and the governing authority.

  5. Where the option exists, infrastructure should be powered using renewable electricity. Moreover, best practice should be followed, such as using power-efficient hardware devices and optimised data compression. 

Process of the study

As discussed in previous blog posts, this study was divided into three phases, with results and methods published for feedback by a steering committee and technical experts after each phase. The phases were as follows:

  1. Determining an emissions boundary and mapping the main causes of environmental impacts of the X-Road instance.

  2. Building a Carbon Footprint calculator for X-Road, based on best practice and X-Road use cases.

  3. Defining recommendations for improving the sustainability of X-Road.

The calculator was designed to ensure a high degree of flexibility while obtaining directional results that could be accepted with high confidence. This ensured validity across the instances in Finland and Estonia and across the varying circumstances and technical literacy of different X-Road users. 

Outcomes

The report is available here and the simplified X-Road Emissions Calculator is available here.

The X-Road Carbon Emissions Calculator – Methodology and Results

Introduction 

To support the Nordic Institute for Interoperability Solutions’ (NIIS) mission of making X-Road the ‘most sustainable data exchange solution in the world’, a study is being carried out by Gofore and Stockholm Environment Institute Tallinn to assess the carbon impacts of the solution. In the first stage, we defined the scope of the project and described what would and wouldn’t be included. We have since proceeded a step further and developed a unique emissions calculator that measures the carbon footprint of X-Road’s operations. The calculator was built using data from the instances in Estonia and Finland but is comprehensive enough to perform calculations for any instance residing in any region.

From energy to emissions

The methodology proceeded in two stages. First, the team developed a model to understand the electricity burden of operating an X-Road Security Server. Once the electricity consumption of one Security Server was determined, the result was multiplied by the total number of Security Servers in an entire X-Road ecosystem. Next, this was converted into an estimate of released emissions by multiplying the value by an ‘emission factor’ for grid electricity. The emission factor describes the emissions for each unit of electricity and considers all the different sources of electricity generation within a country or region. This is important, as the content of this ‘electricity mix’ can be wildly different between countries, leading to very different emissions outcomes for the same energy use. Results are reported in units of CO2 equivalent (CO2e), which includes other important greenhouse gases, such as methane. For these calculations, the emission factors published by the Association of Issuing Bodies (AIB) for 2019 are used.
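As a rough sketch, the two-stage conversion can be expressed as follows. The per-server average power (100 W) is an illustrative assumption of ours, not a value from the study; the server counts and AIB 2019 grid emission factors are the ones reported elsewhere in the text.

```python
# Sketch of the two-stage conversion: electricity per server -> ecosystem
# electricity -> CO2e emissions. The 100 W per-server average draw is an
# illustrative assumption, NOT a value from the study; the server counts and
# AIB 2019 grid emission factors are the ones reported for the two instances.

def ecosystem_emissions_kg(kwh_per_server_year, server_count, grid_factor_kg_per_kwh):
    """Annual emissions in kg CO2e for all Security Servers in one instance."""
    total_kwh = kwh_per_server_year * server_count
    return total_kwh * grid_factor_kg_per_kwh

kwh_per_server = 0.1 * 24 * 365  # 100 W average draw -> 876 kWh/year

estonia_kg = ecosystem_emissions_kg(kwh_per_server, 173, 0.723)
finland_kg = ecosystem_emissions_kg(kwh_per_server, 290, 0.136)
```

Note how the same model, fed the same per-server figure, produces very different emissions purely because of the grid emission factor.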

The model

The model was refined in subsequent steps based on new literature findings, updated assumption testing within the internal team, and interviews with end-users and industry experts. As it is not possible to investigate every single source of emissions within a single project, it is important to focus on the largest ones, which define the size of the carbon footprint. As such, three main sources of emission were identified: infrastructure, data transaction and data storage. In this context, the ‘infrastructure’ component only included the Security Server, which is required to process data and enable secure data exchange. The team hypothesized this would be the main source of emissions, since multiple processors can collectively contribute vast amounts of heat emission and energy consumption. In addition, data storage was considered as a separate component to account for the recording of all the transactions occurring over X-Road servers.

Infrastructure

The main physical infrastructure that enables secure data exchange through X-Road is the Security Server. For simplification, only the CPU and RAM are considered, as these components are responsible for almost all the energy consumption in a Security Server. A Fujitsu server, model “FUJITSU Server PRIMERGY RX1330 M4”, is assumed to be the standard server throughout the calculations. This serves as a good representation of a sample server employed across the two instances, based on surveys conducted with the Finnish and Estonian authorities. The server employs an Intel® Xeon® E-2288G Processor with 16M cache, 3.70 GHz and 8 cores. Values for the energy use of this server were obtained from a widely used published database, SPECpower_ssj2008, in which its entry was published in 2019.

The processor’s energy consumption can be modelled as a directly proportional relationship between CPU utilization and Average Active Power. In turn, the RAM energy consumption model is based on a study conducted by Pedro H. P. Castro et al. The RAM’s power consumption is divided into background power (depends only on memory states and on the frequency of the operation) and operational power (product of memory bandwidth and the power required to run a particular command). The total power consumption was taken as the sum of these two individual components.
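A minimal sketch of this two-part power model is shown below. All coefficients (idle/peak CPU power, RAM background power, per-bandwidth power) are illustrative placeholders, not the values used in the study.

```python
# Sketch of the Security Server power model: CPU power scales linearly with
# utilization; RAM power = background power + bandwidth-proportional
# operational power. All coefficients are illustrative placeholders, not the
# values used in the study.

def cpu_power_w(utilization, idle_w=20.0, max_w=110.0):
    """Average active power for a CPU utilization between 0.0 and 1.0."""
    return idle_w + utilization * (max_w - idle_w)

def ram_power_w(bandwidth_gb_s, background_w=2.5, w_per_gb_s=0.4):
    """Background power (depends on memory state and frequency) plus
    operational power (memory bandwidth times per-command power)."""
    return background_w + bandwidth_gb_s * w_per_gb_s

def server_power_w(utilization, bandwidth_gb_s):
    """Total modelled power: sum of the CPU and RAM components."""
    return cpu_power_w(utilization) + ram_power_w(bandwidth_gb_s)
```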

A final factor to consider is the energy consumption of the supporting infrastructure. This is widely accounted for in the industry through a factor known as power usage effectiveness (PUE). PUE is the ratio of the total energy entering a facility to the energy used by the IT devices alone, i.e. it captures the overhead of the supporting infrastructure (in this project, a PUE of 1.58 was used). Thus, the total energy consumption of a Security Server is calculated as the product of its electricity consumption and the PUE.
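With the project's PUE of 1.58, scaling IT-device energy up to facility-level energy is a single multiplication:

```python
# Scale IT-device energy to facility-level energy with the project's PUE.
PUE = 1.58  # power usage effectiveness used in this project

def facility_energy_kwh(it_energy_kwh, pue=PUE):
    """Total energy = energy drawn by the IT devices times the PUE factor,
    which accounts for cooling, power distribution and other overhead."""
    return it_energy_kwh * pue
```

For example, 1,000 kWh of Security Server electricity implies 1,580 kWh at the facility level.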

Transaction

Work by Aslan et al. pointed to an average data transmission efficiency of 0.06 kWh/GB for fixed-line transmission, a value that was also observed empirically to halve every 2 years. For 2021, including the expected efficiency gains, this value comes to 0.0075 kWh of electricity consumed per GB of data transferred over the local internet. The methodology simply involves taking the product of the total amount of data exchanged over X-Road and the electricity consumption per GB of data transferred.
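The 2021 figure can be reproduced from the baseline implied by the numbers above (0.06 kWh/GB, halving every two years, gives 0.0075 kWh/GB three halvings later, i.e. a 2015 baseline):

```python
# Transmission energy intensity: 0.06 kWh/GB fixed-line baseline (the 2015
# starting point implied by the figures above), halving every two years.

BASELINE_KWH_PER_GB = 0.06
BASELINE_YEAR = 2015

def intensity_kwh_per_gb(year):
    """Projected energy intensity of fixed-line data transmission."""
    return BASELINE_KWH_PER_GB * 0.5 ** ((year - BASELINE_YEAR) / 2)

def transmission_energy_kwh(data_gb, year=2021):
    """Energy for transferring data_gb gigabytes over X-Road in a given year."""
    return data_gb * intensity_kwh_per_gb(year)
```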

Storage

According to bilateral exchanges of information with different X-Road members, data is stored on hard disk drives (HDDs). A study conducted by Adam Lewis et al. from Athens State University outlines a comprehensive approach that enables accurate energy consumption calculations for data storage on an HDD. To calculate the energy consumption of data storage, the HDD’s start-up power, power consumption during data writing, power consumption in idle mode and, finally, power consumption in standby mode are each calculated and summed.
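A sketch of this phase-based HDD model follows; the per-phase wattages are illustrative placeholders, not the study's values.

```python
# Phase-based HDD energy model: sum power draw over the time spent in each
# operating phase. The per-phase wattages are illustrative placeholders.

HDD_PHASE_POWER_W = {
    "startup": 20.0,   # spin-up
    "write": 6.0,      # writing data
    "idle": 4.0,       # spinning, no I/O
    "standby": 0.5,    # spun down
}

def storage_energy_kwh(hours_per_phase):
    """hours_per_phase maps a phase name to the hours spent in that phase."""
    wh = sum(HDD_PHASE_POWER_W[phase] * hours
             for phase, hours in hours_per_phase.items())
    return wh / 1000.0
```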

Carbon footprint results

The following table summarizes the total emissions of the Estonian and Finnish instances.

Table 1. Estimated carbon footprint summary of X-Road operations in Estonia and Finland.

The total carbon dioxide equivalent (CO2e) emissions amount to 45,685 kg for Estonia and 22,593 kg for Finland. The results include emissions from all three main operations (i.e., infrastructure, transaction and storage). Although the Finnish instance has many more Security Servers than the Estonian instance (290 versus 173), the disparity in the electricity emission factor (0.723 kg of CO2e/kWh for Estonia and 0.136 kg of CO2e/kWh for Finland) is the key reason for the contrast between the aggregated emission amounts of the two regions. A comparison between the emissions of the three main operations allows us to identify and analyse the areas which contribute most significantly to the total emissions. The resulting emissions from electricity consumption are depicted in the figure below.
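Dividing the reported totals by the respective grid emission factors (a back-of-the-envelope derivation of ours, not a figure from the report) makes the role of the emission factor explicit: the Finnish instance actually consumes more electricity, yet emits less than half as much CO2e.

```python
# Back-of-the-envelope derivation (not a figure from the report): dividing
# the reported emissions by the grid emission factors gives the implied
# electricity consumption of each instance.

EMISSIONS_KG = {"Estonia": 45685, "Finland": 22593}   # reported kg CO2e
GRID_FACTOR = {"Estonia": 0.723, "Finland": 0.136}    # kg CO2e per kWh

implied_kwh = {country: EMISSIONS_KG[country] / GRID_FACTOR[country]
               for country in EMISSIONS_KG}
# Estonia: ~63,200 kWh; Finland: ~166,100 kWh - Finland consumes more
# electricity (more Security Servers) yet emits less than half as much.
```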

Figure 1. Emission by main operations in the Finnish and Estonian instances

The electricity consumption (and therefore emissions) of the Security Servers dominates the total share in comparison to data transaction and storage. Emissions from servers account for 96% of the total, amounting to 21,617 kg of CO2e for the Finnish instance and 43,999 kg of CO2e for the Estonian instance. This directly points to the primary place to focus on in order to make X-Road’s operations more sustainable. Data storage contributes a mere 3%, amounting to 1,507 kg of CO2e for Estonia and 950 kg of CO2e for Finland. The remainder, less than 1% for both instances, comes from data transaction.

In conclusion, the calculator provides a holistic view of the total emissions due to X-Road’s operations. For the scope of this project, calculations are done for Estonia and Finland, though the calculator is configurable for any instance in any region. The methodology aims to focus emission calculations on the key areas that emerge as having the greatest emissions, and to be flexible to the different circumstances of X-Road members. As there are hundreds of different Security Servers and storage devices spread all across Finland and Estonia, it is impossible to track down the exact models of the components with their corresponding power consumption specifications and data exchange numbers. We therefore hope our approach strikes the ideal balance between accuracy and flexibility.

We welcome your feedback on the methodology and results and will shortly publish more details in a formal report. The next stage of the project will develop a series of recommendations for NIIS and X-Road members to ensure carbon emissions are minimized without compromising the effectiveness of the X-Road service. 


Adil Aslam is a Junior Expert at SEI Tallinn and joined the Climate and Energy programme in November 2020.

Adil has diverse experience in manufacturing, energy consultancy, modelling and project management. He is an expert in conventional and renewable energy systems while being proficient in a whole range of energy simulation software. He is adept in programming and has used Python and Matlab to develop various energy models. His interests lie in energy systems, sector coupling, green finance, sustainable business models, electricity markets and simulations.

He is currently working on simulating hydrogen potential in Estonia, climate neutral scenarios for 2050 and providing his expertise in various energy related projects. He actively takes part in the company’s business development operations and aims to explore different business opportunities.

He graduated from Technische Universität Berlin, Germany in 2020 with a Master’s in Business Engineering Energy. He did his Master’s thesis at Forschungszentrum Jülich (one of the largest interdisciplinary research centres in Europe) and the Institut für Stromrichtertechnik und Elektrische Antriebe (ISEA) at RWTH Aachen, where he worked on modelling an energy management system for residential PV home systems to sustain blackouts.

Introducing X-Road Emissions Calculator Project

Introduction

NIIS has set an ambitious goal to make X-Road the 'most sustainable data exchange layer solution in the world' and thereby align its operational model with the climate and sustainable development goals articulated under the Paris Agreement and the 2030 agenda for sustainable development. That is why X-Road's environmental impact is being identified by NIIS, Gofore and the Stockholm Environment Institute (SEI).

The study is being carried out by a team combining the sectoral experience of Gofore with the academic competence of SEI Tallinn. Gofore's experts have experience with X-Road core development processes, X-Road implementations, and X-Road service development. SEI Tallinn is a leading sustainability think tank, ranked #2 globally in the environmental field by the University of Pennsylvania's think tank ranking. SEI Tallinn has long-standing experience in climate and energy policy scenarios, carbon footprint calculations and life-cycle impact assessments.

The current project measures the carbon footprint of X-Road operations within Estonia and Finland. It will give recommendations for all related parties, including X-Road governing authorities and user organisations around the world.

The project is in its early stages. The partners have scoped the most important environmental impacts of an X-Road instance and outlined the initial approach for the X-Road carbon footprint calculation, which is expected to be carried out from February to March 2021. Based on this, additional feedback from all interested parties is now requested to improve the project outcomes.

Defining the scope

Figure 1: The architecture of X-Road

A critical first stage in determining environmental impacts is defining the boundary beyond which further emission sources shall not be included. In the case of X-Road, this is based on a rigorous understanding of the relevant operational and infrastructural components involved in the service, as defined in purple in the figure above. The essential operation is exchanging messages and data between two X-Road members over a pair of Security Servers. It is thought that the impact of the Central Server is likely to be quite minor and shall not be considered within the calculation.

In this context, the potential sources of emissions associated with X-Road have therefore been mapped, and can broadly be associated with the following categories:

  • Life-cycle of the components used by infrastructures that allow X-Road's operations

    • Infrastructure for running a Security Server that can be on-premises or in the cloud.

    • Physical device that might be necessary for holding the keys and certificates that assure the identification of the members.

  • Energy consumption of the Security Server infrastructure used by X-Road members or governing authorities.

  • Additional energy consumption for exchanging messages / using X-Road services.

However, a decision was reached not to include the life-cycle of different components within the footprint calculation. This is because similar studies have indicated that these areas provide only a small contribution to the total emissions relative to the use phase. This notwithstanding, a detailed analysis of X-Road's infrastructure will lead to a series of effective recommendations to minimise the life-cycle emissions. This means the calculation of emissions consists of two aspects: the operational emissions from hardware and those associated with the transmission of data.

The quantification and subsequent recommendations shall further proceed via a relative comparison between different 'use cases'. These provide a range of current and prospective parameters by which service can be defined, and thereby also a complete range of potential emission impacts within the study. Building on the experiences of Estonia and Finland, the project plans to showcase the changes to carbon footprint when changes in the following parameters are considered:

  • Infrastructure level

    • The server used: on-premises infrastructure vs the cloud

    • Trust level: with and without a physical signing device

  • Service level:

    • Message size across different services

    • Message log requirements (whole message, metadata only, or no message logging).

Call for interested parties

There have only been a limited number of studies to assess the climate impacts of specific software. The continued growth in digital services and infrastructure also means that the importance of such emissions is only likely to increase further in the future.

Beyond showcasing the proposed methodology, the project partners are further open to making use of all relevant expertise to maximise the project's success. This can be both in terms of changes to project scope, or suggestions for improving the calculation methodology. Interested parties should contact NIIS.

Report: study of the environmental impact of X-Road and the possibilities of reducing it


The author Peter Robert Walke joined SEI Tallinn in September 2020. He works in the sustainable development programme on a project related to developing quantitative tools for assessing the role of spatial planning policies at different scales on greenhouse gas emissions.

Peter’s background is in the physical sciences and he holds a PhD in chemistry from Katholieke Universiteit Leuven in Belgium, where he worked in nanoscience. Before that he obtained an MSci in Natural Sciences from University College London, and is currently also a postdoctoral researcher in the department of materials and environmental technology at Tallinn University of Technology. He hopes to now successfully apply his skills in research and data analysis to a new field.

Resisting Failure

X-Road is based on a distributed architecture, which makes it extremely resilient to failure. Data is always exchanged directly between a service provider and a service consumer without third parties or intermediaries having access to it. Each data exchange party may have one or more Security Servers, but data is always exchanged between two Security Servers in a one-to-one fashion. Therefore, the failure of a single Security Server only affects services available on the Security Server in question, and all other Security Servers and services of the ecosystem remain unaffected.

Image 1. X-Road is based on a distributed architecture.

Despite the distributed architecture, an X-Road ecosystem includes components that affect the availability of all the Security Servers. The good news is that the resiliency of the ecosystem against a failure of these components can be controlled and adjusted using various measures. What are these components, and how can the X-Road ecosystem be protected against their failure? Let’s find out!

Central Server

The Central Server is one of the critical components of an X-Road ecosystem. It contains a registry of X-Road member organisations and their Security Servers. Also, the Central Server contains the security policy of the X-Road instance, which includes a list of trusted certification authorities, a list of trusted time-stamping authorities, and configuration parameters. Both the member registry and the security policy are made available to the Security Servers over HTTP. This distributed set of data forms the global configuration that the Security Servers use for mediating messages sent via X-Road.

To be able to mediate messages, the Security Server must have a valid copy of the global configuration available at all times. The Security Server downloads the global configuration from the Central Server regularly and uses a local copy while processing messages. The Security Server remains operational as long as it has a valid copy of the global configuration available locally. This means that the Central Server may be unavailable for a limited time without causing any downtime to the ecosystem. However, registering new members or subsystems is not possible without the Central Server.

By default, the Security Server refreshes the global configuration every 60 seconds, and the configuration is valid for 10 minutes. It means that the Central Server may be unavailable for 9 minutes without affecting the ecosystem. Once the local copies of the global configuration on Security Servers expire, the message processing stops. When the Central Server starts to publish the global configuration again, the message processing continues. However, the Security Server does not queue messages or provide support for resending failed messages. It’s a service consumer’s responsibility to resend any failed messages regardless of the reason for the failure.

The global configuration download interval is configured using the “configuration-client.update-interval” parameter on the Security Server, and the default value can be overridden locally by the Security Server administrator. In contrast, the global configuration validity period is configured using the “confExpireIntervalSeconds” parameter on the Central Server by the X-Road operator, and it cannot be changed on the Security Server. Therefore, all the Security Servers registered to the same X-Road ecosystem respect the same global configuration validity period. The download interval and global configuration validity period should be configured according to the requirements of the X-Road ecosystem. However, it is highly recommended to increase the global configuration validity period from minutes to hours or days.
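A simplified sketch of how the two parameters interact (the exact worst case also depends on when the Central Server generated the configuration relative to the last download):

```python
# Sketch of how the two parameters above interact. With the defaults
# (60 s download interval, 10 min validity), the worst case is a local copy
# that is already one download interval old when the outage begins.

UPDATE_INTERVAL_S = 60     # configuration-client.update-interval (Security Server)
CONF_EXPIRE_S = 10 * 60    # confExpireIntervalSeconds (Central Server)

def max_tolerated_outage_s(update_interval_s=UPDATE_INTERVAL_S,
                           expire_s=CONF_EXPIRE_S):
    """Approximate longest Central Server outage that leaves every Security
    Server with a still-valid local copy of the global configuration."""
    return expire_s - update_interval_s
```

With the defaults this gives 540 seconds, i.e. the 9 minutes mentioned above; raising the validity period to hours or days extends the tolerance accordingly.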

In addition, the Central Server supports high availability through clustering. A Central Server cluster consists of two or more Central Server nodes. In case one of the nodes fails, the Security Servers can fail over to the other available nodes. In a clustered environment, only a simultaneous problem with all the Central Server nodes would cause a situation where there isn’t a valid version of the global configuration available.

OCSP responder service

A certification authority (CA) issues certificates to Security Servers (authentication certificates) and X-Road member organizations (signing certificates). Authentication certificates are used for securing the connection between two Security Servers. Signing certificates are used for digitally signing the messages sent by X-Road members. Only certificates issued by trusted certification authorities that are defined on the Central Server by the X-Road operator can be used. The information about trusted certification authorities is distributed to the Security Servers in the global configuration.

The Security Server checks the validity of the signing and authentication certificates via the Online Certificate Status Protocol (OCSP, RFC 6960). An OCSP responder service providing the status information is maintained by the certification authority that issued the certificates. Each Security Server is responsible for querying the validity information of its certificates and then sharing the information with other Security Servers as a part of the message exchange process. Only Security Servers with valid authentication certificates and members with valid signing certificates can exchange messages. If the validity information is not available or a certificate is not valid, the message exchange fails.

To be able to mediate messages, the Security Server must have valid copies of the OCSP responses for its authentication and signing certificates available at all times. The Security Server downloads the OCSP responses from the OCSP responder service regularly and uses the local copies while processing messages. The Security Server remains operational as long as it has valid copies of the OCSP responses available locally and the certificates are valid. This means that the OCSP responder service may be unavailable for a limited time without causing any downtime to the ecosystem. How long the OCSP responder may be unavailable without affecting the ecosystem depends on various factors.

The Security Server fetches new OCSP responses at a fixed interval, 20 minutes by default. The fetch interval is configured on the Central Server using the “ocspFetchInterval” configuration parameter by the X-Road operator. The Security Server considers an OCSP response expired if it was issued too far in the past or if newer status information is already available. The validity period is defined on the Central Server using the “ocspFreshnessSeconds” configuration parameter by the X-Road operator. By default, the Security Server considers an OCSP response expired when newer status information is available, that is, when the “nextUpdate” attribute in the OCSP response is in the past. However, the “nextUpdate” attribute can be ignored so that “ocspFreshnessSeconds” alone defines the validity period of an OCSP response. Ignoring the “nextUpdate” attribute is configured on the Central Server using the “verifyNextUpdate” configuration parameter by the X-Road operator.

All in all, an X-Road ecosystem’s resiliency to failures of an OCSP responder service is controlled through three configuration parameters that are all set on the Central Server by the X-Road operator and distributed to the Security Servers in the global configuration. The “ocspFetchInterval” parameter defines how often the OCSP responses are refreshed, the “ocspFreshnessSeconds” parameter specifies the validity period of the responses, and the “verifyNextUpdate” parameter defines whether the “nextUpdate” attribute in the OCSP response is taken into account. The most resilient configuration can be achieved by keeping the fetch interval short, keeping the validity period long, and ignoring the “nextUpdate” attribute. Moreover, when the “nextUpdate” attribute is ignored, it’s also possible to increase the validity period during a service break of the OCSP responder service, which buys more time to solve the problem without affecting the ecosystem.
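As an illustration, the values below are examples only, not recommendations, and the exact mechanism for setting them on the Central Server is out of scope here. A consistent, resilient combination could look like this:

```ini
; Example values only. Set on the Central Server by the X-Road operator and
; distributed to the Security Servers in the global configuration.
ocspFetchInterval    = 1200    ; refresh OCSP responses every 20 minutes (the default)
ocspFreshnessSeconds = 28800   ; responses remain valid for 8 hours
verifyNextUpdate     = false   ; ignore the "nextUpdate" attribute
; Note: ocspFetchInterval must remain smaller than ocspFreshnessSeconds, or the
; responses expire before new ones are fetched.
```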

In addition, after the first failed OCSP request, the Security Server switches from the regular OCSP fetch interval to a failure mode in which it attempts to fetch OCSP responses once a minute, by default. After the first successful OCSP request, the Security Server switches back to the regular interval.

It’s also good to be aware that not all OCSP responder services include the “nextUpdate” attribute in their OCSP responses. Usually, OCSP responder services that are based on a certificate revocation list (CRL) include the attribute, but real-time OCSP services don’t. A CRL-based OCSP service reads certificate statuses from a static CRL that’s refreshed regularly. In contrast, a real-time OCSP service checks certificate statuses in real time. In case the “nextUpdate” attribute is missing from the OCSP response, the “ocspFreshnessSeconds” parameter alone defines the validity period of the response, just as when the attribute is ignored using the “verifyNextUpdate” parameter.

When considering the values of the three parameters, it’s essential to consider how they affect the evidential value of the logged messages. Since the OCSP response of the signing certificate is used to check the validity of the certificate that’s used to sign messages, the age of the OCSP response may affect the validity of the signature. Therefore, it is vital to understand the legal consequences that allowing the use of old OCSP responses may have. From a technical perspective, it is equally important that the values of the three configuration parameters are aligned with each other and with the policies of the certification authority. For example, the “ocspFetchInterval” parameter must be smaller than the “ocspFreshnessSeconds” parameter; otherwise, the Security Server considers the responses expired before new ones are fetched.

Time-stamping service 

All the messages sent via X-Road are time-stamped and logged by the Security Server. The purpose of the time-stamping is to certify the existence of data items at a certain point in time. A time-stamping authority (TSA) provides a time-stamping service that the Security Server uses to time-stamp all the incoming/outgoing requests/responses. Only trusted TSAs that are defined on the Central Server by the X-Road operator can be used. The information about trusted TSAs is distributed to the Security Servers in the global configuration. The approved time-stamping authorities must implement the time-stamping protocol (RFC 3161) supported by X-Road.

By default, X-Road uses batch time-stamping, which means that messages processed since the previous batch run that do not yet have a time-stamp are time-stamped once a minute. The time-stamping interval is defined on the Central Server using the “timeStampingIntervalSeconds” parameter by the X-Road operator, and it cannot be changed on the Security Server. If time-stamping fails, the Security Server continues to process messages until the acceptable time-stamping failure limit is reached. By default, the limit is 4 hours, and it’s configured on the Security Server using the “message-log.acceptable-timestamp-failure-period” parameter. The default value can be overridden locally by the Security Server administrator. When the limit is reached, the Security Server stops processing messages. When the time-stamping service becomes available again, all the messages missing a time-stamp are time-stamped, and the Security Server resumes normal operations.
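The failure limit can be overridden in the Security Server’s local configuration file; a sketch, assuming the standard `/etc/xroad/conf.d/local.ini` location (the value shown equals the 4-hour default, expressed in seconds):

```ini
; /etc/xroad/conf.d/local.ini
[message-log]
; Keep processing messages for up to 4 hours (14400 seconds) after
; time-stamping starts failing.
acceptable-timestamp-failure-period=14400
```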

In addition, after the first failed time-stamping attempt, the Security Server switches from the regular time-stamping interval to a failure mode in which time-stamping is attempted once a minute, by default. After the first successful time-stamp, the Security Server switches back to the regular time-stamping interval.

The Security Server also supports automatic failover between time-stamping services if it has more than one configured time-stamping service. It means that the Security Server tries time-stamping with all the configured services until time-stamping succeeds or all the configured services have failed. The behavior is repeated for every batch.

Alternatively, the Security Server can be configured to time-stamp messages synchronously. This means that every message is time-stamped immediately, and if time-stamping a message fails, processing the message fails too. In case a security policy requires that every processed message is time-stamped within a defined time window, this configuration option can be used to guarantee it. However, the downside of synchronous time-stamping is that it increases the load on the time-stamping service tremendously compared to batch time-stamping. When batch time-stamping is used, the load does not depend on the number of messages exchanged over X-Road; instead, it depends on the number of Security Servers in the ecosystem. Another downside of synchronous time-stamping is that it increases the processing time of each message, since the time-stamping is done as a part of the message processing flow. In practice, four time-stamping operations (the request and the response on both the consumer’s and the provider’s Security Server) are added to the end-to-end processing time of each message exchange.
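As a sketch, synchronous time-stamping is switched on in the message log configuration of the Security Server; the parameter name below comes from the Security Server system parameters documentation, and the file location is assumed to be the standard one:

```ini
; /etc/xroad/conf.d/local.ini
[message-log]
; true = time-stamp every message synchronously as part of message processing,
; false = use batch time-stamping (the default).
timestamp-immediately=true
```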

All in all, an X-Road ecosystem’s resiliency to failures of a time-stamping service is managed through several factors. The time-stamping interval and the number of available time-stamping services are defined on the Central Server by the X-Road operator. In contrast, the acceptable time-stamping failure period and the time-stamping mode (batch / synchronous) are defined on the Security Server, and the default values can be overridden locally by the Security Server administrator. The most resilient configuration can be achieved by using batch time-stamping, keeping the time-stamping interval short, keeping the acceptable time-stamping failure period long, and configuring multiple time-stamping services on the Security Server.

However, just like with the OCSP related configuration, it’s essential to consider how the selected values affect the evidential value of the logged messages. For example, the age of a time-stamp may affect its evidential value from a legal perspective. Also, whether it is acceptable to have messages without a valid time-stamp must be considered, and the time-stamping mode (batch / synchronous) should be selected accordingly.

Conclusions

An X-Road ecosystem is exceptionally resilient to failures. Different components may fail separately or at the same time, and the ecosystem is still capable of processing messages and transferring data. How long a single component may be unavailable without affecting the ecosystem depends on the configuration of the ecosystem and the configuration of individual Security Servers. The X-Road operator is responsible for defining and managing the ecosystem’s configuration. Still, the Security Server administrators may define some configuration items locally since the requirements may vary between organizations and Security Servers.

The values of different configuration items vary between X-Road ecosystems, and they depend on the requirements and constraints regarding availability, the evidential value of the logs, costs, etc. Also, financial factors play a role when defining the OCSP fetch interval and time-stamping interval since some commercial trust service providers request a transaction-based fee for the use of their services. In those cases, costs can be optimized by adjusting the intervals without forgetting the legal requirements regarding the age of the OCSP responses and time-stamps. All in all, the configuration should be in balance between different requirements and constraints. Sometimes it may require compromises between objectives.

Security Server Sidecar (part 3)

This is a series of blog posts about X-Road® and containers. The first part provides an introduction to containers and container technologies in general. The second part concentrates on the challenges in containerizing the Security Server. The Security Server Sidecar – a containerized version of the Security Server – is discussed in the third part.

Security Server Sidecar is a containerized version of the Security Server that supports production use. The Sidecar is a Docker container that runs in the same virtual context (virtual host, Kubernetes Pod, etc.) as an information system. The containerized approach makes running the Security Server more cost-effective since no separate host server needs to be allocated for each Security Server. Moreover, more and more information systems exchanging data over X-Road are running in containers too, so it’s beneficial to be able to run the Security Server on the same platform as the information systems connected to it.

The Sidecar solves the challenges related to running the Security Server in a container, and it uses the standard release versions of the Security Server software. In other words, the Sidecar is built from pre-built packages of the official X-Road releases, and it is a separate project that builds on the X-Road core.

The Sidecar project

From an administrative perspective, Security Server Sidecar is a project of the Finnish Digital Agency (DVV) that is implemented in collaboration with NIIS. The DVV owns the project, and NIIS is responsible for coordinating the daily development activities. All the deliverables are released on NIIS’s GitHub and Docker Hub accounts. The project is currently ongoing, and it will be completed by the end of 2020.

The project will produce a Security Server Sidecar Docker image with a couple of alternative configurations. The Sidecar slim is a lightweight version of the Security Server that does not include the message log, operational monitoring, or environmental monitoring modules. This means that the slim version does not log messages or provide any monitoring capabilities. However, it can technically be used for both consuming and providing services if the aforementioned capabilities are not required.

In contrast, the regular Sidecar includes the message log, operational monitoring, and environmental monitoring modules, providing all the features of a full-blown Security Server installation. Like the slim version, it can be used for both consuming and providing services. In addition, versions with country-specific meta-packages are available; currently, the only country-specific configuration is the Finnish meta-package.
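In practice, choosing a variant comes down to picking the right image tag. The following sketch only prints a `docker run` command instead of executing it; the version tag, variant suffixes, ports, and environment variable names reflect the Sidecar documentation at the time of writing and should be checked against the `niis/xroad-security-server-sidecar` repository on Docker Hub:

```shell
#!/bin/sh
# Pick an image variant: "" = regular, "-slim" = slim,
# "-fi" / "-slim-fi" = Finnish meta-package variants.
VERSION="6.26.0"
VARIANT="-slim"
IMAGE="niis/xroad-security-server-sidecar:${VERSION}${VARIANT}"

# Print the docker run command (4000 = admin UI, 8080 = consumer access).
cat <<EOF
docker run --detach --name ss-sidecar \\
  -p 4000:4000 -p 8080:8080 \\
  -e XROAD_TOKEN_PIN=<pin> \\
  -e XROAD_ADMIN_USER=<user> \\
  -e XROAD_ADMIN_PASSWORD=<password> \\
  ${IMAGE}
EOF
```

The slim variants omit the message log and monitoring modules, which results in a smaller image and memory footprint.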

In addition to the Security Server Sidecar Docker image, the project also produces documentation to support the use of the image. The documentation will cover best practices and examples of how to run the image on a Kubernetes cluster using Elastic Kubernetes Service (EKS) on the Amazon Web Services (AWS) cloud platform.

What is a sidecar? 

In general, the sidecar is a design pattern commonly used in microservices architectures. A sidecar is an additional component attached to a parent application to extend its functionality. The pattern aims to divide the functionalities of an application into separate processes, which allows adding new capabilities to an application without changing the application itself. In this way, a sidecar is loosely coupled with the application. For example, logging and monitoring are functionalities that are often implemented using a sidecar.

Applying the sidecar pattern to the Security Server

When using the regular Security Server version on a Linux host, it’s strongly recommended that the Security Server runs on its own host and not on the same host as an information system connected to it, which means that at least two separate hosts are required. In contrast, the idea of the sidecar architecture pattern is that an application and its sidecar run on the same host or in the same context, close to each other. With the containerized Security Server, this goal can be achieved since the Security Server is packaged in a container that runs in its own isolated process.

Image 1. The Security Server Sidecar and an application in the same virtual context.

The original idea of the sidecar pattern is that multiple copies of the same sidecar are attached to the application so that each instance of the application has its own sidecar. In case different applications use the same sidecar, the same approach applies to all the applications and their instances.

Image 2. A single Security Server Sidecar instance is shared between multiple instances of an application, and between different applications.

Despite its name, the original sidecar pattern does not work very well with the Security Server Sidecar since the Sidecar requires the same configuration and registration process as the regular Security Server. Also, even though the Security Server is containerized, the footprint of the Sidecar container is still relatively large compared to the footprint of an average container. Therefore, it’s recommended that a single Sidecar container is shared between multiple instances of an application, and it may also be shared between different applications. For high availability and scalability, a Sidecar cluster consisting of a primary node and multiple secondary nodes can be considered. Let’s take a closer look at the different deployment alternatives next.

Running the Sidecar on Kubernetes

The Sidecar can be deployed to different container management systems, thanks to standardization. One of the most popular container management systems is Kubernetes, which is available as a service on multiple cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. Kubernetes is open-source, which means that it can be used in on-premise and private cloud environments too. In this case, we’re going to concentrate on running the Sidecar on AWS Elastic Kubernetes Service (EKS). During the development project, the Sidecar has been tested using the Docker Engine and EKS. 

Since the Security Server is a stateful application, it is required that an external database and persistent file storage are used in all the deployment alternatives. In this case, Amazon Relational Database Service (RDS) is used for the Security Server databases, and a Kubernetes persistent volume is used to store the configuration files.

Before going into the deployment models, a few words about Kubernetes and Pods since Pods play an essential role in the deployment models. In Kubernetes, a Pod is a group of one or more containers that run in a shared context, share the same storage and network resources, and share a specification of how the containers are run. Each Pod runs a single instance of an application, and scaling the application horizontally means using multiple Pods, one for each instance of the application.

Deployment is another Kubernetes concept that’s essential to the Sidecar deployment models. A Kubernetes deployment represents an application that consists of a set of identical Pods. The deployment specification defines the configuration of the Pods and the number of replicas to run. The deployment maintains the Pods and ensures that the correct number of Pods is running. It’s also possible to create a horizontal autoscaler for a deployment that automatically scales the number of running Pods based on the selected metrics.
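As a sketch of how these concepts map to the Sidecar, a minimal single-replica deployment could look like the following; the image tag, labels, and volume claim name are illustrative assumptions, not the project’s official manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: security-server
spec:
  replicas: 1                 # the single Security Server model: exactly one Pod
  selector:
    matchLabels:
      app: security-server
  template:
    metadata:
      labels:
        app: security-server
    spec:
      containers:
        - name: security-server-sidecar
          image: niis/xroad-security-server-sidecar:6.26.0
          ports:
            - containerPort: 4000   # admin UI
            - containerPort: 8080   # consumer information system access
          volumeMounts:
            - name: ss-config
              mountPath: /etc/xroad # persisted configuration files
      volumes:
        - name: ss-config
          persistentVolumeClaim:
            claimName: ss-config
```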

Security Server as a sidecar

Image 3. The Security Server Sidecar as a real sidecar inside the same Pod with an information system.

The first alternative is to deploy the Sidecar as a real sidecar, which means deploying it in the same Pod with an information system. It is a feasible approach if there’s always only one Pod running, and other information systems do not need to access the Security Server. The information system can be a service consumer, service producer, or both.

In case the information system must be scaled horizontally, this approach does not work very well. The reason is that adding a new Pod means that the new Security Server running in the Pod must always be configured and registered before it can be used. Since the onboarding process of the Security Server may take from days to weeks, the approach is not feasible. Also, deploying a Security Server for each Pod would generate considerable overhead from the resource consumption perspective.

Single Security Server

Image 4. The Security Server Sidecar in its own Pod and shared by multiple information systems.

When multiple information systems or several instances of the same information system need to access the Security Server, it’s better to deploy the Security Server in a separate deployment using a single Pod. In this way, the information systems can be scaled independently from the Security Server, and some of them might even be running outside of the AWS EKS cluster. However, the number of Security Server instances is limited to one. Since the external database and persistent volume are used, Kubernetes can automatically recover the Security Server Pod in case of failures. Also, in this case, the information systems can be service consumers, service producers, or both.

In case multiple Security Servers are required for high availability and/or scalability, there are two alternatives: multiple independent Security Servers or a Security Server cluster. Multiple independent Security Servers means deploying several Security Servers, each with a unique identity. This approach provides high availability. In contrast, a Security Server cluster means deploying a group of Security Servers that share the same identity and that are accessed through an external load balancer. The cluster provides both high availability and scalability. More information about X-Road’s load balancing alternatives can be found here.

Multiple Security Servers

Image 5. Multiple instances of the Security Server Sidecar shared by several information systems.

Deploying multiple independent Security Servers provides high availability but not scalability from a performance point of view. In this setup, multiple Security Servers with unique identities are deployed as separate, independent applications. In practice, the Security Servers are deployed using separate deployments, which means that they have their own run specifications. Also, the number of Security Server Pods within a deployment is limited to one for each Security Server. Adding a new Security Server to the setup means creating a new deployment plus configuring and registering the newly created Security Server.

The information systems can be service consumers, service producers, or both. Service consumers may connect to the Security Servers directly, or there may be another load balancer between the consumer information systems and the Security Servers (omitted in the diagram). In the case of service producers, Security Server’s internal load balancing enables publishing services on multiple Security Servers and routing service requests to all of them. However, the configuration (e.g., available services, access rights) must be manually synchronized between Security Servers providing the service. Only the Security Server cluster provides automatic synchronization between Security Servers.

Security Server cluster

Image 6. Security Server Sidecar cluster with an external load balancer shared by multiple information systems.

A Security Server cluster provides both high availability and scalability. It consists of a primary node and one or more secondary nodes that all share the same configuration and identity. In this setup, the primary node is used to manage the cluster, and it does not process messages. In practice, configuration changes are done on the primary node, and they’re automatically replicated to the secondary nodes. Replication covers the configuration database and configuration files. Changing the configuration on the secondary nodes is blocked. The secondary nodes are connected to a load balancer that distributes incoming traffic between them. Further implementation details of the Security Server cluster on Kubernetes are studied in more detail in the Sidecar project.

The information systems can be service consumers, service producers, or both. Service consumers may connect to the secondary nodes directly, or there may be another load balancer between the consumer information systems and the secondary nodes (omitted in the diagram).

Multiple Security Servers or a Security Server cluster?

The key difference between multiple independent Security Servers and a Security Server cluster is that the cluster provides both high availability and scalability, whereas independent Security Servers provide only high availability. In the cluster, secondary nodes can be scaled with little effort, while setting up a new independent Security Server is a manual operation. Also, in the cluster setup, all Security Servers share the same identity and configuration that is synchronized automatically. In contrast, multiple independent Security Servers each have their own unique identity and configuration, and there’s no synchronization between them. Which alternative is the best depends on the use case and its requirements.

Containerized future?

The four deployment models described before give an overview of what kind of models can be considered for the Sidecar. The same models can be applied regardless of the underlying platform or environment where the Security Server is deployed. Of course, the implementation details vary between different platforms and environments, but the high-level architecture patterns remain the same. However, the models do not provide an exhaustive list of available alternatives since different models can be combined and new elements, such as load balancers, can be added to the described ones.

Adding support for containers does not mean dropping support for Linux – running the Security Server on Ubuntu and Red Hat will remain supported in the future too. Containers are an alternative way to run the Security Server, and X-Road members are free to choose between the available alternatives. Containers are a convenient way to run the Security Server when an organization already has the required capabilities to operate and manage containers in production environments. However, in case an organization is not quite there yet, using virtual machines might be a better alternative since mastering Security Server containers on a production level requires time, effort, and experience.

It must also be noted that the Security Server configuration process – registering and onboarding a fresh Security Server to an X-Road ecosystem – is always the same regardless of the Security Server packaging. From a process perspective, the containerized version of the Security Server is not different from the Linux packaged version. In this way, X-Road members can be sure that the same level of trust is always guaranteed in data exchange between X-Road members.

X-Road and Containers (part 2)

This is a series of blog posts about X-Road® and containers. The first part provides an introduction to containers and container technologies in general. The second part concentrates on the challenges in containerizing the Security Server. The Security Server Sidecar – a containerized version of the Security Server – is discussed in the third part.

Container support for X-Road – and for the Security Server especially – has been requested for some years already, but at the moment, production-level support is not available yet. However, both Central Server (xroad-central-server) and Security Server (xroad-security-server, xroad-security-server-standalone) Docker images are already available for testing purposes on NIIS’s Docker Hub account. This means that different X-Road components can be run inside containers, so why is production use not supported yet? Let’s consider the question from the Security Server’s point of view. What needs to be taken into account when running the Security Server in a container?

One process per container

According to best practices, each container should have only one concern and run only a single process. The Security Server consists of multiple processes, including a PostgreSQL database, and the currently available Docker image runs them all in a single container. Decoupling all the Security Server processes into multiple containers would require significant effort while providing minimal benefit in exchange, since the current architecture has not been designed to run and scale different application processes separately. Supporting that kind of approach would require significant changes to the Security Server architecture.

However, rules and best practices are made to be broken, and in practice, it is quite common to run multiple processes inside a container. A good approach for the Security Server is to deploy the Security Server application and the PostgreSQL database separately. In that way, the Security Server is split into two parts, while the Security Server application processes remain in the same container. In this case, no software-level changes are required since the Security Server already supports using a remote database, which can be a separate container, a managed database service in the cloud, etc.

Running multiple processes in a container requires that process management is appropriately implemented. When the Security Server runs on a Linux platform, its processes are managed using systemd, the service and system manager used by the Linux distributions supported by the Security Server, so its use is built into the Security Server packaging. However, it is not recommended to run systemd inside a container since systemd does things that are typically controlled by the container runtime. Moreover, some things systemd does, such as changing host-level parameters, are prevented inside containers by default. Therefore, the Security Server processes need to be managed using another, more lightweight process manager, such as supervisord.
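As an illustration of the idea, a supervisord configuration could look roughly like the sketch below; the program names and commands are simplified examples, not the actual Sidecar configuration:

```ini
; supervisord.conf (illustrative sketch only)
[supervisord]
nodaemon=true   ; keep supervisord in the foreground as the container's main process

[program:xroad-signer]
command=/usr/share/xroad/bin/xroad-signer
autorestart=true

[program:xroad-confclient]
command=/usr/share/xroad/bin/xroad-confclient
autorestart=true

[program:xroad-proxy]
command=/usr/share/xroad/bin/xroad-proxy
autorestart=true
```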

Persistent storage

The Security Server is a stateful application. Therefore, the configuration in the database and on the filesystem must be persisted over the lifecycle of a single container. The data includes local overrides to the default configuration, keys and certificates, registered clients and their configuration, logs, backups, etc. Without persisting the configuration, the Security Server would have to be initialized, configured, registered, etc., whenever an existing container is recreated.

When an external database is used, the data in the database is already stored outside the container. However, the configuration data, backups, and message log archives stored on the filesystem must be persisted too. This can be done using persistent storage that is mounted to the Security Server container; persistent storage keeps the data on the host system and not in the container. In addition, the X-Road application logs must be persisted as well, either using the persistent storage or by redirecting logging to the console so that the container management system can collect and store the logs.
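With plain Docker, for example, the filesystem state can be kept in named volumes. A hypothetical Compose-style sketch; the mount paths follow the standard X-Road layout, and the image tag is an assumption:

```yaml
# docker-compose.yml (illustrative sketch only)
services:
  security-server:
    image: niis/xroad-security-server-sidecar:6.26.0
    volumes:
      - ss-config:/etc/xroad     # configuration, keys, and certificates
      - ss-data:/var/lib/xroad   # backups and message log archives
volumes:
  ss-config:
  ss-data:
```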

Version upgrades

Security Server version upgrades sometimes require running database migrations and updating the contents of the configuration files. Since version upgrades are handled differently with containers than with traditional Linux package management systems, special attention must be paid to Security Server version upgrades. In practice, the upgrade mechanism has to be built into the container image. The mechanism must detect that the application version used by the container differs from the version of the persistent configuration and perform the steps required by the upgrade. In this way, it is possible to change from an older image to a newer one and keep the existing configuration and data.
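One way to implement such a mechanism is to record the software version on the persistent volume and compare it with the version baked into the image on every start. The sketch below is a hypothetical entrypoint fragment; the paths, file names, and the run_migrations helper are assumptions, not the actual Security Server implementation:

```shell
#!/bin/sh
# Hypothetical entrypoint fragment: detect a version mismatch between the
# image and the persisted configuration, and run the required upgrade steps.
# Paths, file names, and run_migrations are illustrative assumptions.

IMAGE_VERSION="7.0.0"                               # baked into the image at build time
XROAD_DATA_DIR="${XROAD_DATA_DIR:-/tmp/xroad-data}" # persistent volume mount point
VERSION_FILE="$XROAD_DATA_DIR/VERSION"
mkdir -p "$XROAD_DATA_DIR"

run_migrations() {
    # Placeholder for database migrations and configuration file updates.
    echo "upgrading persisted configuration to $IMAGE_VERSION"
}

if [ ! -f "$VERSION_FILE" ]; then
    # No recorded version: first run against this volume.
    echo "$IMAGE_VERSION" > "$VERSION_FILE"
elif [ "$(cat "$VERSION_FILE")" != "$IMAGE_VERSION" ]; then
    # The persisted configuration was written by an older image.
    run_migrations
    echo "$IMAGE_VERSION" > "$VERSION_FILE"
fi
```

Because the check runs against the persistent volume rather than the container, swapping an old container for one created from a newer image triggers the upgrade automatically.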

First run

Similarly to version upgrades, there must be a mechanism that detects when a container is started for the first time and no existing, persisted configuration is available yet. For security reasons, each container must have unique internal and admin UI TLS keys and certificates and a unique database password. The secrets are typically generated during the installation process, which in the container context means when the image is created. In practice, this means that all the containers created from the same source image share the same secrets. In the case of a public Security Server container image, anyone could access the secrets, which would expose all containers created from the image to different kinds of attacks. Therefore, the secrets must be recreated on the first run so that each container has its own unique set of secrets that are not shared with any other container.
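A first-run check can follow the same pattern: if no secrets exist on the persistent volume, the container is starting for the first time and generates its own. The sketch below is illustrative only; the file locations and the use of openssl-generated placeholders are assumptions, not the actual Security Server logic:

```shell
#!/bin/sh
# Hypothetical first-run check: generate container-specific secrets only
# when no persisted secrets exist yet.

SECRETS_DIR="${SECRETS_DIR:-/tmp/xroad-secrets}"   # assumed location
mkdir -p "$SECRETS_DIR"

if [ ! -f "$SECRETS_DIR/db_password" ]; then
    # First run: no secrets on the persistent volume yet.
    openssl rand -hex 16 > "$SECRETS_DIR/db_password"
    # Placeholder admin UI TLS key and self-signed certificate.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=security-server" \
        -keyout "$SECRETS_DIR/ui.key" -out "$SECRETS_DIR/ui.crt" 2>/dev/null
fi
```

On subsequent starts, the check finds the existing secrets on the volume and leaves them untouched, so the container keeps its identity across recreations.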

Hardware security modules (HSMs)

One additional challenge that has not been discussed yet is related to hardware security modules (HSMs). For extra security, the sign keys and certificates of the Security Server clients may be stored on an HSM instead of the software token that’s used by default. Different cloud platforms provide cloud HSM services that can be accessed over a network, but if using a physical HSM device is required, how can it be connected to containers? Answering that question is out of the scope of this blog post.

Towards containerization

X-Road version 6 was initially designed to be deployed on Linux hosts (physical or virtual), and therefore, some additional effort is required to enable its production use in containers. However, the challenges related to containerizing the Security Server can be overcome without changing the application itself.

In the long run, the Security Server architecture should be refactored to fully utilize the benefits that containers can offer. At the same time, it’s important to remember that the currently supported Linux platforms must be supported in the future too. Fortunately, the two alternatives are not mutually exclusive. Containers are not going to replace virtual machines, but they will provide an alternative way to run the Security Server.

From Virtual Machines to Containers (part 1)

This is a series of blog posts about X-Road® and containers. The first part provides an introduction to containers and container technologies in general. The second part concentrates on the challenges in containerizing the Security Server. The Security Server Sidecar – a containerized version of the Security Server – is discussed in the third part.

Nowadays, it’s hard to avoid hearing about Docker and containers if you work in the field of IT. This applies to X-Road, too, since questions regarding X-Road and support for containers have come up regularly during recent years. But what are containers, and how do they differ from virtual machines?

What are containers?

Containers package an application and all its dependencies, libraries, configuration files, etc., into a single package that contains the entire runtime environment needed to run the application. The package can then be deployed to different computing environments without having to worry about the differences between operating system distributions, versions of available libraries, etc. The differences are abstracted away by the containerization.

The difference between virtual machines and containers is that a virtual machine includes an entire operating system and the application. In contrast, a container only contains the application and its runtime environment. Therefore, containers are more lightweight and use fewer resources than virtual machines. The size of a container may be only tens of megabytes, and it can be started in seconds. Instead, a virtual machine with an entire operating system may be several gigabytes in size, and booting up may take several minutes.

Image 1. A physical server that runs multiple containers compared to a physical server that runs multiple virtual machines.

A physical server that runs multiple virtual machines has a separate guest operating system running on top of it for each virtual machine. In contrast, a server running multiple containers runs only a single operating system whose resources are shared among the containers. However, each container runs as a separate, isolated process that has its own namespace and filesystem. The number of containers that a single server can host is far higher than the number of virtual machines it can host.

Container technologies

Docker is commonly considered a synonym for containers, even if it’s not the only container technology out there. Nor is Docker the first container technology, since several other technologies already existed before its launch in 2013. However, Docker was the first container technology to become hugely popular among the masses, which is why the name Docker is often mistakenly used when referring to container technologies in general.

Nowadays, there are multiple container technologies available, and the fundamental building blocks of the technology have been standardized. The Open Container Initiative (OCI) is a project facilitated by the Linux Foundation that creates open industry standards around container formats and runtimes for all platforms. The standardization enables portability between infrastructures, cloud providers, etc., and prevents locking into a specific technology vendor. All the leading players in the container industry follow the specifications.

Images and containers

Images and containers are the two main concepts of container technologies. Therefore, understanding their difference, at least on a high level, is essential.

A container image can be compared to a virtual machine image – except that it’s smaller and does not contain a whole operating system. A container image is an immutable, read-only file that contains the executable code, libraries, dependencies, tools, etc., that are needed for an application to run. An image represents an application and its virtual environment at a specific point in time, and it can be considered a template of an application. An image is composed of layers built on top of a parent or base image, which enables image reuse.

Containers are running images. When a new container is started, it is created from a source image. In other words, the container is an instance of the source image, just like a process is an instance of an executable. Unlike images, containers are not immutable, and therefore, they can be modified. However, the image from which the container was created remains unchanged. Consequently, it’s possible to create multiple containers from the same source image, and all the created containers have the same initial setup that can be altered during their lifecycle.

Images can exist independently without containers, but a container always requires an image to exist. Images are published and shared in image registries that may be public or private. The best-known image registry is probably Docker Hub. Images are published and maintained by software vendors as well as individual developers.

Stateful and stateless containers

Containers can be stateful or stateless. The main difference is that stateless containers don’t store data across operations, while stateful containers store data from one time they’re run to the next. In general, a new container always starts from the state defined by the source image. It means that the data generated by one container is not available to other containers by default. If the data processed by a container must be persisted beyond the lifecycle of the container, it needs to be stored on persistent storage, e.g., an external volume stored on the host where the container is running. The persistent storage can then be attached to another container regardless of that container’s source image. In other words, persistent storage can be used to share data between containers.

Handling upgrades

Upgrading an application running in a container also differs from how applications running on a virtual machine are traditionally upgraded. Applications running on a virtual machine are usually upgraded by installing a new version of the application on the existing virtual machine. In contrast, applications running in a container are upgraded by creating a new image containing the latest version of the application and then recreating all the containers using the new image. In other words, instead of upgrading the application running in the existing containers, the existing containers are replaced with new containers that run the latest version of the application. However, the approach is not container-specific, since handling upgrades on virtual machines in cloud environments often follows the same process nowadays.

Container management systems

Running a single container or an application consisting of a couple of containers on a local machine for testing or development purposes is a simple task. Instead, running a complex application consisting of tens of containers in a production environment is far from simple. Container management systems are tools that provide capabilities to manage complex setups composed of multiple containers across many servers. In general, container management systems automate the creation, deployment, destruction, and scaling of containers. Available features vary between different solutions and may include, for example, monitoring, orchestration, load balancing, security, and storage. However, running a container management system is not a simple task either, and it brings additional complexity to management and operations.

Kubernetes is the best-known open-source container management system. It originated at Google, but nowadays, it is widely used in the industry and by different service providers. For example, all the major cloud service providers offer Kubernetes services. When it comes to commercial alternatives, Docker Enterprise Edition is probably the best-known commercial solution, but there are many other solutions available too.

Pros and cons

The benefits of containerization vary between different applications. And sometimes containerization may not provide any benefits. Therefore, instead of containerizing everything by default, only applications that benefit from containers should be containerized.

Containers provide a streamlined way to distribute and deploy applications. Containers are highly portable, and they can be easily deployed to different operating systems and platforms. They also have less overhead compared to virtual machines, which enables more efficient utilization of computing resources. Besides, containers support agile development and DevOps, enabling faster application development cycles and more consistent operations. All in all, containers provide many benefits, but they’re not perfect; they have disadvantages too.

In general, managing containers in a production setup requires a container management system. The system automates many aspects of container management, but implementing and managing the system itself is often complicated and requires special skills. Managing persistent data storage brings additional complexity as well, and incorrect configuration may lead to data loss. Besides, persistent storage configurations may not be fully compatible between different environments and platforms, which means that they may need to be changed when containers are moved between environments. For example, both Docker and Kubernetes have the concept of volume, but they’re not identical and, therefore, behave differently.

All in all, containers offer many benefits, and they provide an excellent alternative to other virtualisation options. However, containers cannot fully replace the other options, and therefore, different solutions will be used side-by-side in the future too.

New Security Server UI and management REST API are here

X-Road version 6 was released in 2015, and it has been continuously developed further throughout the years. So far, the most significant change has been adding support for REST services in 2019. However, the system hasn’t changed much visually since its release in 2015. That’s about to change soon, since X-Road version 6.24.0 will introduce the biggest changes X-Road 6 has experienced yet.

The beta version of X-Road 6.24.0 is already out, and the official release version will be published on the 31st of August 2020.

It’s got the look

The most significant change in X-Road version 6.24.0 is the fully renewed Security Server user interface (UI). The new UI aims to improve the usability and user experience of the Security Server. The new intuitive UI makes regular administrative tasks easier and supports streamlining the on-boarding process of new X-Road members.

Image 1. Add client wizard.

For example, the new UI uses wizards to implement tasks that require completing multiple steps in a specific order, such as adding a new client with a new signature key and certificate. Previously, the user needed to know which steps were required and their correct order, but from now on, the UI provides this information and guides the user through the process.

Image 2. The new UI provides additional information on different configuration options.

Another essential improvement is that the UI provides additional information regarding different Security Server features. For example, the Security Server has multiple keys and certificates, and it may not always be clear what the different keys and certificates are used for. Therefore, the new UI provides information about the different keys, such as authentication and signature keys.

Management REST API

Another significant change in X-Road version 6.24.0 is the brand-new management REST API. The API provides all the same functionalities as the UI, and it can be used to automate common maintenance and management tasks. It means that maintaining and operating multiple Security Servers can be done more efficiently as configuration and maintenance tasks require less manual work. By the way, the new UI uses the same API under the hood too.

The Security Server User Guide provides more information about the API, and there’s also the API’s OpenAPI 3 description available on GitHub. Access to the API is controlled using API keys that can be managed through the Security Server UI or through the API itself. In addition, access to the API can be restricted using IP filtering.
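For example, listing the clients configured on a Security Server could look roughly like this with curl; the hostname and API key are placeholders, and the exact paths should be checked against the OpenAPI 3 description:

```shell
# List the clients of a Security Server through the management REST API.
# <api-key> is a placeholder; -k skips TLS verification (testing only).
curl -k -H "Authorization: X-Road-ApiKey token=<api-key>" \
  "https://my-security-server:4000/api/v1/clients"
```

The same request shape applies to the other endpoints, so scripted maintenance tasks only differ in the path and HTTP method.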

Changes in the architecture

The new UI and management REST API have also caused changes in the Security Server architecture and packaging. The previous Nginx (xroad-nginx) and Jetty (xroad-jetty) components have been replaced with the new UI and API component (xroad-proxy-ui-api). These changes affect the Security Server’s log files, directories, software packages, and services. It’s strongly recommended that Security Server administrators study the details of these changes in the release notes before upgrading to version 6.24.0.

Image 3. Changes in the Security Server architecture - before version 6.24.0 (left) and starting from version 6.24.0 (right).

Wait, there’s more!

Even though the new UI and management REST API are the most significant and most visible changes in version 6.24.0, the new version contains many other new features, improvements, and fixes. Here’s a short overview of other changes included in the latest version.

  • Support for running Security Server on Red Hat Enterprise Linux 8 (RHEL8).

  • Updates to the operational monitoring protocols that enable monitoring SOAP and REST services in a more consistent manner. N.B.! The updates cause breaking changes in the Operational Monitoring protocols.

  • Better support for using external database services on different platforms (e.g. Amazon Web Services, Microsoft Azure, Google Cloud Platform) for both Central Server and Security Server.

  • Changes in allowed characters in X-Road system identifiers and improved validation of the identifiers.

  • Technology updates and decreased technical debt. 

The full list of changes with more detailed descriptions is available in the release notes.

It’s all about users

Another significant change in X-Road over the years is how X-Road is being developed. Nowadays, X-Road users play an essential role in the design and development as a source of input and as validators of the development results. It applies to the new UI, too, since X-Road users have participated in its design and development by providing input, feedback, and comments in different phases of the process. The involvement of the users in the design and development is here to stay, and also the new UI will be further developed and improved based on the feedback received from the field.

Towards the Unicorn

One major change has just been completed, but the next ones are already waiting around the corner. The very first flight of the Unicorn – the release of the beta version of X-Road 7 – is expected to happen by the end of this year, and the first release version should see the light of day in 2021. More information about X-Road 7 and the changes it will introduce will be provided at a later date. Meanwhile, please try out the new X-Road 6.24.0 and tell us your opinion about it!

X-Road Implementation Models

X-Road® has become known as the open-source data exchange layer that is the backbone of the Estonian X-tee and the Finnish Suomi.fi Data Exchange Layer ecosystems. Both ecosystems are nationwide, and they’re open to all kinds of organizations – from both the public and private sectors. Also, Iceland is currently setting up its national X-Road ecosystem called Straumurinn. Besides, X-Road has been implemented all around the world in many different shapes and sizes.

In general, an X-Road ecosystem is a community of organizations using the same instance of the X-Road software for producing and consuming services. The owner of the ecosystem, the X-Road operator, controls who is allowed to join the community, and the owner defines the regulations and practices that the ecosystem must follow.

Image 1. Roles and responsibilities of an X-Road ecosystem.

Technically, the X-Road software does not set any limitations on the size of the ecosystem or the member organizations. The ecosystem may be nationwide, or it may be limited to organizations meeting specific criteria, e.g., clients of a commercial service provider. Thanks to its scalable architecture and organizational model, X-Road is exceptionally flexible, and it supports various kinds of setups. Even if a nationwide implementation of X-Road is probably the best-known implementation model, X-Road can be used in many other ways too. Let’s find out more about the different alternatives.

National data exchange layer

National implementation is probably the most typical way to implement X-Road. In a national implementation, X-Road is implemented nationwide within a country, and the aim is to use it in data exchange between organizations across administration sectors and business domains. Typically, the ecosystem is open for all kinds of organizations – both public and private sector organizations. However, it is also possible to restrict the implementation to cover only the public sector, specific administration sector, business domain, or a combination of these.

Besides, X-Road can be used to implement cross-border data exchange with other countries that have a national X-Road implementation. In practice, the ecosystems of different countries are connected using federation – an X-Road feature that enables connecting two X-Road environments. Federation enables member organizations of different ecosystems to exchange data as if they were members of the same ecosystem.

Image 2. X-Road federation - connecting two X-Road ecosystems.

In a national implementation, a government agency is usually the owner of the ecosystem. The owner takes the role of the X-Road operator, who is responsible for all the aspects of the operations. The responsibilities include defining regulations and practices, accepting new members, providing support for members, and operating the central components of the X-Road software. Technical activities can be outsourced to a third party, but administrative and supervising responsibilities are carried out by the operator.

There are multiple implementations around the world where X-Road is used as a national data exchange layer. The best-known national X-Road ecosystems are in Iceland, Finland, and Estonia.

Data exchange solution for regions

Regional implementation means implementing X-Road within a region or an autonomous community, such as a province or a state. In a regional implementation, X-Road is used within the region, and the scope is usually very similar to a national implementation – data exchange between organisations across administration sectors and business domains. However, the scope may be more restricted as well. Besides, X-Road may be used to exchange data with the central government and/or other regions.

In a regional implementation, a regional agency or authority is usually the owner of the ecosystem. The owner takes the role of the X-Road operator, who is responsible for all the aspects of the operations. Some of the technical activities may be outsourced, just like in the national implementation.

As an alternative approach, the national implementation described earlier may consist of multiple regional implementations too. Every region, or some of the regions within a country, can have their own X-Road ecosystems that are connected using federation. However, compared to a single national implementation, this approach generates more overhead since every region must manage and operate its own X-Road ecosystem. Therefore, when targeting a national implementation, a single national ecosystem is recommended over multiple regional ecosystems connected using federation.

One example of a regional implementation can be found in Argentina. The province of Neuquén in Argentina is using X-Road as a regional data exchange platform. Also, some regions in other countries are currently considering the use of X-Road on a local level.

Data exchange within a business domain or sector

In national and regional implementations, X-Road is implemented within a geographic area, such as a country or a region. However, there’s no reason why an X-Road ecosystem could not span multiple states and/or regions as long as there’s an organisation that takes on the role and responsibilities of the X-Road operator. A practical example of this kind of approach is implementing X-Road within a business domain or sector whose members are located in different countries around the world. However, X-Road could be implemented within a business domain or sector on the national level too.

The critical factor is that all members commit to following the rules and policies of the ecosystem set by the X-Road operator. In this case, the use of X-Road is based on a mutual agreement between the members of the ecosystem. In national and regional implementations, the use of X-Road is often based on a law or a regulation issued by a governmental or regional authority.

In case different business domains have their X-Road ecosystems, they can be connected using federation, which enables data exchange between member organisations of different business domains. Technically, a business domain-specific implementation can be connected to a national or regional X-Road ecosystem too.

X-Road based business domain-specific solutions have been implemented in several countries. For example, in Germany, X-Road is being used to exchange healthcare data, and in Estonia, the X-Road based Estfeed platform is utilised in energy sector data exchange. Besides, Estfeed is also applied by the Data Bridge Alliance to exchange energy data on a cross-border level.

A platform for data exchange within an organisation

The primary use case for X-Road is data exchange between organisations, but there’s no reason why X-Road could not be used to exchange data within an organisation too. For example, a large international organisation that has branches and departments in different countries and continents may have information systems that communicate over the public Internet. X-Road provides a solution to connect those systems in a standardised and secure manner, guaranteeing the confidentiality, integrity, and interoperability of the data exchange.

When it comes to the organisational model of X-Road, one of the departments takes the role of the X-Road operator, and other branches and departments are members of the ecosystem. In addition to connecting information systems communicating over the Internet, X-Road could be used inside a private network of an organisation too.

One example of corporate use of X-Road can be found in Japan. A major Japanese gas company uses an X-Road based solution to exchange data between its different organisation units. Another interesting approach to corporate use is building a commercial product on top of X-Road. Since X-Road is open source and licensed under the permissive MIT license, it can be utilised in commercial closed source products too. For example, Planetway, a Japanese-Estonian company, has built its PlanetCross platform using X-Road.

For clarity, X-Road is not a service mesh platform for microservices, such as Istio. X-Road is meant for data exchange between information systems over the public Internet, whereas service mesh platforms are used as a communication layer between different microservices in a microservices architecture. The high-level capabilities that X-Road and many service mesh solutions provide may seem very similar. Still, the way they have been implemented is optimised for very different use cases. Therefore, X-Road should not be confused with service mesh solutions.

How would you use X-Road?

As we have learned, X-Road can be implemented in many different ways. The right way always depends on the use case, requirements, and operating environment. Thanks to its distributed architecture, X-Road is highly scalable and is, therefore, a good fit for implementations of all sizes. It also enables different approaches when it comes to the speed and scale of the implementation – starting small with a few member organisations and services, or going live with a big bang with a bunch of members and connected systems.

If you’re interested in the upcoming changes in the X-Road core, please visit the X-Road backlog. Anyone can access the backlog, and leave comments and submit enhancement requests through the X-Road Service Desk portal. Accessing the backlog and service desk requires creating an account that can be done in a few seconds using the signup form.