
Building a Highly Replicated Directory: The case for X.500 DISP

This whitepaper has been cross-posted from the Isode website.


This white paper examines replication issues that arise when building a highly distributed and replicated directory, and argues that X.500 DISP (Directory Information Shadowing Protocol) is the best solution to this problem. It looks particularly at military directory, which has strong requirements for a highly replicated directory, but is also applicable to other environments.

Military Requirements

ACP 133 Directory is a key component of most modern military communication and messaging systems. ACP 133 defines a complex military schema, based on X.500, which supports both applications and information services. This international core specification is used as the basis for national and NATO directories. Military directories have two characteristics, applying to both tactical and strategic directories, that are key to this paper:

  • Highly Distributed. Servers are deployed in many locations. This is important because it enables users and services that rely on directories to use local servers, rather than depending on the availability of remote servers and network connectivity.
  • Highly Replicated. It is critical that necessary data is available on the local server. For this reason data is highly replicated.

These fundamental requirements drive procurement and solutions. There are two further considerations that are important:

  • International Collaboration. National military organizations, and international organizations (in particular NATO) have a high degree of collaboration. It is important to replicate directory data between nations to support this collaboration.
  • Filtering. When data is replicated, there is a need to filter data. This is important to optimize performance over slow links, and to distribute data on a need to know basis and in line with policy.

What is X.500 DISP?

X.500 DISP (Directory Information Shadowing Protocol) is the standard replication protocol that is a part of X.500 and of ACP 133. It is the only open standard for directory replication. DISP has a number of important features:

  • Incremental replication, to optimize network and resource usage.
  • Scheduled and on demand updates.
  • Manager controlled updates.
  • Supplier and Consumer initiated updates.
  • Hierarchical shadowing.
  • Flexible definition of data to be shadowed.
  • Total updates, which allow for setup and reset.
  • Flexible data filtering.
  • Security features, including digital signature of associations and replicated data.
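The incremental replication feature listed above can be illustrated with a small sketch. This is not the DISP wire protocol (which is ASN.1 based); the dictionary structures and the `modifyTimestamp` comparison here are illustrative assumptions, showing only the core idea of shipping just the entries changed since the last shadow update:

```python
# Illustrative sketch of incremental shadowing: the supplier sends only
# entries modified since the previous update, not the whole replicated area.
# Data structures are hypothetical, not DISP protocol elements.

def incremental_update(supplier_entries, last_update_time):
    """Return only the entries modified since the previous shadow update."""
    return {dn: entry
            for dn, entry in supplier_entries.items()
            if entry["modifyTimestamp"] > last_update_time}

supplier = {
    "cn=alice,o=mod": {"modifyTimestamp": 100, "cn": "alice"},
    "cn=bob,o=mod":   {"modifyTimestamp": 250, "cn": "bob"},
}

# Only Bob's entry changed after timestamp 200, so only it is shipped.
delta = incremental_update(supplier, last_update_time=200)
print(sorted(delta))
```

On a bandwidth-constrained link, the saving from shipping only the delta rather than a multi-million entry area is what makes frequent updates practical.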

Single Vendor Solutions

Most directory deployments use directories from a single vendor, and use the replication supported by that vendor. There are many complex and highly replicated single-vendor directory solutions using DISP. Most organizations procuring a directory choose a single vendor.

Schema: Why Multi-Vendor is Hard

Dealing with multi-vendor solutions always adds complexity. Experience has shown that there is one fundamental reason why multi-vendor directory replication is hard, and that is schema consistency. In a single vendor deployment, schema will generally be configured in the same way for each server. In a multi-vendor deployment, there will tend to be differences including:

  • Addition of vendor specific schema.
  • Addition of customer specific schema.
  • Inconsistencies or minor errors in configuration of standard schema.

When data is being read by a directory client, such inconsistencies do not usually cause significant problems. When data is being replicated, however, minor schema inconsistencies between the replicated data and the schema definitions of the consumer directory server can lead to problems. A directory, like any other database, needs the data it stores to be in line with its schema. To make things work, schema needs to be aligned, and information not needed by the consumer filtered out.
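As a rough illustration of the kind of check involved, assuming a simplified dictionary representation of attribute-type definitions (not any real DSA's schema API, and with illustrative attribute names), a consistency check between a supplier's and a consumer's schema might look like:

```python
# Hypothetical sketch: surface the minor schema mismatches between two
# servers that would break replication. Attribute names and the dictionary
# representation are illustrative, not a real directory server interface.

supplier_schema = {
    "rank":             {"syntax": "DirectoryString", "single_valued": True},
    "releaseAuthority": {"syntax": "DirectoryString", "single_valued": True},
}
consumer_schema = {
    # Differs: the consumer configured this attribute as multi-valued.
    "releaseAuthority": {"syntax": "DirectoryString", "single_valued": False},
    # "rank" was never loaded on the consumer.
}

def schema_mismatches(supplier, consumer):
    problems = []
    for name, definition in supplier.items():
        if name not in consumer:
            problems.append(f"{name}: missing on consumer")
        elif consumer[name] != definition:
            problems.append(f"{name}: definition differs")
    return problems

for problem in schema_mismatches(supplier_schema, consumer_schema):
    print(problem)
```

Either mismatch would be invisible to a client reading entries, but would surface as a rejected update when data carrying these attributes is shadowed to the consumer.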

While this is the hardest problem for multi-vendor deployments, Isode believes that it is quite practical to address it in an operational environment.

JWID: Lessons for Vendors

At Joint Warrior Interoperability Demonstration (JWID) 2004, there was a multi-vendor demonstration of directory replication using X.500 DISP. While this was a significant success in demonstrating DISP interoperability, the effort to make it work was much higher than it should really have been. There are two key lessons that vendors supporting DISP should take away from JWID 2004.

Lesson 1: Report DISP Problems Clearly

Basic DISP protocol interoperability was very good. A typical problem scenario was that data would be sent correctly, but would not be accepted by the server. It would often take significant detective work to find out why the data was not accepted. Once the reason was determined, it was usually straightforward to correct the problem. Good error reporting on DISP errors is extremely important, and some products are lacking in this area.

Lesson 2: DISP Consumers should be tolerant

Jon Postel, who designed many of the core Internet protocols, had an implementer's dictum: "be conservative in what you send, be liberal in what you accept". This excellent guideline is particularly appropriate for DISP. While schema consistency is important, overly fussy DISP Consumers (products receiving a DISP update) make deployment harder.

JWID: Lessons for Customers

DISP advocates can argue that JWID 2004 showed that DISP is the right way forward. DISP detractors can argue that JWID 2004 showed that DISP is too complex to deploy. I think that there are two important lessons for customers:

Lesson 1: Multi-Vendor DISP is viable

Lesson 2: Multi-Vendor DISP is not currently in wide use

The difficulties showed clearly that this deployment is not something that is widely done in the field. Customers need to remember that most directory and DISP deployments are single vendor and that multi-vendor deployments have special requirements. In order to get multi-vendor systems to work, customers need to procure in a manner which will achieve this. It is naive for customers to procure independent (single vendor) implementations and then expect them to work together. Requirements for interworking with external directories using DISP need to be included in procurements.

Myths about DISP

There are many in the directory industry who are critical of DISP, and in particular some major directory vendors whose products do not support DISP. This section addresses some of the "myths" that have been propagated about DISP.

Myth 1: DISP is Too Slow and Does Not Scale

This is nonsense. DISP needs implementing with care, and some early implementations were very poor. DISP's incremental update makes very efficient use of bandwidth, and Isode customers use DISP every day for replicating multi-million entry directories.

Myth 2: DISP is Too Complex

DISP is quite complex, but most of its features are very useful for configuring and managing replication in non-trivial configurations. The IETF attempted to standardize replication in its LDUP initiative. They rejected DISP as too complex and proceeded to develop specifications of much greater complexity. The LDUP activity was eventually abandoned.

Myth 3: DISP is Not Secure

DISP contains useful security features. These are not widely implemented, primarily because they are less important for single vendor deployment.

Myth 4: DISP does not Interoperate between Vendors

JWID 2004 showed that this is not true.

Myth 5: Total Update is "Broken"

It has been argued that the Total Update feature of DISP is bad design. It is true that it has operational issues with very large replicated areas (millions of entries); Isode provides an out-of-band update mechanism for this type of deployment. For smaller replicated areas (hundreds of thousands of entries and fewer) it is a very useful feature, enabling automatic setup of replication, and automatic reset in the event of certain types of failure and corruption.

Approaches to solving the "Schema Problem"

Dealing with schema inconsistency in a multi-vendor replication configuration is a problem that cannot be avoided. There are two basic approaches, independent of the replication mechanism used.

Approach 1: Work Around the Inconsistency

This approach leaves the two directories as they are, and then uses an integration mechanism, typically directory synchronization (meta directory), which deals with the inconsistencies. An example of where this would be a good approach is a commercial organization where two business units have developed directories using different products. Reasons:

  • It prevents disruption of the operational directory.
  • There is no single corporate schema to work to.
  • There is only a need for lowest-common-denominator information services to operate across both directories.

Approach 2: Fix the Inconsistency

This approach appears best in the military environment. Reasons:

  • There is a standard schema to work to (ACP 133). This gives a framework for interoperability. It also means that schema deviations are likely to be minor.
  • The directory is used by applications as well as users. As applications are typically less tolerant of schema inconsistencies, it is beneficial to sort out schema inconsistencies to ensure that applications work smoothly with replicated data.
  • The directory is highly replicated, and working around the inconsistencies will lead to an "n squared" problem as this is dealt with between each pair of directories. Fixing the schema will lead to a "virtuous cycle" where replication gets easier as schema is better aligned across participating directory servers.
  • Getting schema consistent will minimize risk of client interoperability problems.

The strategy of fixing the inconsistency is independent of the replication mechanism used, and the work to make the fixes will be independent of the mechanism used.
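The "n squared" point made above can be made concrete with a quick count: working around inconsistencies pairwise requires a mapping per pair of replicating servers, while fixing each server's schema against the common ACP 133 baseline grows only linearly with the number of servers.

```python
# Pairwise workarounds vs. fixing schema once per server: a quick count
# of the "n squared" effect in a highly replicated directory.

def pairwise_mappings(n_servers):
    # One mapping maintained per pair of replicating directory servers.
    return n_servers * (n_servers - 1) // 2

def per_server_fixes(n_servers):
    # One alignment of each server against the common ACP 133 schema.
    return n_servers

for n in (5, 10, 20):
    print(f"{n} servers: {pairwise_mappings(n)} pairwise mappings "
          f"vs {per_server_fixes(n)} schema fixes")
```

At twenty servers the pairwise approach is already maintaining an order of magnitude more mappings than the number of servers, which is why fixing the schema produces the "virtuous cycle" described above.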

Why LDIF would Make things Harder

Most alternatives that have been proposed to DISP are based on LDIF (LDAP Data Interchange Format). They typically use an out-of-band data transfer mechanism, coupled with periodic, supplier-initiated incremental replication of a single subtree, without mechanisms for detecting update loss. The basic replication approach is much less flexible than that offered by DISP. This restriction may be acceptable, but much more serious problems are expected in relation to the underlying LDIF format:

  • LDIF, like a number of text formats, has many vendor variants. A typical directory synchronization product will have options to deal with LDIF variants from different vendors. LDIF is a useful format for loading data into a directory from an external source, but it is not designed for supporting multi-vendor replication.
  • String representations of attribute names, and attribute syntax representations are often vendor specific.
  • Binary and structured attributes are a big problem with LDIF. This will particularly affect some of the more complex ACP 133 attributes.
  • There is no mechanism to transfer X.500/ACP 133 access control information.

It is naive to believe that wide vendor support for LDIF makes it a good format for replication.
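The binary-attribute point above can be sketched briefly. RFC 2849 requires LDIF values that are not safe text to be base64-encoded (marked with `::`), and it is exactly here, along with line folding and vendor-specific string renderings of structured syntaxes, that LDIF variants diverge. This simplified renderer (no line folding) is illustrative only:

```python
# Sketch of why binary attributes are awkward in LDIF: non-text values
# must be base64-encoded (RFC 2849 marks these with "::"), and vendors
# differ in folding and in how structured syntaxes are stringified.

import base64

def ldif_attribute(name, value):
    """Render one attribute line in RFC 2849 style (simplified)."""
    if isinstance(value, bytes):
        return f"{name}:: {base64.b64encode(value).decode('ascii')}"
    return f"{name}: {value}"

print(ldif_attribute("cn", "alice"))
# A certificate is raw DER bytes, so it must take the base64 form.
print(ldif_attribute("userCertificate;binary", b"\x30\x82\x01\x0a"))
```

A consumer that parses one vendor's rendering of such values cannot assume another vendor's LDIF export will match, which is the core of the interoperability problem described above.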

Military Requirements for Replication

In looking at approaches to replication, it is important to consider the driving requirements for replication in a military directory:

  1. The directory is highly replicated, so there are going to be lots of directory agreements to put in place. This means that it needs to be easy to set up and manage replication agreements.
  2. The replication mechanism needs to be efficient, as networks can be bandwidth constrained.
  3. The replication mechanism needs to be highly robust, and to work with a minimum of operator intervention.

The Benefits of Direct Replication

DISP is a bandwidth-efficient protocol, with many configuration and management features. The key benefit of DISP for meeting military requirements is that it is part of an ACP 133 directory server product. Configuring replication is integrated with the directory server, and there are no third-party components sitting between a pair of replicating directory servers. This clean architecture is key to meeting requirements 1 and 3.

If directory replication uses third party products or custom integration scripting, it is going to be more complex to configure and will introduce a number of failure points and additional components to be managed. Any architecture along these lines is going to be grossly inferior to using integrated direct replication.

Data Filtering

Data filtering is important for military directory deployments. When data is replicated, it is important to optimize performance over slow links, and to distribute data on a need to know basis and in line with policy.

Data filtering is a key capability offered by DISP. While sophisticated data transformation is best done by a directory synchronization product (meta directory), standard directory replication is an ideal place to perform data filtering. Robustness is improved by controlling data filtering without the need to integrate a third-party product. As well as standard DISP capabilities, such as attribute and object class filtering, it also makes sense for vendors to extend directory filtering to support specific markets. For example, support for the UK Military "Publish To" attribute, which controls entry-level replication, fits well with DISP replication.
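Attribute-level filtering at the supplier can be sketched as follows. The attribute set and entry here are illustrative examples, not an ACP 133 profile; in DISP proper this selection is expressed in the shadowing agreement rather than in code:

```python
# Illustrative sketch of attribute filtering at replication time: the
# supplier strips attributes outside the agreed set before shipping an
# entry. Attribute names and policy are hypothetical examples.

ALLOWED_ATTRIBUTES = {"cn", "mail", "telephoneNumber"}

def filter_entry(entry, allowed=ALLOWED_ATTRIBUTES):
    """Keep only the attributes the shadowing agreement releases."""
    return {attr: values for attr, values in entry.items() if attr in allowed}

entry = {
    "cn": ["alice"],
    "mail": ["alice@example.mil"],
    "homePostalAddress": ["(not releasable)"],  # withheld under policy
}

print(filter_entry(entry))
```

Filtering at this point serves both goals stated above: less data crosses the slow link, and attributes outside the need-to-know set never leave the supplier.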


This paper has set out why DISP is the best approach for replicating data in a highly distributed and replicated directory. It also explains key requirements, so that those building replicated directories can procure necessary functionality from directory vendors.
