Creating unique constraints (indexes) for multidimensional datasets


there sometimes exist requirements to enforce unique constraints on datasets that do not have a simple layout of distinct values over one or, if logically concatenated, more columns. for example, imagine the real-world scenario of a 1:n parent-child table relationship, where a parent table represents a number of document records and the child table holds a multitude of typed document attachment records. imagine further that only one document attachment type, say 'TYPE_0', should serve as a master or most current attachment per document. other document attachment types, say 'TYPE_1' and 'TYPE_2', may recur per document without any limitation.

the classic approach of a two-column unique constraint, comprising some doc_id and some doc_type columns, will not be appropriate in that case. the constraint would be violated as soon as one of the freely recurring document types, 'TYPE_1' or 'TYPE_2', occurs more than once per document.

a really impressive solution

the following problem solution is not new, nor was it invented by me at all (thanks to m.s. for pointing it out to me), but i always go crazy about the way this very solution evolves from a merge of, at first glance, rather different database schema design techniques. that is:

  • the nature of unique constraints, compared to primary key constraints and
  • the world of index definitions, specifically unique and function-based indexes.

code discussion

there is actually not much code to write to implement the solution. in fact, one just creates a typical function-based index on a deterministic expression and positions the keyword UNIQUE appropriately:
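a minimal sketch of such an index; the table name doc_attachment and the columns doc_id and doc_type are my own placeholder names for the example scenario:

```sql
-- unique only for 'TYPE_0' rows: for any other doc_type the
-- expression evaluates to NULL, and an entirely NULL key is
-- not stored in the b-tree index at all
create unique index uq_doc_attachment_master
  on doc_attachment (
    case when doc_type = 'TYPE_0' then doc_id end
  );
```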


any attempt to insert/update another 'TYPE_0' attachment per document will raise the envisaged ORA-00001: unique constraint (string.string) violated exception, to be caught and processed as usual. however, does one really create a constraint doing so, or just an index? what about the constraint options of enable/disable, or deferrability, or even constraint exception tables?

without jumping too far ahead in the text, one has to ascertain that only an index, and not a constraint, will be created. a simple look into the all_constraints (where constraint_type = 'U') dictionary view may reveal this. that further implies that none of the constraint options mentioned above will be available. and, for the curious, the alter index ... enable/disable syntax, which is defined for function-based indexes, merely marks the index's deterministic expression, the body if you like, as being under code review.
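the dictionary check might look like this (doc_attachment is my own placeholder table name for the example scenario):

```sql
-- no unique constraint shows up for the table ...
select constraint_name
  from all_constraints
 where table_name = 'DOC_ATTACHMENT'
   and constraint_type = 'U';

-- ... but the unique function-based index does
select index_name, uniqueness, funcidx_status
  from all_indexes
 where table_name = 'DOC_ATTACHMENT';
```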

so, how does it all fit together? firstly, one has to recall from basic oracle schema design principles that primary key (pk) and unique key (uq) constraints require dedicated unique indexes for an efficient lookup of constraint violations. these indexes may be created automatically with the constraint, probably the most common case. they will also be dropped when the constraint is disabled and, in turn, recreated when the constraint is enabled again.
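a sketch of this default lifecycle, using a hypothetical doc table and constraint name of my own choosing:

```sql
create table doc (
  doc_id   number,
  doc_name varchar2(100),
  constraint uq_doc_name unique (doc_name)  -- implicitly creates a unique index
);

alter table doc disable constraint uq_doc_name;  -- drops the implicit index
alter table doc enable constraint uq_doc_name;   -- recreates it
```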

the relationship between pk-/uq-constraints and the underlying indexes is, though, much less tight. for example, a constraint definition may “recycle” an already existing unique index for proper support, if the column selection matches, instead of automatically creating another one. even a non-unique index may become “recycled” if the relevant column selection leads the index definition. by the way, this is another great trick, because one may integrate requirements of data integrity and data access within one database schema object using this technique.
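a sketch of such index recycling, again with placeholder names; the non-unique index, whose leading column matches the constraint column, may then also serve additional access paths:

```sql
create table doc (
  doc_id   number,
  doc_name varchar2(100)
);

-- a non-unique index led by the prospective constraint column
create index ix_doc_name on doc (doc_name, doc_id);

-- the constraint recycles ix_doc_name instead of creating a new index
alter table doc add constraint uq_doc_name unique (doc_name)
  using index ix_doc_name;
```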

in this way, one learns that uniqueness is actually enforced by the index and not by the constraint, be it a pk or a uq (remember the enable/disable discussion above). on the other hand, what about uniqueness and NULL values? again, from basic oracle schema design principles, one may recall that a pk-constraint does not allow NULL column values. NOT NULL constraints, another constraint class, may even become applied to a selection of columns if missing. a uq-constraint, in fact, does allow for NULL values as long as all considered columns sum up to a total value of NULL. this (NULL) sum is regarded as valid by default, any number of times over.
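a sketch of that NULL behaviour under a uq-constraint, with placeholder table and column names:

```sql
create table doc_ref (
  ref_a number,
  ref_b number,
  constraint uq_doc_ref unique (ref_a, ref_b)
);

insert into doc_ref values (null, null);  -- ok
insert into doc_ref values (null, null);  -- ok again: an all-NULL key may repeat freely
insert into doc_ref values (1, null);     -- ok: partially NULL keys are checked
insert into doc_ref values (1, null);     -- ORA-00001: (1, null) already exists
```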

but no one is talking about creating a uq-constraint anymore. a unique index stars in this lesson and reminds one of another piece of oracle schema design principles: (b-tree) indexes do not store entirely NULL key values. that is, given the code above, definitely only 'TYPE_0' attachment records are stored in the index, making up just this world of uniqueness for a dedicated multidimensional dataset. or, putting it another way, one preprocesses the underlying data to specify what should be tested for uniqueness, where the wealth of the used expression is only limited by the deterministic character of the employed functions.
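the resulting behaviour may be sketched as follows, assuming the unique function-based index described above on a doc_attachment(doc_id, doc_type) table (placeholder names):

```sql
insert into doc_attachment values (1, 'TYPE_1');  -- ok
insert into doc_attachment values (1, 'TYPE_1');  -- ok: expression is NULL, not indexed
insert into doc_attachment values (1, 'TYPE_0');  -- ok: first master for document 1
insert into doc_attachment values (1, 'TYPE_0');  -- ORA-00001: second master for document 1
```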


using compiled knowledge about:

  • pk-/uq-constraints, unique indexes and their relationships,
  • adopting NULL values responsibly,
  • integrating the data preprocessing power of function-based indexes

allows for the setup of uniqueness for multidimensional datasets, as described in the document attachment type example. the implemented and resulting database object is, however, not the popular uq-constraint but actually the underlying unique index. it delivers the envisaged ORA-00001: unique constraint (string.string) violated exception but misses the common constraint options of enable/disable and deferrability.

note: the characteristic of indexes to skip entirely NULL key values may also be adequate for guided data distributions in workflow tables and the like, where only a small subset of records is under work at any one point in time; the index then stays slim, covering only the records currently in flight.
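a sketch of such a slim workflow index, with placeholder names; rows whose in_work flag is cleared simply vanish from the index:

```sql
create table doc_queue (
  doc_id  number,
  in_work char(1)  -- 'Y' while being processed, NULL otherwise
);

-- indexes only the (few) records currently under work
create index ix_doc_queue_in_work
  on doc_queue (case when in_work = 'Y' then doc_id end);
```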

have fun!

