schema design

The look of: transaction deadlock due to missing foreign key index

This post is about transaction deadlocks resulting from a missing index on foreign key column(s). It will not dive into the odds of running into this failure but rather show how to detect this particular cause of an “ORA-00060: deadlock detected while waiting for resource” message, in particular what to look out for in the trace file and how to find the table objects (instead of rows) being deadlocked against each other. The best resource on the web tackling this subject is of course Reading deadlock trace files by Tom Kyte, which also guided me along the way of a “no row deadlock”, see below.
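
As a quick sanity check for this particular root cause, a dictionary query can list foreign key constraints whose leading column is not the leading column of any index on the child table. This is only a simplified sketch of the well-known check (the more elaborate scripts around the net also handle column order and multi-column keys), not something taken from the trace file discussed below:

-- simplified check: foreign keys without an index starting on their first column
select c.table_name,
       c.constraint_name,
       cc.column_name as first_fk_column
  from user_constraints  c
  join user_cons_columns cc
    on cc.constraint_name = c.constraint_name
   and cc.position = 1
 where c.constraint_type = 'R'
   and not exists (
         select null
           from user_ind_columns ic
          where ic.table_name      = c.table_name
            and ic.column_name     = cc.column_name
            and ic.column_position = 1
       )
 order by c.table_name, c.constraint_name;
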
Ok, in the first place, some tool, Enterprise Manager for example, or whatever you use, will notify you about entries in the alert.log, reading roughly like “Sun May 29 02:51:07 2016 ORA-00060: Deadlock detected. More info in file H:\ORACLE\diag\rdbms\got\got\trace\got_ora_4536.trc“. When inspecting the trace file, you should focus your attention on the first 70 lines or so, in most cases neglecting the trailing process information dumps. This is what I got; anything noteworthy will be highlighted and discussed below:

(more…)

Oracle global temp tabs or nologging/append – saved redo in numbers

In an attempt to reduce the redo entries (and archive space) generated for the only temporarily computed contents of a couple of simple tables, I considered employing Global Temporary Tables as well as Nologging Inserts as redesign options. This article, however, is not meant to dive into the details of these redesign options at all. There are plenty of examples and discussions about the pros and cons of each option around the net. Most notable, and a stimulus to post my redo reduction test results in the first place, are Tim Hall’s compact articles on the subject, such as https://oracle-base.com/articles/misc/temporary-tables and https://oracle-base.com/articles/misc/append-hint, which do include an assessment of the redo matter.
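
For orientation only, the two options boil down to DDL/DML along these lines (table and column names are made up for illustration, not the ones from my actual test case):

-- option 1: a global temporary table; no redo is generated for its table
-- data (the undo it produces still causes some redo, though)
create global temporary table stage_gtt (
  id      number,
  payload varchar2(200)
) on commit preserve rows;

-- option 2: a regular NOLOGGING table populated by a direct-path (append)
-- insert, which can skip most redo for the table data (indexes, and
-- databases running in FORCE LOGGING mode, are a different story)
create table stage_heap (
  id      number,
  payload varchar2(200)
) nologging;

insert /*+ append */ into stage_heap
select level, 'some computed content'
  from dual
connect by level <= 100000;
commit;
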

So, what’s the lineup, in short? On startup, some data will be sourced from throughout the database, loosely computed and bluntly written into a couple of simple tables. Then some third party will read this data for its own purposes and report completion, such that the data can eventually be discarded completely. To make sure that an iteration step always starts unsoiled, any data left over from a crashed predecessor run will be wiped out as well. Nobody cares about backups of this data; iteration management runs on other tables. You see, this is not really something that takes a database to complete successfully but…
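
One straightforward way to put the saved redo into numbers is to read the session’s ‘redo size’ statistic before and after each test insert and take the difference, for example:

-- redo generated by the current session so far;
-- sample once before and once after the insert under test
select sn.name, ms.value
  from v$mystat   ms
  join v$statname sn
    on sn.statistic# = ms.statistic#
 where sn.name = 'redo size';
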

(more…)

Materialized views ddl and dml statuses (and the infamous NEEDS_COMPILE)

introduction

Employing materialized views (or snapshots, or mviews for short) for producing simple sum-up reports is really cool stuff. You just launch some statement like the one below and will be served a well-performing collection of data that is more or less current (that is why I call it simple). Oracle will take care of refreshing the mview by self-registering a job that calls dbms_refresh.refresh('some_mv'); the job starts immediately and takes its interval from the statement. Handing in the force keyword just chooses a fast refresh method if possible and a complete refresh method otherwise. There are more defaults applied by leaving other keywords unset, the refresh mode for example, but this is not the point of the article. It is:

Why does that … mview always show a state of NEEDS_COMPILE although no ddl action has happened along the object’s dependency chain lately?

create materialized view some_mv
  build immediate
  refresh force next (sysdate + 1/24/12)
as
select X.ts, Y.name from (
  select to_char(max(A.ts), 'dd.mm.yyyy hh24:mi') as ts, B.blubb
    from tab1 A, tab1_1 B where A.id = B.id
    group by B.blubb
) X, tab2 Y
  where X.blubb = Y.id;
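
For reference, the state in question shows up in the data dictionary roughly like this (a generic lookup, not specific to the original environment):

-- refresh and compile state of the mview as tracked by the dictionary
select mview_name, staleness, compile_state, last_refresh_type, last_refresh_date
  from user_mviews
 where mview_name = 'SOME_MV';

-- the generic object status (VALID/INVALID) is tracked separately
select object_name, object_type, status
  from user_objects
 where object_name = 'SOME_MV';
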

(more…)

Creating unique constraints (indexes) for multidimensional datasets

introduction

There sometimes exist requirements to enforce unique constraints on datasets that do not have a simple layout of distinct values over one or, if logically concatenated, more columns. For example, imagine the real-world scenario of a 1:n parent-child table relationship, where a parent table represents a number of document records and the child table holds a multitude of typed document attachment information. Imagine further that only one document attachment type, say 'TYPE_0', should serve as a master or most current attachment per document. Other document attachment types, say 'TYPE_1' and 'TYPE_2', may recur per document without any limitation.

The classic approach of a two-column unique constraint, comprising some doc_id and doc_type columns, will not be appropriate in that case. The constraint will be violated as soon as any recurrence of the document types 'TYPE_1' and 'TYPE_2' per document takes place.
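
To make the problem tangible, here is a minimal sketch on a hypothetical child table (all names are made up for illustration):

-- the classic two-column unique constraint ...
create table doc_attachments (
  doc_id   number       not null,
  doc_type varchar2(10) not null,
  content  blob,
  constraint doc_attachments_uq unique (doc_id, doc_type)
);

-- ... accepts the first 'TYPE_1' attachment of document 42 ...
insert into doc_attachments (doc_id, doc_type) values (42, 'TYPE_1');

-- ... but rejects the second one with ORA-00001, although recurring
-- 'TYPE_1' and 'TYPE_2' attachments should be allowed without limitation
insert into doc_attachments (doc_id, doc_type) values (42, 'TYPE_1');
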

a really impressive solution

The following solution is not new, nor has it been invented by me at all (thanks to M.S. for pointing it out to me), but I always go crazy about the way this very solution evolves from a merge of, at first sight, rather different database schema design techniques (see the sketch right after the list below). That is:

  • the nature of unique constraints, compared to primary key constraints, and
  • the world of index definitions, specifically unique and function-based indexes.
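
Just as a teaser, one way these two ingredients can combine, reusing the hypothetical doc_attachments table from the sketch above, is a function-based unique index that only covers the 'TYPE_0' rows:

-- drop the classic constraint from the sketch above first
alter table doc_attachments drop constraint doc_attachments_uq;

-- a function-based unique index: rows whose index expression evaluates to
-- NULL are not indexed at all, so only 'TYPE_0' rows compete for uniqueness
create unique index doc_attachments_type0_uq
  on doc_attachments (case when doc_type = 'TYPE_0' then doc_id end);

-- one 'TYPE_0' master attachment per document is fine ...
insert into doc_attachments (doc_id, doc_type) values (42, 'TYPE_0');
-- ... a second one for document 42 raises ORA-00001, while 'TYPE_1'
-- and 'TYPE_2' attachments may still recur per document without limit
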

(more…)