
Flashback version query and the proper use of timestamp and SCN clauses

Flashback version query essentially enables you to look up the past incarnations of a row (identified by its primary key) in consecutive order. The version information is exposed through a set of pseudo-columns, namely versions_xid, versions_startscn, versions_endscn, versions_starttime, versions_endtime and versions_operation. See Using Oracle Flashback Version Query in the docs for explanations.

In combination with flashback query or flashback transaction query, one may restore a past row incarnation into a new table or even roll the row back to a past incarnation within the same table.
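
For illustration, a minimal sketch only, assuming a hypothetical table t with primary key column id, hypothetical columns col1 / col2 and an SCN picked from the version information discussed below:

-- restore the past incarnation into a new table
CREATE TABLE t_restored AS
  SELECT * FROM t AS OF SCN 1234567 WHERE id = 42;

-- or roll the live row back to that incarnation in place
UPDATE t
   SET (col1, col2) = (SELECT col1, col2 FROM t AS OF SCN 1234567 WHERE id = 42)
 WHERE id = 42;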

This article discusses flashback version query together with flashback query to restore one to many rows (shown here for a single row of a unique key, for brevity), detailing when and when not to use timestamp and SCN clauses in the select statements in order to avoid pitfalls. An example table and dataset will be given, representing a real-world scenario where some past data first needs to be identified and is then to be made available again.

Flashback version query uses the following pattern, including the pseudo-columns introduced above, against an actual application table, not a system table (as flashback transaction query does). A timestamp or SCN range must be supplied to define the lookup window (bounded by the stock of available undo data, remember) and to actually populate the pseudo-columns:
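
A minimal sketch of that pattern, assuming the hypothetical table t with primary key column id from above and an arbitrary one-hour lookup window (SCN bounds or MINVALUE / MAXVALUE work just as well in the VERSIONS BETWEEN clause):

-- list all row versions the available undo still covers within the window
SELECT versions_xid, versions_operation,
       versions_startscn, versions_endscn,
       versions_starttime, versions_endtime,
       col1, col2
  FROM t VERSIONS BETWEEN TIMESTAMP
         SYSTIMESTAMP - INTERVAL '1' HOUR AND MAXVALUE
 WHERE id = 42
 ORDER BY versions_startscn;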

(more…)

RMAN: duplicating an Oracle database in a new SID / two-host scenario

Introduction

There are many ways to clone an Oracle database. You may just copy a cold backup and bring it up, do an RMAN restore / recover, or employ Data Pump. Matching the number of possibilities, there is a multitude of things you have to evaluate in advance, prepare, configure, keep in mind, post-process and so on for every clone procedure. I mostly clone by cold backup, preferably when cloning to another host is requested and the database system identifier (SID) remains the same. Although this way of cloning needs almost no additional work, you have to take into account that copying the (cold) files from source to destination may take a considerable amount of time, especially for large databases (I am going to show in another post how to accelerate that by SAN snapshotting, a technique that has deservedly become popular lately).

If you cannot accept any source database downtime at all, the RMAN duplicate functionality might be the preferred choice. It is a sort of RMAN restore / recover and can therefore live on a hot backup, but it saves you from playing around with controlfile backups / scripts or the nid tool, for example. More than that, RMAN duplicate even uses a live controlfile (the source's) for automated database structure setup on the destination. But now let's see how an RMAN duplicate may work for a given scenario.
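
As a rough sketch only (instance names, file paths and network connectivity are hypothetical assumptions here; this variant duplicates from the active database, a backup-based duplicate works along the same lines), with the auxiliary instance for the new SID already started NOMOUNT:

# connect to the source (target) and the auxiliary (clone) instance
rman TARGET sys@srcdb AUXILIARY sys@dupdb

# duplicate to the new SID, converting datafile and redo log paths on the fly
DUPLICATE TARGET DATABASE TO dupdb
  FROM ACTIVE DATABASE
  SPFILE
    PARAMETER_VALUE_CONVERT 'srcdb', 'dupdb'
    SET db_file_name_convert '/u01/oradata/srcdb/', '/u01/oradata/dupdb/'
    SET log_file_name_convert '/u01/oradata/srcdb/', '/u01/oradata/dupdb/'
  NOFILENAMECHECK;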

(more…)