snippets

Flashback version query and the proper use of timestamp and scn clauses

Flashback version query essentially enables you to look up the successive incarnations of a row (identified by its primary key) in the past. Version information is exposed through a set of pseudo-columns, namely versions_xid, versions_startscn, versions_endscn, versions_starttime, versions_endtime and versions_operation. See Using Oracle Flashback Version Query in the docs for explanations.

In combination with flashback query or flashback transaction query, one may restore a past row incarnation into a new table or even roll back to a past incarnation within the same table.

This article discusses flashback version query together with flashback query to restore one to many rows (shown for a single row of a unique key here, for brevity), detailing when and when not to use timestamp and scn where clauses to avoid pitfalls. An example table and dataset will be given, representing a real-world scenario where some past data needs to be identified first and is then to be made available again.

Flashback version query uses the following pattern, including the pseudo-columns introduced above, against an actual application table, not a system table (unlike flashback transaction query). A timestamp or scn range must be supplied to define the lookup window (bounded by the stock of available undo data, remember) and to actually populate the pseudo-columns, respectively:

SELECT pseudo_col1, ..., app_col1, ... FROM app_tab
VERSIONS BETWEEN
  { SCN | TIMESTAMP } { expr | MINVALUE }
  AND { expr | MAXVALUE }
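
For illustration, a minimal sketch against a hypothetical emp table (primary key empno, values made up), listing all incarnations of one row that the available undo still covers:

-- hypothetical table and key, unbounded lookup window
SELECT versions_xid, versions_startscn, versions_endscn, versions_operation,
       empno, sal
  FROM emp
  VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
 WHERE empno = 7369;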

Flashback query then provides the historic data selection in terms of application-table columns, typically to run a create table as select ..., if you like, supplying an as of timestamp or scn selector:

-- CTAS, maybe
SELECT app_col1, ... FROM app_tab
  AS OF { SCN | TIMESTAMP } expr
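
Again on the hypothetical emp table: once flashback version query has narrowed down the incarnation of interest, a CTAS against a point in time just before the change restores it into a new table (the one hour offset is merely an example selector):

-- restore the one hour old state of the row into a new table
CREATE TABLE emp_restore AS
  SELECT empno, ename, sal
    FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
   WHERE empno = 7369;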

All theory so far, yet before the real example, make sure you have grasped what this trinity in fact means (a sketch of the third member follows the list):

  • Flashback version query
  • Flashback query
  • Flashback transaction query
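
For the third member, flashback transaction query, a minimal sketch (assuming select privilege on the flashback_transaction_query view; the xid literal is made up and would in practice be taken from a version query's versions_xid):

SELECT operation, table_name, undo_sql
  FROM flashback_transaction_query
 WHERE xid = HEXTORAW('0600110034020000');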

Also be prepared to get down to pen and paper without hesitation when it comes to history lookup time windows and row-level incarnations of data. It’s quite tricky and bewildering at times, so stay focused and have a sketch of the timelines at hand (for https://en.wikipedia.org/wiki/Primer_(film), for example, a visualization like https://en.wikipedia.org/wiki/File:Time_Travel_Method-2.svg may come in very handy).



Getting a raw constant number of rows from oracle’s table sample function

You may of course know the two famous posts To sample or not to sample… (and part 2) about data sampling by Mark Hornick. Although limited in scope, the two posts (imho) sketch very well why we may employ data sampling and how we may get table sampling off the ground in oracle.
In general, sampling is used to make a representative statement about a collection of data while only inspecting a limited random selection, the sample. As long as you are ok with analyzing just a sufficient subset of your 10 million row table, you will save your environment a lot of resources and time. In another scenario, a limited random data selection may also serve verification or testing purposes where, however, not the representativeness but the randomness at a more or less constant sample size determines the quality of the sample output. Again, as long as you are ok with not exceeding that 15 minute time window overnight, you will be allowed to run that live unit test on any table in question, be it 1, 10 or 100 million rows.
In sql, selecting for a representative statement feeds the sample function a requested percentage of rows to sample. This is what the oracle sample function already offers. A variant that instead accepts a requested actual number of rows to return, independent of the table size, is not available so far (although most people expect exactly this behaviour when they spot the sql sample function for the first time, oddly enough). The following text outlines a pl/sql snippet that provides a sample function accepting the expected number of rows as a parameter.
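
As a teaser, here is a minimal pl/sql sketch of the idea (table and function names are hypothetical): since sample() only accepts a literal percentage, the percentage is derived from the requested row count at runtime and glued into dynamic sql, oversampling a little and trimming the surplus with rownum:

create or replace function sample_n_rows(
  p_tab  in varchar2,
  p_rows in pls_integer
) return sys_refcursor is
  l_tab varchar2(130) := dbms_assert.simple_sql_name(p_tab);
  l_cnt number;
  l_pct number;
  l_cur sys_refcursor;
begin
  execute immediate 'select count(*) from ' || l_tab into l_cnt;
  if l_cnt <= p_rows then
    -- the request covers the table anyway, no sampling needed
    open l_cur for 'select * from ' || l_tab;
  else
    -- oversample by some 20% (sample() only approximates the row count),
    -- staying within the legal percentage range [0.000001, 100)
    l_pct := greatest(0.000001, least(99.999999, 120 * p_rows / l_cnt));
    open l_cur for
      'select * from ' || l_tab ||
      ' sample (' ||
      to_char(l_pct, 'fm990.999999', 'nls_numeric_characters=''.,''') ||
      ') where rownum <= ' || p_rows;
  end if;
  return l_cur;
end;
/

In sql*plus, something like variable rc refcursor; exec :rc := sample_n_rows('APP_TAB', 100); print rc would then fetch roughly, but at most, 100 random rows.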


Scheduling / descheduling linux host reboots via shutdown

Scheduling and/or descheduling linux host reboots is possible with the shutdown -r command using its time parameter (the reboot command, which I usually prefer for clarity, doesn’t feature a time parameter, so shutdown -r is the only choice here). Aside from discussing the quite straightforward man page of shutdown, there are two points here to register in your knowledge cells.
First, a (scheduled) shutdown -r hh24:mi execution puts itself into the background, no need to use job tools or an &. shutdown -r hh24:mi actually puts systemd-shutdownd in charge of serving the party; this is what you’ll expect to see in your running process list when looking for some command effect. Also, a running scheduled shutdown may be cancelled using shutdown -c any time before hh24:mi. Note however that from around five minutes before hh24:mi, you’ll no longer be allowed to log in to the machine, essentially impeding any further control from your side.
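
A minimal round trip, as a sketch (the time of day is just an example):

# schedule a reboot for 04:30, the command returns immediately
shutdown -r 04:30
# look for systemd-shutdownd, not a lingering shutdown process, to verify
ps -e | grep systemd-shutdown
# deschedule again, any time (well) before 04:30
shutdown -c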


Using the DataImportHandler XPathEntityProcessor on a Database Resultset Column

The Solr documentation for XPathEntityProcessor introduces a specialized subtype of EntityProcessor that is primarily intended to process data (to be) imported from xml/http datasources (for example, Usage with XML/HTTP Datasource). However, using XPathEntityProcessor on a FieldReaderDataSource instead of the original URLDataSource or HttpDataSource (search for FieldReaderDataSource in Uploading Structured Data Store Data with the Data Import Handler) enables reading xml instances contained in columns delivered from database requests through SqlEntityProcessor.
Bewildered out of words and meanings…? Don’t worry, the following will give you a living example of how to easily craft the xml on an Oracle database and what to do on the Solr side to map the information items into index fields. To me, this is really a nice example of how to employ xml in the true sense of a defined (well-formedness, encoding) data exchange layer, hiding most if not all of the implementation details of xml processing on the database and on the search engine. Note however that this great time-to-market, technically achieved through xml processing, always comes at a certain extra cost, such that the xml instances should not become too large for this solution pattern. As one step of optimization, I will also use xml attributes instead of tags for small size values in the xml generation.
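
The following data-config.xml fragment sketches the wiring (connection details, table, column and field names are all made up; adapt the column case to what your jdbc driver returns): an outer SqlEntityProcessor entity selects the xml column, an inner XPathEntityProcessor entity then reads that column through the FieldReaderDataSource, mapping xml attributes to index fields:

<dataConfig>
  <dataSource name="db"  driver="oracle.jdbc.OracleDriver"
              url="jdbc:oracle:thin:@//dbhost:1521/orcl" user="app" password="secret"/>
  <dataSource name="fld" type="FieldReaderDataSource"/>
  <document>
    <!-- outer entity: the sql resultset, XML_DOC holds the xml instance -->
    <entity name="rec" dataSource="db" query="select id, xml_doc from app_tab">
      <field column="ID" name="id"/>
      <!-- inner entity: parse the xml delivered in column rec.XML_DOC -->
      <entity name="x" dataSource="fld" processor="XPathEntityProcessor"
              dataField="rec.XML_DOC" forEach="/doc">
        <field column="title" xpath="/doc/@title"/>
      </entity>
    </entity>
  </document>
</dataConfig>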


Rman duplicating an oracle 11g database in active state on the same host

I already published a post about rman> duplicate ... a couple of years ago (Rman duplicating an oracle database in a new sid two host scenario), still on 10g at that time and using a backup set being transferred to a new host. With 11g, however, rman> duplicate ... offers another option to restore not just from a source backup, leaving the source online, but from an up and running database (which does require archivelog mode and some rman-catalogued entries in the control file or catalogue, though). Below, therefore, I’m going to show the steps for an rman> duplicate ... from active database ... in a same host scenario, on windows again, using orapwd and oradim as well as lsnrctl this time. The main difference, however, is employing the spfile clause of the duplicate command, such that rman will set up the destination spfile on its own. Only some file name mappings, as before, need to be specified. My main reference to review the new features was Duplicating a Database from the oracle 11g1 docs; other references, concerning errors that showed up along the way, will be given below.

Ok, working on the same host, nothing is due for software installation and stuff, and we can immediately set up the new instance (note that the source will be denoted tgt, for target, and the destination aux, for auxiliary, respectively). Firstly, we create a new password file for the destination, with the same sysdba password as on the source.

cd /d e:\oracle\product\11.2.0\dbhome_1\database
orapwd file=PWDAUX.ora ignorecase=y force=y
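
Just to preview where this is heading, the core call will eventually look something like the following sketch (passwords, service names and the string mappings are examples, to be adapted to your layout):

rman target sys/secret@tgt auxiliary sys/secret@aux

duplicate target database to aux
  from active database
  spfile
    parameter_value_convert 'tgt', 'aux'
    set db_file_name_convert 'TGT', 'AUX'
    set log_file_name_convert 'TGT', 'AUX';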
