
XMLType.schemaValidate is your only true friend for xmlschema validation


When it comes to introducing xml data into your database storage, you might, as all sincere developers do, at first attempt to take care of proper data integrity checking on import. Since xml is a really powerful but somewhat complex document-like data representation, such integrity checking must incorporate proving the document structure, a sort of integrated data types, if you like, as well as the document data, in terms of facets of the given actual values. According to the xml standard, xmlschema is the means of choice here, offering another xml-spelled specification layer to achieve the two aforementioned goals. Eventually, a given xml instance will have to be thrown against the xmlschema provided, to assure its integrity (and its well-formedness too, btw), the earlier the better, at best on data import already.

The paragraph above is actually not very oracle specific. Any implementation of the xml standard, as oracle’s xml db is, proposes this course of action. However, since some xml operations may become quite costly when the xml instances get large and the whole dom tree has to be set up in memory, oracle, like others, tries to dodge and defer pricey work to the most reasonable extent possible. You may load xml into the database claiming any xml instances are just fine, validate xml instances only to be well-formed, manage the validation status of xml instances on your own responsibility and so forth. There is nothing wrong with that whatsoever. The point is, though, that it may be a stony walkway to learn to distinguish the maybe from the certain, that is, to establish a reliable check that safeguards your xml data integrity just as simply as ALTER TABLE mytab ADD(CONSTRAINT mytab_CHK CHECK(INSERT_TS is not null) NOT DEFERRABLE ENABLE); does.
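To make the maybe concrete up front, here is a sketch of the weaker status checks, assuming a hypothetical table MYTAB(DOC XMLTYPE) and a schema registered under the likewise hypothetical URL http://example.com/mydoc.xsd:

-- well-formedness only: the XMLType constructor succeeds for any parsable xml
SELECT XMLTYPE('<any><content/></any>') FROM dual;

-- isSchemaValid() merely returns 1 or 0, it does not raise an error,
-- so by itself it enforces nothing (table and schema URL are assumptions)
SELECT x.doc.isSchemaValid('http://example.com/mydoc.xsd') AS is_valid
FROM   mytab x;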

Yep, what next? I’m going to briefly discuss the xmlschema / xml instance used below and then show what I experienced will happen using the various ways of “validating” the instance against the schema. The post is somewhat lengthy, but do not get frightened, that is due to the c&p examples all along the way.
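And to give the gist of the title away: a minimal sketch of a hard gatekeeper on import, using the same hypothetical table and schema URL as above. schemaValidate() performs a full validation and raises an exception on any non-conforming instance, so invalid documents never get in:

-- a sketch only: table, trigger name and schema URL are assumptions
CREATE OR REPLACE TRIGGER mytab_validate_trg
  BEFORE INSERT OR UPDATE OF doc ON mytab
  FOR EACH ROW
DECLARE
  l_doc XMLTYPE;
BEGIN
  -- bind the instance to the registered schema, then validate strictly;
  -- an invalid or non-well-formed instance raises and aborts the DML
  l_doc := :NEW.doc.createSchemaBasedXML('http://example.com/mydoc.xsd');
  l_doc.schemaValidate();
END;
/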

(more…)

The look of: transaction deadlock due to missing foreign key index


This post is about transaction deadlocks resulting from an index missing on foreign key column(s). Actually, the post will not dive into the odds resulting from this failure but show how to detect this dedicated reason behind an “ORA-00060: deadlock detected while waiting for resource” message, in particular, what to look out for in the trace file and how to find the table objects (instead of rows) being deadlocked against each other. The best resource on the web tackling this subject is of course (Reading deadlock trace files) from Tom Kyte, which also guided me along the way of a “no row deadlock”, see below.
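As a quick preventive check before reading any trace files, here is a sketch of the classic unindexed-foreign-keys query. It is simplified in that it only tests whether the leading foreign key column is also the leading column of some index, which covers the common case:

-- flags foreign keys whose leading column does not lead any index;
-- multi-column permutations are not fully covered by this sketch
SELECT c.table_name, c.constraint_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc
       ON  cc.constraint_name = c.constraint_name
       AND cc.position        = 1
WHERE  c.constraint_type = 'R'
AND    NOT EXISTS (
         SELECT 1
         FROM   user_ind_columns ic
         WHERE  ic.table_name      = cc.table_name
         AND    ic.column_name     = cc.column_name
         AND    ic.column_position = 1);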
OK, first of all, some tool, Enterprise Manager for example, or whatever, will notify you about entries in the alert.log, reading roughly like “Sun May 29 02:51:07 2016 ORA-00060: Deadlock detected. More info in file H:\ORACLE\diag\rdbms\got\got\trace\got_ora_4536.trc“. Inspecting the trace file, you should focus your attention on the first 70 lines, in most cases neglecting the trailing process information dumps. I got this, anything noteworthy will be highlighted and discussed below:

(more…)

Using the DataImportHandler XPathEntityProcessor on a Database Resultset Column


The Solr documentation for XPathEntityProcessor introduces a specialization subtype of EntityProcessor that is primarily intended to process data (to be) imported from xml/http datasources (for example, Usage with XML/HTTP Datasource). However, using XPathEntityProcessor on a FieldReaderDataSource instead of the original URLDataSource or HttpDataSource (search for FieldReaderDataSource in Uploading Structured Data Store Data with the Data Import Handler) enables reading xml instances contained in columns delivered from database requests through SqlEntityProcessor.
Bewildered out of words and meanings…? Don’t worry, the following will give you a living example of how to easily craft the xml from an Oracle database and what to do on the Solr side to map the pieces of information into indexing fields. To me, this is really a nice example of how to employ xml in a true sense of a defined (well-formedness, encoding) data exchange layer, hiding most if not all of the implementation details of xml processing on the database and on the search engine. Note however, that this great time-to-market, gained through xml processing technically, always comes at a certain extra cost, such that the xml instances shall not become too large for this solution pattern. I will also use xml attributes for small size values, instead of tags, in the xml generation as one step of optimization.
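To sketch the database side of the pattern (table and column names are made up for illustration): small values go into attributes, the large text into a tag, so the per-row xml stays compact for FieldReaderDataSource.

-- a sketch assuming a hypothetical table DOCS(ID, TITLE, BODY)
SELECT XMLELEMENT("doc",
         XMLATTRIBUTES(d.id AS "id", d.title AS "title"),
         XMLELEMENT("body", d.body)
       ).getClobVal() AS doc_xml
FROM   docs d;

On the Solr side, roughly, a SqlEntityProcessor entity runs a query like this, and a nested XPathEntityProcessor entity, reading the doc_xml column via FieldReaderDataSource, maps /doc/@id, /doc/@title and /doc/body to index fields.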

(more…)

Loop over sqlplus connection identifiers in a dos shell


Nearly at the same season last year, I wrote about ways and uses of piping sqlplus commands into a sqlplus session from a dos shell script (see: Piping newlined commands into sqlplus from a dos shell). The pattern has worked fairly well so far, such that I lately employed the technique more often to deploy code changes around database instances. However, while the piping stuff relieved me from writing that show errors and exit over and over again for every sql script, I was still forced to duplicate script code for every database identifier and database session in action.

There was a point, eventually, when I felt I needed to take this further, introduce a loop like in any programming language, in fact suffer this quirky dos shell syntax (a ss64 syntax redirection article was of great help) and just do it. Ok, it took some time to grasp this enabledelayedexpansion thing, the array syntax and so on, but here you go. Note the doubled percent signs prefixing the loop variable, the array-like list for the tns variable and, again, the tremendous clear text password use within a script. Ahem, yes, compared to the predecessor article, you do not need the ampersand sign anymore when the individual echo commands occupy their own distinct lines (changed it over there).

@echo off
rem expand variables at execution time rather than at parse time
setlocal enabledelayedexpansion
set USR=joe
rem clear text password, see the warning above
set PWD=joe
rem the list of tns aliases to loop over
set TNS=(db1 db2)

chcp 1252
cd /D d:\database

rem pipe one newlined command block into sqlplus per tns alias
for %%I in %TNS% do (
  (echo set timing off
    echo select '%USR%/%%I' as connection from dual;
    echo @package1.pck
    echo @package2.sql
    echo @package3.sql
    echo exit
  ) | sqlplus -s %USR%/%PWD%@%%I
)

Have fun, Peter

Oracle global temp tabs or nologging/append – saved redo in numbers


In an attempt to reduce the redo entries (and archive space) generated for the only temporarily computed contents of a couple of simple tables, I considered employing Global Temporary Tables as well as Nologging Inserts as redesign options. This article, however, is not meant to dive into the details of these redesign options at all. There are plenty of examples and discussions about the pros and cons of each option around the net. Most notable, and a stimulus to post just my redo reduction test results, are Tim Hall’s compact articles on the subject, https://oracle-base.com/articles/misc/temporary-tables and https://oracle-base.com/articles/misc/append-hint, which do include an assessment of the redo matter.

So, what’s the lineup, for short? On startup, some data will be sourced from throughout the database, loosely computed and crudely written into a couple of simple tables. Then some third party will read this data for its own purposes and will report completion, such that the data can eventually be discarded completely. In making sure that an iteration step always starts unsoiled, any data left over from a crashed predecessor run will be rubbed out as well. Nobody cares about backups of this data, iteration management runs on other tables. You see, this is not really something that takes a database to complete successfully but…
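For reference, a minimal sketch of the two candidates and of how the redo numbers can be read off (table names and payload are made up; mind that the APPEND hint only skips redo for the table data when the table is NOLOGGING and the database does not force logging):

-- option 1: a global temporary table; inserts generate undo
-- (and some redo for that undo), but no redo for the table data
CREATE GLOBAL TEMPORARY TABLE work_gtt (
  id  NUMBER,
  val VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- option 2: a nologging heap table filled via direct-path insert
CREATE TABLE work_heap (
  id  NUMBER,
  val VARCHAR2(100)
) NOLOGGING;

INSERT /*+ APPEND */ INTO work_heap
SELECT level, 'payload ' || level
FROM   dual CONNECT BY level <= 100000;
COMMIT;

-- redo generated by the current session so far, for before/after comparison
SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'redo size';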

(more…)