
Grepping the actual bind values of sql through v$sql_bind_capture


Whether coded programmatically or intercepted by forced cursor sharing, the actual values of a sql where clause should always be bound to variable placeholders and never, or no longer, be hard-coded as literals. You know Tom Kyte telling that story on each and every occasion, pointing out that so many sql areas got stuck in size and performance because of literal values and the resulting multitude of variants of otherwise identical sql statements. On the other hand, people complain about the sporadic issue that the optimizer takes wrong decisions due to one-time probing of actual values in a given session, a nightmare in 10g, meanwhile much improved by adaptive cursor sharing throughout 12c. Well, I do not want to dive into this again; what follows is just a commented recipe sql statement to inspect what actual bind values have been in action for a (potentially) sub-optimal sql execution. Please note that the STATISTICS_LEVEL initialization parameter needs to be set to a value greater than BASIC (that is, TYPICAL or ALL) for the statement to deliver any data.

The statement is neither complex nor long-running, just using the two performance views v$sql_bind_capture and v$sqlarea. While v$sql_bind_capture provides the bind names, values, capture state and capture time, v$sqlarea offers different ways to approach the subject, by schema, sql-id, module etc., and, not least, the raw text of the statement in question.
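To give a flavour, here is a minimal sketch of such a recipe (not necessarily the exact statement behind the cut); the :sql_id bind is illustrative and easily swapped for a filter on parsing_schema_name or module:

-- list captured bind values for a given sql_id; requires
-- STATISTICS_LEVEL = TYPICAL or ALL, otherwise nothing is captured
select s.sql_id,
       s.parsing_schema_name,
       s.module,
       b.child_number,
       b.name,               -- bind placeholder name, e.g. :1 or :p_id
       b.position,
       b.datatype_string,
       b.was_captured,       -- capture state, YES/NO
       b.last_captured,      -- capture time
       b.value_string,       -- the actual value, rendered as a string
       s.sql_text
  from v$sqlarea          s
  join v$sql_bind_capture b
    on b.sql_id = s.sql_id
 where s.sql_id = :sql_id
 order by b.child_number, b.position;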

(more…)


Getting a raw constant number of rows from oracle’s table sample function


You may of course know the two famous posts called To sample or not to sample… (part-2) about data sampling by Mark Hornick. Although limited in scope, the two posts (imho) sketch very well why we may employ data sampling and how we may get table sampling off the ground in oracle.
In general, sampling is used to make a representative statement about a collection of data while only regarding a limited random selection, the sample. As long as you are ok with analyzing just a sufficient subset of your 10 million row table, you will save your environment a lot of resources and time. In some other scenario, a limited random data selection may also serve verification or testing purposes where, however, not the representativeness but the randomness at a more or less constant sample size determines the quality of the sample output. Again, as long as you manage not to exceed this 15 minutes time window overnight, you will be allowed to run that live unit test on any table in question, be it 1, 10 or 100 million rows.
In sql, selecting with the aim of a representative statement will feed the sample function with a requested percentage of rows to sample from the table. This is what the oracle sample function already offers. A variant that accepts a requested actual number of rows to return, independent of the table size, is not available so far (although most people expect exactly this behaviour when they spot the sql sample function for the first time, which is weird). The following text will outline a snippet of pl/sql providing a sample function that accepts the expected number of rows as a parameter.
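As a minimal sketch of the idea (not the snippet behind the cut; the names sample_pct, p_table and p_rows are made up here), one can translate a requested row count into a sample percentage derived from the current table size:

-- translate a requested row count into a sample percentage;
-- note that sample is probabilistic, so the returned row count
-- will only be roughly p_rows, not exactly
create or replace function sample_pct (
  p_table in varchar2,
  p_rows  in number
) return number is
  l_total number;
begin
  -- exact but potentially costly; optimizer statistics
  -- (user_tables.num_rows) would be a cheaper, approximate alternative
  execute immediate
    'select count(*) from ' || dbms_assert.sql_object_name(p_table)
    into l_total;
  if l_total = 0 then
    return null;
  end if;
  -- sample accepts percentages from 0.000001 up to, but not including, 100
  return least(greatest(p_rows / l_total * 100, 0.000001), 99.999999);
end sample_pct;
/

Since the sample clause only takes literals, the call site has to assemble the statement dynamically, for example:

declare
  l_cur sys_refcursor;
begin
  -- an explicit format mask keeps the decimal point NLS-safe
  open l_cur for
    'select * from big_table sample (' ||
    to_char(sample_pct('BIG_TABLE', 1000), 'FM990D999999',
            'nls_numeric_characters=''.,''') || ')';
  -- fetch from l_cur as usual, then close
  close l_cur;
end;
/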

(more…)

XMLType.schemaValidate is your only true friend for xmlschema validation


When it comes to introducing xml-data into your database storage, you might, as all sincere developers do, at first attempt to take care of proper data integrity checking on import. Since xml is a really powerful but somewhat complex document-alike data representation, such integrity checking must incorporate verifying the document structure, as a sort of integrated data types, if you like, as well as the document data, in terms of facets of the given actual values. According to the xml standard, xmlschema is the means of choice here, offering another xml-spelled specification layer to achieve the two aforementioned goals. Eventually, a given xml-instance will have to be thrown against the xmlschema provided, to assure its integrity (and its well-formedness too, btw), the earlier the better, at best on data import already.

The paragraph above is actually not very oracle specific. Any implementation of the xml standard, as oracle’s xml db is, proposes this course of action. However, since some xml operations may become quite costly when the xml-instances get large and the whole dom tree has to be set up in memory, oracle, like others, dabbles at dodging and shifting pricey work to the most reasonable extent possible. You may load xml into the database claiming any xml-instances are just fine, validate xml-instances only to be well-formed, manage the validation status of xml-instances at your own responsibility and so forth. There is nothing wrong about that whatsoever. The point is, though, that it might be a stony walkway to learn to distinguish the maybe from the certain, that is, to establish a reliable check to safeguard your xml data integrity, just as simple as ALTER TABLE mytab ADD(CONSTRAINT mytab_CHK CHECK(INSERT_TS is not null) NOT DEFERRABLE ENABLE);.

Yep, what next? I’m going to briefly discuss the xmlschema / xml-instance used below and then show what I experienced will happen using the various ways of “validating” the instance against the schema. The post is somewhat lengthy, but do not get frightened, that is owed to the c&p examples all along the way.
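As a teaser, here is a minimal sketch of the reliable variant the title promises (the instance and the schema url are illustrative, not from the examples behind the cut):

declare
  l_doc xmltype;
begin
  -- parse the instance, then associate it with a schema that has been
  -- registered beforehand via dbms_xmlschema.registerSchema
  l_doc := xmltype('<po xmlns="http://example.com/po"><id>4711</id></po>');
  l_doc := l_doc.createSchemaBasedXML('http://example.com/po.xsd');
  -- schemaValidate raises an exception (ORA-31154 and friends) on any
  -- violation, in contrast to isSchemaValid, which merely flags 0 or 1
  l_doc.schemaValidate();
end;
/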

(more…)

The look of: transaction deadlock due to missing foreign key index


This is about transaction deadlocks resulting from an index missing on foreign key column(s). Actually, the post will not dive into the odds resulting from this failure but show how to detect this dedicated reason for an “ORA-00060: deadlock detected while waiting for resource” message, in particular, what to look out for in the trace file and how to find the table objects (instead of rows) being deadlocked against each other. The best resource on the web tackling this subject is of course Reading deadlock trace files from Tom Kyte, which also guided me along the way of a “no row deadlock”, see below.
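As an aside, before any trace file analysis, candidate tables can be spotted upfront by hunting for unindexed foreign keys. The following is a common, simplified helper query (my sketch, not the post’s method), which only matches index columns position by position:

-- list foreign key columns lacking a matching index column at the
-- same position; simplified: composite keys with reordered index
-- columns are not fully covered
select c.table_name,
       c.constraint_name,
       cc.column_name,
       cc.position
  from user_constraints  c
  join user_cons_columns cc
    on cc.constraint_name = c.constraint_name
 where c.constraint_type = 'R'
   and not exists (
         select null
           from user_ind_columns ic
          where ic.table_name      = cc.table_name
            and ic.column_name     = cc.column_name
            and ic.column_position = cc.position
       )
 order by c.table_name, c.constraint_name, cc.position;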
Ok, in the first place, some tool, Enterprise Manager for example, or whatever else, will notify you about entries in the alert.log, reading roughly like “Sun May 29 02:51:07 2016 ORA-00060: Deadlock detected. More info in file H:\ORACLE\diag\rdbms\got\got\trace\got_ora_4536.trc“. Inspecting the trace file, you should focus your attention on the first 70 lines, in most cases neglecting the trailing process information dumps. I got this, anything noteworthy will be highlighted and discussed below:

(more…)

Using the DataImportHandler XPathEntityProcessor on a Database Resultset Column


The Solr documentation for XPathEntityProcessor introduces a specialization subtype of EntityProcessor that is primarily intended to process data (to be) imported from xml/http-datasources (for example, Usage with XML/HTTP Datasource). However, using XPathEntityProcessor on a FieldReaderDataSource instead of the original URLDataSource or HttpDataSource (search for FieldReaderDataSource in Uploading Structured Data Store Data with the Data Import Handler) enables reading xml instances contained in columns delivered from database requests through SqlEntityProcessor.
Bewildered out of words and meanings…? Don’t worry, the following will give you a living example of how to craft the xml from an Oracle database easily and what to do on the Solr side to map the information datums into indexing fields. To me, this is really a nice example of how to employ xml in a true sense of a defined (well-forming, encoding) data exchange layer, hiding most if not all of the implementation details of xml processing on the database and on the search-engine. Note however, that this great time-to-market, through xml processing technically, always comes at a certain extra cost such that the xml-instances shall not become to large for this solution pattern. I will also use xml attributes for small size values instead of tags in the xml generation as one step of optimization.
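To give a flavour of the database side, here is a minimal sketch (table and column names are made up, not from the original example) of generating one compact xml instance per row, with attributes carrying the small values:

-- one xml document per row; xmlattributes keeps small values compact,
-- xmlserialize renders the instance as a clob resultset column
select xmlserialize(document
         xmlelement("doc",
           xmlattributes(d.doc_id as "id", d.title as "title"),
           xmlelement("body", d.body_text)
         ) as clob
       ) as doc_xml
  from documents d;

On the Solr side, an XPathEntityProcessor backed by a FieldReaderDataSource would then read the doc_xml column of the surrounding SqlEntityProcessor entity and map /doc/@id, /doc/@title and /doc/body onto index fields.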

(more…)