Oracle Database 11g Administrator Certified Professional, finally


Last Thursday. Glad I managed to get this completed alongside the everyday work. I took the Oracle Database 11g: New Features for Administrators (1Z0-050) upgrade path (Oracle Database 11g – Certification Path). Oddly, the exam covers quite a few topics that are really expensive in terms of the (additional) Enterprise Edition options that would have to be licensed just to set them up for practice (Oracle Technology Global Price List).

Peter

Using the DataImportHandler XPathEntityProcessor on a Database Resultset Column


The Solr documentation for XPathEntityProcessor introduces a specialized subtype of EntityProcessor that is primarily meant to process data imported from XML/HTTP data sources (see, for example, Usage with XML/HTTP Datasource). However, using XPathEntityProcessor with a FieldReaderDataSource instead of the original URLDataSource or HttpDataSource (search for FieldReaderDataSource in Uploading Structured Data Store Data with the Data Import Handler) makes it possible to read XML instances contained in columns returned by database queries through SqlEntityProcessor.
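
To give the idea some shape right away, here is a minimal sketch of such a data-config.xml. The table items, the columns id and payload_xml, and the connection details are made-up placeholders for illustration, not taken from the actual setup:

cat > conf/db-data-config.xml <<'EOF'
<dataConfig>
  <!-- JDBC source feeding the SqlEntityProcessor (connection data is hypothetical) -->
  <dataSource name="db" type="JdbcDataSource" driver="oracle.jdbc.OracleDriver"
              url="jdbc:oracle:thin:@//dbhost:1521/orcl" user="solr" password="secret"/>
  <!-- FieldReaderDataSource lets XPathEntityProcessor read from a parent entity column -->
  <dataSource name="fld" type="FieldReaderDataSource"/>
  <document>
    <entity name="row" dataSource="db" processor="SqlEntityProcessor"
            query="select id, payload_xml from items">
      <field column="id" name="id"/>
      <entity name="payload" dataSource="fld" processor="XPathEntityProcessor"
              dataField="row.payload_xml" forEach="/item">
        <!-- small values travel as attributes, see the generation step below -->
        <field column="name"  xpath="/item/@name"/>
        <field column="price" xpath="/item/@price"/>
        <field column="description" xpath="/item/description"/>
      </entity>
    </entity>
  </document>
</dataConfig>
EOF
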
Bewildered by all these words and meanings…? Don’t worry, the following will give you a living example of how to easily craft the XML from an Oracle database and what to do on the Solr side to map the pieces of information onto index fields. To me, this is really a nice example of how to employ XML in the true sense of a defined (well-formedness, encoding) data exchange layer, hiding most if not all of the implementation details of XML processing on the database and on the search engine. Note, however, that this great time-to-market, bought with XML processing, always comes at a certain extra cost, so the XML instances must not grow too large for this solution pattern. As one optimization step, I will also use XML attributes instead of tags for small values when generating the XML.
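
The payload_xml column from the sketch above could, for instance, be produced with Oracle's SQL/XML functions. Again a hedged sketch against the hypothetical items table, with the small values as attributes and the longer description as an element:

sqlplus -s solr/secret@orcl <<'EOF'
-- generate one XML instance per row; attributes carry the small values
select i.id,
       xmlserialize(content
         xmlelement("item",
           xmlattributes(i.name as "name", i.price as "price"),
           xmlelement("description", i.description))
         as clob) as payload_xml
from items i;
EOF
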

(more…)

Log-dedicated loop device throughput and time overhead on btrfs 4.x


This is again about real-world numbers (which I like so much for being authentic 😉). The context is the throughput and time overhead of a loop device, as a poor (or late) man’s replacement for a real disk partition, yep, I know, on the copy-on-write filesystem btrfs, dedicated exclusively to logging, that is, appending over and over. Why? The loop device may overflow with data without affecting the underlying filesystem; see Btrfs subvolume quota still in its infancy with btrfs version 4.2.2 for more of the why’s and for what I tried to get btrfs subvolume quota to work. By the way, see debian org Btrfs for a down-to-earth assessment of btrfs to date, which even offers a recommendation on the earliest version (4.4) to consider anywhere near production. Anyway, what follows adapts a test setup from Performance of loopback filesystems, prime credits go there, and expands the layout somewhat for the btrfs C or nodatacow flag. Here we go.
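
For the nodatacow side of the layout, the essential point is that the btrfs C flag must be set on the backing file while it is still empty. A sketch only, with made-up paths mirroring the baseline below:

mkdir /tmp/loop1
touch /tmp/loop1/file
chattr +C /tmp/loop1/file   # btrfs nodatacow: only effective while the file is empty
lsattr /tmp/loop1/file      # should list the C attribute
dd if=/dev/zero bs=1M of=/tmp/loop1/file oflag=sync count=1000
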

Here is a baseline test, if you like, on the raw iron. Well, it’s not really raw iron, it’s VMware with tons of storage below, and the shown performance is terrible, I know, and I only take one test set of bs / count, but that won’t matter. It’s something to start off from.

mkdir /tmp/loop0
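# the following dd was run five times; the summary line of each run is shown below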
dd if=/dev/zero bs=1M of=/tmp/loop0/file oflag=sync count=1000
1048576000 bytes (1.0 GB) copied, 8.9605 s, 117 MB/s
1048576000 bytes (1.0 GB) copied, 6.52867 s, 161 MB/s
1048576000 bytes (1.0 GB) copied, 5.35716 s, 196 MB/s
1048576000 bytes (1.0 GB) copied, 5.48745 s, 191 MB/s
1048576000 bytes (1.0 GB) copied, 5.14736 s, 204 MB/s
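
The test then continues on a filesystem inside a loop device. As a sketch of the next step (the mount point and the ext4 choice are assumptions here, and count has to stay below the 1000 MB capacity):

losetup /dev/loop0 /tmp/loop0/file   # the file just written becomes the loop backing store
mkfs.ext4 /dev/loop0
mkdir -p /mnt/loop0 && mount /dev/loop0 /mnt/loop0
dd if=/dev/zero bs=1M of=/mnt/loop0/file oflag=sync count=900
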

(more…)

Investigating DBV-00201: Block, DBA number, marked corrupt for invalid redo application


More or less forced 😉 by the procedures discussed in my last post (Rman duplicating an oracle 11g database in active state on the same host), this post will investigate how to verify and fix data corruption induced by RMAN trying to restore nologging objects. Indeed, we know there’s no actual restore of nologging objects, since there’s no cold or hot redo to process. But what does this mean in practice? I just learned it myself. Let’s have a look.
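
To see how such "corruption" arises in the first place, consider this sketch (table name and row count invented): a direct-path load into a nologging table generates no usable redo, so recovering through that load from an older backup leaves the affected blocks marked invalid.

sqlplus -s / as sysdba <<'EOF'
create table t_nolog (x number) nologging;
-- direct-path insert: with NOLOGGING, no redo for the data blocks is written
insert /*+ append */ into t_nolog
  select level from dual connect by level <= 100000;
commit;
-- after a restore/recover from a backup taken before this load,
-- reading t_nolog raises ORA-01578 and ORA-26040
select count(*) from t_nolog;
EOF
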

OK, the duplicated database (previous post) contains some (partitioned) tables that have been set to nologging for performance reasons (see: Oracle global temp tabs or nologging/append saved redo in numbers). I won’t explain the why here, that’s another subject. However, the first post-duplication database backup showed up with block corruption errors in Quest’s Backup Reporter for Oracle Community. Today I examined the overall database integrity with an RMAN validate and verified the affected tablespace, but RMAN list failure, which consults the Data Recovery Advisor under the covers, did not seem to have anything to complain about. We can see below that 10 blocks have been marked corrupt, although the file check status is OK.

RMAN> validate check logical database;
Starting validate at 22.07.2016-08:55:54
allocated channel: ORA_DISK_1
...
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
31   OK     10             58357        557056          8784584429990
  File Name: H:\ORACLE\SAN_4\ORADATA\UTL_01.DBF
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              265313
  Index      0              223903
  Other      0              9483
...
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE       OK     0              2
Control File OK     0              778
Finished validate at 22.07.2016-08:59:46

RMAN> list failure all;
no failures found that match specification
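
The blocks that validate marked do show up in the corruption view it populates, so querying it is the obvious next step (the view and its columns are standard; for this scenario the corruption_type should read NOLOGGING):

sqlplus -s / as sysdba <<'EOF'
set linesize 120 pagesize 100
-- blocks recorded as corrupt by the last RMAN validate / backup
select file#, block#, blocks, corruption_change#, corruption_type
from   v$database_block_corruption
order  by file#, block#;
EOF
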

(more…)