backup / recovery

Undo retention, flashback query and ORA-01555 snapshot too old

Setting up a fault-aware database environment means accounting for both physical and logical error scenarios. On the physical side of the medal, you have Real Application Clusters or Data Guard at your disposal. The logical side usually comprises backup and replication. More recently, oracle and others introduced features that enable a database to recover from logical errors with little or no reliance on remote systems and data. Namely, flashback is a powerful technology to look back into (query), back out (transaction) or fully restore (database) the history of data. Flashback of course needs accompanying history (meta)data to do its job, namely, again, undo before-images, archived redo logs and flashback logs. So, essentially, the convenience of your journey into history depends on the amount of history metadata that is available to the database at the time of a logical error, say an inadvertent delete.

You may already have learned, heard about, or even painfully experienced that flashback logs may be purged from the recovery area in favour of archived redo logs, rendering your configured flashback retention time (DB_FLASHBACK_RETENTION_TARGET) a theoretical value. Sizing the file system recovery area is therefore a task to be performed in a clear-sighted manner. The same is true for undo before-images (compare to the logs) and the undo tablespace (compare to the area): it must accommodate enough data to meet your configured undo retention time (UNDO_RETENTION) in UNDO_MANAGEMENT=AUTO mode, while in addition still supporting long-running workload queries (so as not to overwrite old undo). This article discusses the usability of the (enterprise manager) undo advisor in sizing your undo tablespace to always foster successful flashback queries up to the configured undo retention time. The alternative is the nasty surprise of an ORA-01555 (see below), which is also misleading here, imho: it suggests that the undo tablespace is too small now, as we know it from workload queries. Nope, with regard to flashback queries it has been too small before now, in the past (also see: ora-01555 snapshot too old when running flashback query).
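The sizing rule of thumb the undo advisor applies can also be reproduced by hand: the space needed to honour UNDO_RETENTION is roughly the retention time multiplied by the peak undo generation rate and the block size. A minimal shell sketch, with hypothetical figures assumed for the undo rate and block size (on a live system you would derive the rate from v$undostat and the block size from db_block_size):

```shell
# All three inputs are assumptions for illustration, not measured values.
UNDO_RETENTION=10800        # configured retention target in seconds (3 hours)
UPS=100                     # peak undo blocks generated per second (assumed)
DB_BLOCK_SIZE=8192          # bytes per undo block (assumed default)

# required undo tablespace size = retention * undo rate * block size
UNDO_BYTES=$((UNDO_RETENTION * UPS * DB_BLOCK_SIZE))
echo "required undo tablespace: $UNDO_BYTES bytes (~$((UNDO_BYTES / 1024 / 1024 / 1024)) GB)"
```

With these sample figures the result is roughly 8 GB; the point is that the tablespace must be sized for the peak rate, not the average, or the retention target silently degrades.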


Investigating DBV-00201: Block, DBA number, marked corrupt for invalid redo application

More or less forced 😉 by the procedures discussed in my last post (Rman duplicating an oracle 11g database in active state on the same host), this post will investigate how to verify and fix data corruption induced by rman trying to restore nologging objects. Indeed, we know there's no actual restore of nologging objects, since there's no cold or hot redo to process. But what does this mean in practice? Me, I just learned it. Let's have a look.

Ok, there are some (partitioned) tables in the duplicated database (previous post) that have been set nologging for performance reasons (see: Oracle global temp tabs or nologgingappend saved redo in numbers). I won't explain the why here, that's another subject. However, the first post-duplicate database backup showed up with block corruption errors in Quest's Backup Reporter for Oracle Community. Today I examined the overall database integrity status with an rman validate and verified the affected tablespace, but rman list failures, asking the Data Recovery Advisor under the covers, did not seem to have anything to complain about. We see that 10 blocks have been marked corrupt, yet the file check status is OK nonetheless.

RMAN> validate check logical database;
Starting validate at 22.07.2016-08:55:54
allocated channel: ORA_DISK_1
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
31   OK     10             58357        557056          8784584429990
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              265313
  Index      0              223903
  Other      0              9483
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE       OK     0              2
Control File OK     0              778
Finished validate at 22.07.2016-08:59:46

RMAN> list failure all;
no failures found that match specification


Rman duplicating an oracle 11g database in active state on the same host

I already published a post about rman> duplicate... a couple of years ago (Rman duplicating an oracle database in a new sid two host scenario), still on 10g at that time and using a backup set being transferred to a new host. With 11g, however, rman> duplicate... offers another option: restoring not only from a source backup, leaving the source online, but from an up-and-running database (which requires archivelog mode and some rman-catalogued entries in the control file or catalogue nevertheless). In the following, therefore, I'm going to show the do's for an rman> duplicate ... from active database... in a same-host scenario, on windows again, using orapwd and oradim as well as lsnrctl this time. The main difference, however, is employing the spfile clause of the duplicate command, such that rman will set up the destination spfile on its own. Only some file name mappings, actually like before, need to be specified. My main reference to review the new features was Duplicating a Database from the oracle 11g Release 1 docs; other references, concerning errors that showed up along the way, will be given below.

Ok, working on the same host, nothing needs to be done for software installation and the like, and we can immediately set up the new instance (note that the source will be denoted tgt, for target, and the destination aux, for auxiliary, respectively). Firstly, we create a new password file for the destination, with the same sysdba password as on the source.

cd /d e:\oracle\product\11.2.0\dbhome_1\database
orapwd file=PWDAUX.ora ignorecase=y force=y


How to backup your images and attachments

This is about catching all the file content that is not contained in the export provided by wordpress for basic accounts (see the wordpress support pages for the differences between the com and org plans and in particular for the export options).

Essentially, hitting export from wp admin -> tools -> export creates an xml file in a so-called "WordPress eXtended RSS or WXR" format that may, and shall, contain all of your content and will in turn also comprise links to all of your files. Since (well-formed) xml is truly machine-readable, we may therefore extract those (https) links for backing up all the files.

There are a couple of options how to execute the link extraction and file grabbing. Me, I just use linux shell utilities for the ease of use in a small and simple call. However, while file grabbing is of course a dedicated wget job, link extraction can be done with grep or xmllint, whatever you prefer in terms of availability and effort. The difference is, basically, that grep will only succeed as long as the links, including the tags, do not span more than one line, because grep is line oriented, like so:

grep -oP '(?<=wp:attachment_url>)[^<]+' $BKP_FILE

xmllint, on the other hand, will always catch the text node successfully, no matter how many newlines surround the tags. Anyway, since this is not an issue currently, grep may be safely used and will be much faster for large image and attachment collections.

Having the wxr file at hand, we may proceed using a script as follows:

  • grep style file grabbing
    BKP_FILE=$1                      # path to the exported wxr file
    BKP_DIR=/home/.../Wordpress/`date +"%Y-%m-%d"`
    if [ -f $BKP_FILE ]; then
        mkdir -p $BKP_DIR
        cp $BKP_FILE $BKP_DIR
        cd $BKP_DIR
        grep -oP '(?<=wp:attachment_url>)[^<]+' $BKP_FILE | wget -xi -
    else
        echo "File not found"
    fi
  • xmllint style file grabbing
    # just exchange the grep line like so, a one-liner, may be wrapped here
    # takes one hack to read the namespaced tag and another to have lined output
    xmllint --xpath "//*[local-name()='attachment_url']/text()" <(sed 's/<\/wp:attachment_url>/\n<\/wp:attachment_url>/g' $BKP_FILE) | wget -xi -
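To sanity-check the extraction without touching a real export, the grep one-liner from the script above can be tried against a tiny hand-made WXR fragment (hypothetical sample data, not a real wordpress export):

```shell
# Build a minimal fake WXR fragment with two attachment links (assumed data).
cat > sample.xml <<'EOF'
<item>
  <wp:attachment_url>https://example.com/files/2016/07/pic1.png</wp:attachment_url>
</item>
<item>
  <wp:attachment_url>https://example.com/files/2016/07/doc1.pdf</wp:attachment_url>
</item>
EOF

# Extract the text between the attachment_url tags, one URL per line.
grep -oP '(?<=wp:attachment_url>)[^<]+' sample.xml
```

The two URLs come out one per line, which is exactly the shape wget -xi - expects on stdin.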

… did not know that I had already piled up so many files over at

Enjoy, Peter

Recovering disk failures that comprise undo segments

This is just a snapshot of a situation that happened quite a while ago on a 10g oracle database. I'm not going to dive into the details that much, just providing some selects to decipher the scenario and showing one of the probable ways of resolving the problem.

Yep, due to a disk failure, as I noticed during the analysis, some statements and jobs complained about datafiles being offline. An attempt to re-online one of the datafiles resulted in an ORA-00376 error as follows:

ERROR at line 1:
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: 'H:\app_ORACLE\SAN_3\ORADATA\appxyz\UNDOTBS03.DBF'

Looking up dba_data_files produced this output: