Installation

Oracle Linux 7.x reboot systemd swap target timeout workaround


There is currently an issue around for Red Hat based systems, Oracle Linux 7.3 here, running systemd version 219-30.0.1, where a shutdown or reboot seemingly hangs on a failed swap unit (deallocation). A lot of posts on the net discuss the issue. After switching to Ubuntu 15.04 laptop won’t shutdown suggested the workaround that worked for me, doing a swapoff/swapon bounce in advance, referencing reboot hangs at ‘Reached target Shutdown’. Systemd: Hangs indefinitely on >90% of reboot attempts comprises an in-depth analysis.
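
As a minimal sketch of that workaround, assuming the swap areas on the box can be dropped and re-added on the fly, the bounce right before rebooting boils down to the following commands (the comments are my reading, not taken from any of the posts above):

# drop all swap areas while the system is still fully up,
# so the swap unit has little left to deallocate on shutdown
swapoff -a
# re-enable swap as configured in /etc/fstab
swapon -a
# then reboot as usual
systemctl reboot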

I’m not sure, though, whether tmp.mount.hm4: After swap.target is really related, because the change it implements is already present on my systems (in systemd). I much more tend to suspect widespread storage fragmentation on the swap area to cause the relatively long swapoff run-times. In addition, I almost only experienced the issue on systems that had been up for a larger number of days, say from 90 days onwards.

(more…)


Deprecation announcement of Oracle Restart with 12c withdrawn


As spotted on Bjoern Rost’s blog, commented on by Trap today, Oracle has obviously backed down on the deprecation announcement of Oracle Restart with 12c. On Metalink, see:

Withdrawn: Deprecation Announcement of Oracle Restart with Oracle Database 12c (Doc ID 1584742.1)

This is good news, accepted with delight, since we no longer need to turn back the hands of time to the nifty-oracle-bounce-handicraft-scripts era. In fact, I wonder how many DBAs are already comfortable with systemd service registration. I suppose a lot of DBAs would have resorted back to the dusty sysv configurations, using the systemd sysv compatibility engine, which is a bit like retrofitting a key starter to a car that already comes with wireless keying and a start button. Look around: the major share of on-premise Oracle database installations is still single instance, compared to RAC and even 12c containers.
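
Just to illustrate what such a systemd service registration may look like, here is a minimal sketch of a native unit for a single instance database, leaning on the classic dbstart/dbshut wrappers; the unit name, paths and ORACLE_HOME are assumptions of mine, not a tested or recommended setup:

# /etc/systemd/system/oracle-db.service (illustrative sketch only)
[Unit]
Description=Oracle Database single instance
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=oracle
Environment=ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
ExecStart=/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/dbstart $ORACLE_HOME
ExecStop=/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/dbshut $ORACLE_HOME

[Install]
WantedBy=multi-user.target

Registering and starting it would then be a plain systemctl enable oracle-db.service followed by systemctl start oracle-db.service, no sysv compatibility engine involved.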

Have fun, Peter

Some irritation due to extended output of systemd / systemctl status for sysv services


Bad is an unpleasant word, right? “On” or “off” implies some final statement, and “failed” may signal something wrong but at least terminated. But “bad”… uuuhh, blameworthy, guilty, unaccountable, still being around. Ok, before diving into linguistic depression: the change eventually turned out to be simple and was actually made in good faith, but it nevertheless produced remarkable irritation. You know, systemctl status {service} shows an overview of a systemd unit definition with load state, current activity and so on. The load state, in particular, details, in parentheses, the path of the unit file, the enablement and the vendor enablement preset, respectively. Original systemd units may give a load state as follows:

Loaded: loaded (/usr/lib/systemd/system/atop.service; enabled; vendor preset: disabled)

However, systemd units that have just been derived from sysv init scripts only printed the init script path until lately:

Loaded: loaded (/etc/rc.d/init.d/sysv-thing)

The new, irritating factor is an extension for those derived sysv init scripts to also state the enablement, which, however, shows up as “bad” for the running enablement for whatever weird reason:

Loaded: loaded (/etc/rc.d/init.d/sysv-thing; bad; vendor preset: disabled)
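
By the way, if you want to cross-check what the classic runlevel configuration says for such a generated unit, a quick comparison could look like this (sysv-thing taken over from above, the rest is just an illustration):

# what systemd reports for the generated unit
systemctl is-enabled sysv-thing            # may well answer "bad" here
# what the sysv runlevel links actually say
chkconfig --list sysv-thing
ls /etc/rc.d/rc3.d/ | grep -i sysv-thing   # S.. links mean enabled in sysv terms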

(more…)

Splitting a btrfs 4.1x root partition with a Debian live system, gparted and btrfs tools 3.17-x


This is a short picture log of splitting a btrfs 4.1x root partition on a (down) Oracle Linux 7.2 using a Debian live system, applying gparted based on btrfs tools 3.17-x. Lots of names and version codes, right? But this is what matters. The important message is: it works using these flavours.
Actually, running Oracle or Red Hat Linux as the live system may have been much more appropriate for compatibility reasons. The odd thing is, no Red Hat based (enterprise) Linux system features gparted. Only Fedora does, sourcing the EPEL repository, but it does not offer the kind of live system release that Debian does.
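
For orientation, what gparted does in that picture log corresponds roughly to the following command line steps from the live system; device names and sizes are placeholders of mine, the post itself lets gparted drive this interactively:

# shrink the btrfs filesystem first, online, i.e. while mounted
mount /dev/sda2 /mnt
btrfs filesystem resize -20G /mnt
umount /mnt
# then shrink the partition and create the new one in the freed space
parted /dev/sda resizepart 2 30GB
parted /dev/sda mkpart primary 30GB 50GB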

(more…)

Investigating DBV-00201: Block, DBA number, marked corrupt for invalid redo application


More or less forced 😉 by the procedures discussed in my last post (Rman duplicating an oracle 11g database in active state on the same host), this post will investigate how to verify and fix data corruption induced by RMan trying to restore nologging objects. Indeed, we know there’s no actual restore of nologging objects, since there’s no cold or hot redo to process. But what does this mean in practice? Me, I have learned now. Let’s have a look.

Ok, there are some (partitioned) tables in the duplicated database (previous post) that, for performance reasons (see: Oracle global temp tabs or nologgingappend saved redo in numbers), have been set nologging. I won’t explain the why here, that’s another subject. However, the first post-duplicate database backup showed up with block corruption errors in Quest’s Backup Reporter for Oracle Community. Today I examined the overall database integrity status with an RMan validate and verified the affected tablespace, but RMan list failure, asking Data Recovery Advisor under the covers, did not seem to complain about anything. We see below that 10 blocks have been marked corrupt but the file check status is OK though.

RMAN> validate check logical database;
Starting validate at 22.07.2016-08:55:54
allocated channel: ORA_DISK_1
...
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
31   OK     10             58357        557056          8784584429990
  File Name: H:\ORACLE\SAN_4\ORADATA\UTL_01.DBF
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              265313
  Index      0              223903
  Other      0              9483
...
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE       OK     0              2
Control File OK     0              778
Finished validate at 22.07.2016-08:59:46

RMAN> list failure all;
no failures found that match specification
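
To find out which segments those ten marked blocks actually belong to, the usual next step, not shown in the output above, is to query v$database_block_corruption and map the blocks onto dba_extents; a rough sketch, with the nologging case typically showing up as corruption_type = 'NOLOGGING':

SQL> select file#, block#, blocks, corruption_type
  2  from   v$database_block_corruption;

SQL> select distinct e.owner, e.segment_name, e.segment_type
  2  from   dba_extents e, v$database_block_corruption c
  3  where  e.file_id = c.file#
  4  and    c.block# between e.block_id and e.block_id + e.blocks - 1;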

(more…)