Some day, you may encounter the following error message in your standby database alert log (seen on a 10.2.0.3):
RFS: No standby redo logfiles available
The most probable reason for this message is that you have cloned your standby database with
rman and configured your dataguard environment to write redo directly into the standby redo logs (using
LGWR), instead of just transferring the archived logs from the primary (using
ARCH), but have forgotten to provide the necessary standby redo log files.
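If that is the cause, the fix is to add standby redo log files on the standby. A rough sketch (the group numbers, file paths, and the 100M size are assumptions; a common rule of thumb is one more standby log group per thread than you have online log groups, each at least as large as the largest online redo log):

```sql
-- On the standby: add standby redo log groups.
-- Paths and sizes below are examples; match your own layout
-- and the online redo log size of the primary.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/u01/oradata/STBY/srl04.log') SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  ('/u01/oradata/STBY/srl05.log') SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
  ('/u01/oradata/STBY/srl06.log') SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
  ('/u01/oradata/STBY/srl07.log') SIZE 100M;
```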
RFS may also log
RFS: Unable to open standby log 9: 313 or something similar (the primary will also complain about connection problems to the standby destination target, which is actually the standby redo log).
RFS: No standby redo logfiles available of size 104857600 bytes
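The size in that message is a hint: the standby redo logs must be at least as large as the largest online redo log shipping redo to them. A quick sanity check, comparing sizes on both sides (a sketch using the standard v$ views):

```sql
-- On the primary: online redo log sizes
-- (104857600 bytes = 100 MB)
SELECT group#, thread#, bytes FROM v$log;

-- On the standby: standby redo logs, if any exist at all
SELECT group#, thread#, bytes, status FROM v$standby_log;
```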
I was wondering what would happen if an archivelog gap really occurred in a dataguard environment. How do you resolve it? What has to be done manually, and what happens automatically thanks to fetch archive log (FAL), for example?
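For reference, automatic gap fetching depends on the FAL parameters being set on the standby, and a detected gap shows up in v$archive_gap. A minimal sketch (the net service names PRIM and STBY are assumptions for this setup):

```sql
-- On the standby: FAL settings used for automatic gap resolution.
-- fal_server points to the primary, fal_client names this standby.
ALTER SYSTEM SET fal_server = 'PRIM';
ALTER SYSTEM SET fal_client = 'STBY';

-- On the standby: show any currently detected archivelog gap
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
```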
Well, my attempt at reproducing the problem goes like this:
- verify normal operation by performing manual log switches on the primary and watching what happens in the alert.log on primary and standby
- defer the remote log destination (usually #2) on the primary
- perform further manual log switches on the primary, which now do not get shipped to the standby
- back up the primary, removing all backed-up archive logs from the recovery destination
- bounce primary and standby
- re-enable the remote log destination (usually #2) on the primary, perform another round of manual log switches on the primary, and watch what happens in the alert.log on primary and standby
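The steps above might translate into roughly the following commands (the destination number 2 and the exact syntax are assumptions based on a typical setup):

```sql
-- 1) On the primary: manual log switch, then watch both alert.logs
ALTER SYSTEM SWITCH LOGFILE;

-- 2) Defer the remote log destination so redo is no longer shipped
ALTER SYSTEM SET log_archive_dest_state_2 = 'DEFER';

-- 3) More switches; these sequences now stay on the primary only
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;

-- 4) In RMAN (not SQL*Plus): back up and delete the archived logs,
--    which creates the gap once the standby asks for them:
--      BACKUP ARCHIVELOG ALL DELETE INPUT;

-- 5) After bouncing primary and standby, re-enable shipping
ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE';
ALTER SYSTEM SWITCH LOGFILE;
```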
Ok, let’s follow the alert.log on primary and standby …