A method of adding partitions to maxvalue range partitioned tables

introduction

this article discusses a selection of methods, as well as their prerequisites and consequences, for adding a partition to the high end of an oracle range partitioned table where the high end is denoted by maxvalue. it also addresses questions arising when the partitioned table contains storage-optimized lob-columns and a text-domain-index and, for size and failover reasons, every table partition and several index partitions reside in dedicated tablespaces.

range partitioned tables and add partition

taken literally, adding a partition to the high end of an oracle range partitioned table should easily be possible with the alter table <table> add partition ... statement. the common issue with alter table <table> add partition ... is, however, that the statement will only succeed if the high end is a constant, a fixed value, be it a number or a date or whatever. if you try to add a partition to a table that has an upper limit of maxvalue, you will receive the following error:

ORA-14074: partition bound must collate higher than that of the last partition
Cause: Partition bound specified in ALTER TABLE ADD PARTITION statement did not collate higher than that of the table’s last partition, which is illegal.
Action: Ensure that the partition bound of the partition to be added collates higher than that of the table’s last partition.
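
for illustration, a minimal sketch, assuming a hypothetical table t with a maxvalue high end:

-- a hypothetical table t with a maxvalue high-end partition
create table t (ts date)
partition by range (ts)
( partition p_2003 values less than (to_date('01.01.2004', 'dd.mm.yyyy')),
  partition p_max values less than (maxvalue))
/

-- no bound can collate higher than maxvalue, so this raises ORA-14074
alter table t add partition p_2004
  values less than (to_date('01.01.2005', 'dd.mm.yyyy'))
/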

the point goes without saying. you cannot specify a new upper limit that is larger than maxvalue, which is infinite by definition.

real-world business hardly allows estimating a meaningful fixed value for the high end of a range partitioned table in advance. business rules may change quite often, such that an immediate adjustment of a fixed high-end value may pose considerable effort or may even be impossible.

if, on the other hand, one uses maxvalue to denote the partitioned table's high end, what other methods are available to evolve a partitioning scheme, a time windowing technique for example, and what are the prerequisites and consequences of such a declaration? the following presents a fully functional test case that simulates the execution of a split partition action upon a rather complex table layout with lob-columns, a text-domain-index and partition-related tablespaces. quite a couple of views are provided to follow each and every change in terms of table- and index-storage and object operation validity. just playing around with the test case should smoothly introduce a way to some dedicated production site implementation.

the split partition approach

the only way to have both an upper limit of maxvalue and nonetheless apply some add partition logic is to employ the alter table <table> split partition ... statement. assuming some date-oriented range partitioning scheme, or better, a time windowing technique, the current working partition will be split into the latest archive partition and the next working partition. see the following figure for illustration.

[figure: partitioning scheme before/after split operation]
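
in its bare form, the pseudo-add reads like the following minimal sketch (again the hypothetical table t from above; oracle allows reusing the name of the partition being split for one of the resulting partitions, the test case below renames it beforehand instead):

-- split the maxvalue partition at the new boundary: the first resulting
-- partition becomes the latest archive, the second one keeps maxvalue
-- as the next working partition
alter table t split partition p_max
  at (to_date('01.01.2005', 'dd.mm.yyyy'))
  into (partition p_2004, partition p_max)
/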

the point arising with alter table <table> split partition ... is the necessity to rebuild any global or local index objects concerning the table as a whole or the affected partition, respectively. oracle obviously pays attention to the case where the next working partition will not contain any data and none of the index objects actually needs a rebuild. the 10gR2 documentation states in Optimizing SPLIT PARTITION and SPLIT SUBPARTITION Operations:

Sometimes after a split operation, one of the new partitions contains all of the rows from the partition being split, while the other partition contains no rows. This is often the case when splitting the first partition of a table. The database can detect such situations and can optimize the split operation. This optimization results in a fast split operation that behaves like an add partition operation.

as with many sorts of intelligent optimizations of oracle administration tasks, the above statement requires quite a couple of conditions to be met:

* One of the two resulting partitions must be empty.
* The non-empty resulting partition must have storage characteristics identical to those of the partition being split. Specifically:
– If the partition being split is composite, then the storage characteristics of each subpartition in the new non-empty resulting partition must be identical to those of the subpartitions of the partition being split.
– If the partition being split contains a LOB column, then the storage characteristics of each LOB (sub)partition in the new non-empty resulting partition must be identical to those of the LOB (sub)partitions of the partition being split.
– If a partition of an index-organized table with overflow is being split, then the storage characteristics of each overflow (sub)partition in the new nonempty resulting partition must be identical to those of the overflow (sub)partitions of the partition being split.
– If a partition of an index-organized table with a mapping table is being split, then the storage characteristics of each mapping table (sub)partition in the new nonempty resulting partition must be identical to those of the mapping table (sub)partitions of the partition being split.

hmm, i would add that it will not work either when there is a local text-domain-index around. but then, the documentation continues:

If these conditions are met after the split, then all global indexes remain usable, even if you did not specify the UPDATE INDEXES clause. Local index (sub)partitions associated with both resulting partitions remain usable if they were usable before the split. Local index (sub)partition(s) corresponding to the non-empty resulting partition will be identical to the local index (sub)partition(s) of the partition that was split.

ok, as far as one can see here, the environment mentioned in the introduction has to deal with lob-columns, a text-domain-index and different, evolving storage options as well. bad luck, no automatic optimization possible.
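
whether the fast split optimization has a chance at all can be checked upfront by making sure the partition to be split is empty; a minimal sketch, again for the hypothetical table t:

-- the high-end partition must not contain any rows for the fast split
select count(*) as cnt from t partition (p_max)
/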

the split partition approach in practice

the following presents a test case of an approach employed in practice lately. it contains all the stumbling blocks usually diametrically opposed to easy-going oracle administration and is directly derived from the requirements of a customer site implementation.

the basic idea is to have some next working partition already at hand, be it for the next but one month or quarter or year or whatever. that is, one splits an empty and quiet partition to minimize operation resource and time efforts. also, one may benefit from avoiding blocked access to the current working partition along business hours, which last for 24×7 these days.
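
a minimal sketch of how one might keep the next working partition at hand ahead of time, assuming a hypothetical packaged procedure my_part_maint.split_working_partition that wraps the rename/split/rebuild sequence shown below into a yearly scheduler job:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'SPLIT_NEXT_WORKING_PART',
    job_type        => 'PLSQL_BLOCK',
    -- my_part_maint.split_working_partition is a hypothetical wrapper
    -- around the rename/split/rebuild statements of this article
    job_action      => 'begin my_part_maint.split_working_partition; end;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=YEARLY; BYMONTH=12; BYMONTHDAY=15',
    enabled         => TRUE,
    comments        => 'split the quiet next working partition ahead of time');
END;
/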

the test case scenario may be sketched like this:

  • set up a date range partitioned test table with a yearly time window where every partition resides in its own tablespace, explicitly specifying the tablespace storage options for the comprised lob-columns
  • create a local primary key and a global unique key constraint (and index)
  • populate the table with test data
  • create a local text-domain-index that in turn has its own tablespace storage options for the secondary index tables
  • do the split partition, the pseudo-add, and any necessary index rebuilds and continuously keep an eye on the ongoing storage option and index status changes using provided views
  • run the test case for an empty or a non-empty next working partition

the test case set up

there are some database objects to be set up in advance, specifically a couple of tablespaces and text-domain-index preferences.

-- have some test tbs to represent the different storage options
create tablespace tbs_test_1
  datafile 'c:\temp\tbs_test_1_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/

create tablespace tbs_test_2
  datafile 'c:\temp\tbs_test_2_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/

create tablespace tbs_test_3
  datafile 'c:\temp\tbs_test_3_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/

create tablespace tbs_test_4
  datafile 'c:\temp\tbs_test_4_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/

create tablespace tbs_test_1_ctx
  datafile 'c:\temp\tbs_test_1_ctx_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/

create tablespace tbs_test_2_ctx
  datafile 'c:\temp\tbs_test_2_ctx_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/

create tablespace tbs_test_3_ctx
  datafile 'c:\temp\tbs_test_3_ctx_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/

create tablespace tbs_test_4_ctx
  datafile 'c:\temp\tbs_test_4_ctx_01.dbf'
    size 128m autoextend on next 128m maxsize 2048m
  extent management local autoallocate segment space management auto
/
-- these are the storage options for the ctx index secondary tables
BEGIN
  CTX_DDL.CREATE_PREFERENCE ('CTX_STORAGE_TEST_1', 'BASIC_STORAGE');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_1', 'I_TABLE_CLAUSE',
    'TABLESPACE tbs_test_1_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_1', 'K_TABLE_CLAUSE',
    'TABLESPACE tbs_test_1_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_1', 'R_TABLE_CLAUSE',
    'TABLESPACE tbs_test_1_ctx pctfree 5 STORAGE (pctincrease 0) LOB (DATA) STORE AS (CACHE)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_1', 'N_TABLE_CLAUSE',
    'TABLESPACE tbs_test_1_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_1', 'I_INDEX_CLAUSE',
    'TABLESPACE tbs_test_1_ctx STORAGE (pctincrease 0) COMPRESS 2');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_1', 'P_TABLE_CLAUSE',
    'TABLESPACE tbs_test_1_ctx pctfree 5 STORAGE (pctincrease 0)');

  CTX_DDL.CREATE_PREFERENCE ('CTX_STORAGE_TEST_2', 'BASIC_STORAGE');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_2', 'I_TABLE_CLAUSE',
    'TABLESPACE tbs_test_2_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_2', 'K_TABLE_CLAUSE',
    'TABLESPACE tbs_test_2_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_2', 'R_TABLE_CLAUSE',
    'TABLESPACE tbs_test_2_ctx pctfree 5 STORAGE (pctincrease 0) LOB (DATA) STORE AS (CACHE)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_2', 'N_TABLE_CLAUSE',
    'TABLESPACE tbs_test_2_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_2', 'I_INDEX_CLAUSE',
    'TABLESPACE tbs_test_2_ctx STORAGE (pctincrease 0) COMPRESS 2');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_2', 'P_TABLE_CLAUSE',
    'TABLESPACE tbs_test_2_ctx pctfree 5 STORAGE (pctincrease 0)');

  CTX_DDL.CREATE_PREFERENCE ('CTX_STORAGE_TEST_3', 'BASIC_STORAGE');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_3', 'I_TABLE_CLAUSE',
    'TABLESPACE tbs_test_3_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_3', 'K_TABLE_CLAUSE',
    'TABLESPACE tbs_test_3_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_3', 'R_TABLE_CLAUSE',
    'TABLESPACE tbs_test_3_ctx pctfree 5 STORAGE (pctincrease 0) LOB (DATA) STORE AS (CACHE)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_3', 'N_TABLE_CLAUSE',
    'TABLESPACE tbs_test_3_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_3', 'I_INDEX_CLAUSE',
    'TABLESPACE tbs_test_3_ctx STORAGE (pctincrease 0) COMPRESS 2');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_3', 'P_TABLE_CLAUSE',
    'TABLESPACE tbs_test_3_ctx pctfree 5 STORAGE (pctincrease 0)');

  CTX_DDL.CREATE_PREFERENCE ('CTX_STORAGE_TEST_4', 'BASIC_STORAGE');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_4', 'I_TABLE_CLAUSE',
    'TABLESPACE tbs_test_4_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_4', 'K_TABLE_CLAUSE',
    'TABLESPACE tbs_test_4_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_4', 'R_TABLE_CLAUSE',
    'TABLESPACE tbs_test_4_ctx pctfree 5 STORAGE (pctincrease 0) LOB (DATA) STORE AS (CACHE)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_4', 'N_TABLE_CLAUSE',
    'TABLESPACE tbs_test_4_ctx pctfree 5 STORAGE (pctincrease 0)');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_4', 'I_INDEX_CLAUSE',
    'TABLESPACE tbs_test_4_ctx STORAGE (pctincrease 0) COMPRESS 2');
  CTX_DDL.SET_ATTRIBUTE ('CTX_STORAGE_TEST_4', 'P_TABLE_CLAUSE',
    'TABLESPACE tbs_test_4_ctx pctfree 5 STORAGE (pctincrease 0)');
end;
/
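
to verify the preferences and their attribute values have been recorded, one may query the ctx dictionary views (a quick check, not strictly required for the test case):

-- verify the ctx storage preferences and attribute values
select pre_name, pre_class, pre_object from ctx_user_preferences
  where pre_name like 'CTX_STORAGE_TEST%'
/
select prv_preference, prv_attribute, prv_value from ctx_user_preference_values
  where prv_preference like 'CTX_STORAGE_TEST%'
/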

also, some views have to be defined to easily look up the current operation status.

-- have some information collected and easily being hosted in views
create or replace force view v_test_tab_partitions
( table_name, partition_name, partition_position, high_value, low_value,
  tablespace_name)
as
select A.table_name, A.partition_name, A.partition_position, A.high_value,
  B.high_value as low_value, A.tablespace_name
from user_tab_partitions A, user_tab_partitions B
  where A.table_name = B.table_name(+)
    and (A.partition_position-1) = B.partition_position(+)
order by A.table_name, A.partition_position desc
/

create or replace force view v_test_lob_partitions
( table_name, column_name, partition_name, cache, logging, chunk, in_row,
  pctversion, tablespace_name, lob_name, m_bytes, extents)
as
select A.table_name, A.column_name, A.partition_name, A.cache, A.logging, A.chunk, A.in_row,
  A.pctversion, B.tablespace_name, A.lob_name, B.bytes/1024/1024 as m_bytes, B.extents
from user_lob_partitions A, user_segments B
  where B.partition_name is not null
    and A.lob_name = B.segment_name and A.lob_partition_name = B.partition_name
order by A.table_name, A.column_name
/

create or replace view v_test_index_stat
as
select A.table_name, A.index_name, A.partitioned as parted, B.partitioning_type as part_type,
  B.partition_count as part_cnt, A.index_type, A.ityp_name, B.alignment, A.status,
  A.domidx_status as di_status, A.domidx_opstatus as di_opstatus, A.tablespace_name as tbs_name,
  B.def_tablespace_name as def_tbs_name
from user_indexes A, user_part_indexes B
  where A.table_name = B.table_name(+) and A.index_name = B.index_name(+)
order by A.table_name, A.index_name
/

create or replace view v_test_index_stat_part
as
select index_name, partition_name as part_name, status, domidx_opstatus as di_opstatus,
  tablespace_name as tbs_name
from user_ind_partitions
order by index_name, partition_name
/

create or replace force view v_test_ctx_index_status
as
select USERNAME, USER_ID, idx_name as IDX_NAME, ityp_name as IDX_TYPE, PART_NAME,
  idx_docid_count as DOCID_COUNT, idx_opt_token as OPT_TOKEN, idx_opt_type as OPT_TYPE,
  idx_opt_count as OPT_COUNT, CTX_STATUS, SYS_STATUS, PART_STATUS
from (
  select C.username, A.idx_owner# as user_id, A.idx_name, B.ityp_name, 'N/A' as part_name,
    A.idx_docid_count, A.idx_opt_token, A.idx_opt_type, A.idx_opt_count, A.idx_status as ctx_status,
    B.status || '/' || B.domidx_status || '/' || b.domidx_opstatus as sys_status,
    'N/A' as part_status
  from ctxsys.dr$index A, all_indexes B, all_users C
    where B.index_type = 'DOMAIN' AND B.ityp_name in ('CONTEXT', 'CTXRULE') and B.partitioned = 'NO'
      and C.user_id = A.idx_owner# and C.username = B.owner and A.idx_name = B.index_name
union
  select C.username, A.idx_owner# as user_id, A.idx_name, B.ityp_name, D.ixp_name as part_name,
    D.ixp_docid_count, D.ixp_opt_token, D.ixp_opt_type, D.ixp_opt_count, D.ixp_status as ctx_status,
    B.status || '/' || B.domidx_status || '/' || B.domidx_opstatus as sys_status,
    E.status || '/' || E.domidx_opstatus as part_status
  from ctxsys.dr$index A, all_indexes B, all_users C, ctxsys.dr$index_partition D, all_ind_partitions E
    where B.index_type = 'DOMAIN' AND B.ityp_name in ('CONTEXT', 'CTXRULE') and B.partitioned = 'YES'
      and C.user_id = A.idx_owner# and C.username = B.owner and A.idx_name = B.index_name
      and D.ixp_idx_id = A.idx_id
      and E.index_owner = C.username and E.index_name = A.idx_name and E.partition_name = D.ixp_name
  ) where user_id = userenv('SCHEMAID')
order by username, idx_name, part_name
/

create or replace view v_test_CTX_INDEX_Storage
as
select table_name, view_type, obj_name, obj_type, tablespace_name, partitioned
from (
  select table_name, 'CTX 2nd TABLE' as view_type, table_name as obj_name, iot_type as obj_type,
    tablespace_name, partitioned
  from all_tables
  union
  select table_name, 'CTX 2nd BTREE- OR IOT-INDEX' as view_type, index_name as obj_name, index_type as obj_type,
    tablespace_name, partitioned
  from all_indexes)
order by table_name, obj_name
/

setting up the test table and basic indexes gets the test case scenario going.

-- test case make clean
drop TABLE test
/

-- test case table and text index set up
CREATE TABLE test
( ts DATE DEFAULT sysdate NOT NULL,
  id NUMBER NOT NULL,
  n1 number not null,
  cl1 CLOB,
  cl2 CLOB)
PARTITION BY RANGE(ts)
( PARTITION test_1 VALUES LESS THAN (to_date('01.01.2003', 'dd.mm.yyyy'))
  TABLESPACE tbs_test_1
    lob (cl1) store as (tablespace tbs_test_1)
    lob (cl2) store as (tablespace tbs_test_1 cache reads),
  PARTITION test_2 VALUES LESS THAN (to_date('01.01.2004', 'dd.mm.yyyy'))
  TABLESPACE tbs_test_2
    lob (cl1) store as (tablespace tbs_test_2)
    lob (cl2) store as (tablespace tbs_test_2 cache reads),
  PARTITION test_3 VALUES LESS THAN (MAXVALUE)
  TABLESPACE tbs_test_3
    lob (cl1) store as (tablespace tbs_test_3)
    lob (cl2) store as (tablespace tbs_test_3 cache reads))
enable row movement
/

alter table test add (
constraint test_pk primary key (ts, id)
  not deferrable initially immediate using index tablespace users local enable)
/

alter table test add (
constraint test_uq_pk unique (id)
  not deferrable initially immediate using index tablespace users global enable)
/

populate some data and create the ctx-text-index, bearing in mind that the high-end partition of the table shall not contain any data this time.

-- populate some data
insert into test (ts, id, n1, cl1, cl2)
  values (to_date('01.10.2002', 'dd.mm.yyyy'), 1, 1, 'a1', 'b1');
insert into test (ts, id, n1, cl1, cl2)
  values (to_date('01.10.2003', 'dd.mm.yyyy'), 2, 1, 'a2', 'b2');
commit;

-- create the ctx-text-index
CREATE INDEX CTX_test ON test (cl1) INDEXTYPE IS ctxsys.context
LOCAL
( PARTITION test_1 PARAMETERS ('STORAGE CTX_STORAGE_TEST_1'),
  PARTITION test_2 PARAMETERS ('STORAGE CTX_STORAGE_TEST_2'),
  PARTITION test_3 PARAMETERS ('STORAGE CTX_STORAGE_TEST_3'));

-- check for any ctx-text-indexing errors and execute a test select
select count(*) as cnt from ctx_user_index_errors;
select count(*) as cnt from test where contains(cl1, 'a%')>0;

next follows a first review of the operation status using the views defined above. main attention should be paid to the current storage options of tables and to the indexes with their different statuses.

-- monitor the table storage options
select * from v_test_tab_partitions where table_name = 'TEST';

[figure: add-part-fig2]

-- monitor the table lobs storage options
select * from v_test_lob_partitions where table_name = 'TEST';

[figure: add-part-fig3]

-- monitor the indexes storage options
select * from v_test_index_stat where table_name = 'TEST' and index_type != 'LOB';

[figure: add-part-fig4]

-- monitor the partitioned indexes usability status
select * from v_test_index_stat_part where index_name like 'TEST%';

[figure: add-part-fig5]

-- monitor the ctx-index secondary table and index storage options
select * from v_test_CTX_INDEX_Storage where table_name like '%CTX_TEST%';

[figure: add-part-fig6]

-- monitor the partitioned ctx-text-index document count and status
select idx_name, part_name, docid_count, ctx_status, sys_status, part_status
from v_test_CTX_INDEX_STATUS where idx_name = 'CTX_TEST';

[figure: add-part-fig7]

this is where the actual alter table <table> split partition ... statement will be executed. please note the dedicated table and lob storage options given with the statement.

-- prepare for an easy renaming in the split below
alter table test rename partition test_3 to test_3_tmp;
alter index test_pk rename partition test_3 to test_3_tmp;
alter index CTX_test rename partition test_3 to test_3_tmp;
-- do the split, regard the clob storage settings as well
alter table test split partition test_3_tmp
  at (to_date('01.01.2005', 'dd.mm.yyyy'))
  into (
    partition test_3 TABLESPACE tbs_test_3
      lob (cl1) store as (tablespace tbs_test_3)
      lob (cl2) store as (tablespace tbs_test_3 cache reads),
    partition test_4 TABLESPACE tbs_test_4
      lob (cl1) store as (tablespace tbs_test_4)
      lob (cl2) store as (tablespace tbs_test_4 cache reads));

after executing the alter table <table> split partition ... statement you will find every affected index status valid, as desired. however, the storage options of the text-domain-index secondary tables and indexes, that is the DR#-objects, do not meet the demands given in the text-domain-index preferences above. they reside in the user's default tablespace (see the tablespace_name column for the DR#CTX_TEST003- and DR#CTX_TEST004-objects in the figure below).

[figure: add-part-fig8]

the two affected text-domain-index partitions have to be rebuilt, which is quite an efficient operation here because no base table data has to be processed so far.

-- rebuild the participating ctx-index partitions to reset their storage preferences
alter index ctx_test rebuild partition test_3
  PARAMETERS ('REPLACE STORAGE CTX_STORAGE_TEST_3');
alter index ctx_test rebuild partition test_4
  PARAMETERS ('REPLACE STORAGE CTX_STORAGE_TEST_4');
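
to confirm the rebuild took effect, one may re-run the storage view from above; the DR#CTX_TEST003- and DR#CTX_TEST004-objects should now report their dedicated ctx tablespaces:

-- re-check the ctx-index secondary table and index storage options
select * from v_test_CTX_INDEX_Storage where table_name like '%CTX_TEST%';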

rebuilding the text-domain-index partitions also finishes the test case scenario for an empty next working partition.

now replay the test case scenario for a non-empty next working partition; you will find all related local index partitions as well as the global index becoming unusable. to rebuild these objects, execute the following:

-- first rebuild any global indexes on the table (regarding the omitted
-- UPDATE GLOBAL INDEXES clause with the split above) to re-enable global table dml
alter index test_uq_pk rebuild;
-- rebuild the unusable local indexes, affects only test_pk
alter table test modify partition test_3 rebuild unusable local indexes;
alter table test modify partition test_4 rebuild unusable local indexes;
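
a final cross-check: the following selects should not return any rows once all rebuilds have completed:

-- verify that no unusable indexes or index partitions remain
select * from v_test_index_stat where status = 'UNUSABLE';
select * from v_test_index_stat_part where status = 'UNUSABLE';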

conclusions

the split partition approach may work well and without much noise against the high end of an oracle range partitioned table as long as an empty next working partition is available. but even in that case, the resulting text-domain-index partitions have to be rebuilt to realize specific storage options for the secondary index tables.

the provided views may also be applicable to other investigations outside the presented test case. specifically v_test_ctx_index_status, with its highly compact layout, can be recommended for diagnosing all of a user's text-domain-index statuses in one spot.

have fun!

comments

  1. Very nice example and explanation!

    Frustrating that even when specifying the SAME tablespace for the TEXT index partition being split (the old MAXVALUE partition), the index has to be rebuilt when data resides in the MAXVALUE partition. This doesn't seem to be the case with non-TEXT index partitions though.

    1. Jep, true. When oracle text indexes (aka domain indexes) come into play, yet another sophisticated administration toolset ceases to work. dbms_redefinition is the most depressing example, for it claims that everything is ok at the check step but eventually fails at the sync step (because it uses materialized views under the covers that have no idea of domain indexes).
