Mike Dean, Sr Staff Consultant
I recently wrote a blog post (http://dbspecialists.wpengine.com/blog/database-maintenance/resizing-datafiles-and-understanding-the-high-water-mark/) about understanding the High Water Mark (HWM) in datafiles. That knowledge of the HWM came in handy for me recently at a customer site. Over time, their database had grown to over 500GB and was threatening to fill up their filesystem. They had recently purged about 400GB of old data and wanted to shrink the datafiles, but were unable to do so because of the High Water Mark. To resolve this while limiting downtime, I decided to create a new tablespace and use DBMS_REDEFINITION to move objects from the existing tablespace to the new one. Here (http://dbspecialists.wpengine.com/files/presentations/online_redef.html) is a great presentation by Chris Lawson and Roger Schrag of Database Specialists that gives an overview of how DBMS_REDEFINITION works. Rather than repeat what they already said, I will describe some of the issues that I ran into when I implemented this procedure on a live database. This is a 10.2.0.1 Enterprise Edition database running on 64-bit Linux.
As a very quick review, here are the steps involved in using DBMS_REDEFINITION to move objects to a new tablespace:
1) Run DBMS_REDEFINITION.CAN_REDEF_TABLE to make sure the table is able to be redefined online
2) Create the “Interim” table as an empty copy of the source table in the new tablespace
3) Run DBMS_REDEFINITION.START_REDEF_TABLE to begin the process.
4) Copy the table dependents (constraints, triggers, indexes) to the Interim table with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS
5) Sync the Interim table with DBMS_REDEFINITION.SYNC_INTERIM_TABLE
6) Finish the redefinition process with DBMS_REDEFINITION.FINISH_REDEF_TABLE. It is during this step that Oracle requires an exclusive lock on the source table, but the lock is held only very briefly.
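The sequence above can be sketched in PL/SQL. The schema SCOTT, the table BIG_TABLE, and the interim name BIG_TABLE_INT are placeholders for this example, not names from the actual system:

```sql
-- 1) Verify the table can be redefined online
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TABLE');
END;
/

-- 2) Interim table: an empty copy in the new tablespace
CREATE TABLE scott.big_table_int TABLESPACE data
  AS SELECT * FROM scott.big_table WHERE 0 = 1;

-- 3) through 6) Start, copy dependents, sync, and finish
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INT');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INT',
      num_errors => num_errors);
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INT');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INT');
END;
/
```

After FINISH_REDEF_TABLE completes, the original table name points at the data in the new tablespace and the old segment can be dropped.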
The USERS tablespace consisted of 18 datafiles ranging from 20GB to 32GB. My goal was to decrease each datafile to 10GB so I ran the following query to determine exactly which objects needed to move.
select unique segment_name, owner
from dba_extents
where file_id in (select file_id
                  from dba_data_files
                  where tablespace_name = 'USERS')
and (block_id + blocks - 1) * 8192 > 10737418240;

From this I found more than 100 tables, along with their indexes, that needed to move from the USERS tablespace to the DATA tablespace.
Issue #1) Handling Not Null constraints
The first issue I encountered occurs if the interim table has NOT NULL constraints on it. If it does, during the COPY_TABLE_DEPENDENTS procedure, you will get this error:
DECLARE
*
ERROR at line 1:
ORA-01442: column to be modified to NOT NULL is already NOT NULL
ORA-06512: at "SYS.DBMS_REDEFINITION", line 984
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1726
ORA-06512: at line 4

The workaround for this is simple. After creating the table with "Create Table As Select where 0=1", I run a little PL/SQL loop to drop the NOT NULL constraints from the interim table. This issue is documented in Oracle Support Article ID 1116785.1: "ORA-01442 DURING DBMS_REDEFINTION.COPY_TABLE_DEPENDENTS"
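That cleanup loop can be sketched as follows; the interim table name BIG_TABLE_INT is a placeholder. Since CREATE TABLE AS SELECT carries over only the NOT NULL constraints (not other check constraints), every type-'C' constraint on the freshly created interim table should be a NOT NULL:

```sql
-- Drop the NOT NULL constraints copied onto the interim table by CTAS,
-- so COPY_TABLE_DEPENDENTS does not hit ORA-01442.
BEGIN
  FOR c IN (SELECT constraint_name
            FROM   user_constraints
            WHERE  table_name = 'BIG_TABLE_INT'
            AND    constraint_type = 'C')
  LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE big_table_int DROP CONSTRAINT "'
                      || c.constraint_name || '"';
  END LOOP;
END;
/
```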
The second issue I had with NOT NULL constraints is described in Oracle Support Article ID 1089860.1 "Why Are NOT NULL Constraints Not Copied By DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS?". As it turns out, if the compatible parameter is set to 10.2 or higher, it will copy the NOT NULL constraints but will do so in NOVALIDATE mode. This means that when you do a describe of the table, you won't see "NOT NULL". If compatible is not set to 10.2 or higher, then the constraints really don't get copied at all. In this case, compatible was set to 10.2.0.1, so the workaround was to run "ALTER TABLE..ENABLE VALIDATE CONSTRAINT" for each NOT NULL constraint on the table. Again, a simple PL/SQL loop took care of this for me.
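That validation loop might look like this (a sketch; the table name BIG_TABLE is a placeholder, and USER_CONSTRAINTS.VALIDATED is used to pick out the NOVALIDATE constraints):

```sql
-- Validate the NOT NULL constraints that COPY_TABLE_DEPENDENTS left in
-- NOVALIDATE mode, so they show up again in DESCRIBE.
BEGIN
  FOR c IN (SELECT constraint_name
            FROM   user_constraints
            WHERE  table_name = 'BIG_TABLE'
            AND    constraint_type = 'C'
            AND    validated = 'NOT VALIDATED')
  LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE big_table ENABLE VALIDATE CONSTRAINT "'
                      || c.constraint_name || '"';
  END LOOP;
END;
/
```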
Issue #2) Rebuilding Indexes
When COPY_TABLE_DEPENDENTS runs, it rebuilds the indexes, but it rebuilds them in their original tablespace, which is not what I wanted. So, after running COPY_TABLE_DEPENDENTS, I rebuilt the indexes in the new tablespace:

alter index <index name> rebuild tablespace DATA online nologging;
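With more than 100 tables involved, it is easier to drive the rebuilds from the dictionary. A sketch, with the owner name SCOTT as a placeholder:

```sql
-- Rebuild every index on the schema's tables that is still outside
-- the DATA tablespace.
BEGIN
  FOR i IN (SELECT owner, index_name
            FROM   dba_indexes
            WHERE  table_owner = 'SCOTT'
            AND    tablespace_name <> 'DATA')
  LOOP
    EXECUTE IMMEDIATE 'ALTER INDEX "' || i.owner || '"."' || i.index_name
                      || '" REBUILD TABLESPACE data ONLINE NOLOGGING';
  END LOOP;
END;
/
```

Note that NOLOGGING rebuilds are not recoverable from the redo stream, so a backup after the move is prudent.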
Issue #3) Handling Context indexes
In this database, there are several tables that have Context indexes built on them. There is a bug (#4688172) in Oracle 10.2 that prevents DBMS_REDEFINITION from working with these tables. According to the bug: "Online Table Redefinition with domain index works fine in 9.2.x and 10gR1 whereas it fails in 10gR2 with following errors:

DBMS_REDEFINITION.FINISH_REDEF_TABLE('tc', 'ptab', 'int_ptab');
ERROR at line 1:
ORA-42012: error occurred while completing the redefinition
ORA-12008: error in materialized view refresh path
ORA-29886: feature not supported for domain indexes
ORA-6512: at "SYS.DBMS_REDEFINITION", line 76
ORA-6512: at "SYS.DBMS_REDEFINITION", line 1376
ORA-6512: at line 2"
The workaround is to set event 10995 to level 1 prior to executing any DBMS_REDEFINITION API. Unfortunately, setting this event requires the database to be restarted, which has not yet been done at this site.
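Assuming the instance runs from an spfile, the event could be set along these lines (a sketch only; verify the exact event string against the bug note before using it, since event syntax is version-sensitive):

```sql
-- Record event 10995 level 1 in the spfile; it takes effect only
-- after the instance is bounced.
ALTER SYSTEM SET EVENT = '10995 trace name context forever, level 1'
  SCOPE = SPFILE;
-- ...then restart the instance before running the DBMS_REDEFINITION calls.
```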
Issue #4) Library Cache Latching
Each time I moved a table, all dependent procedures, triggers, functions, etc. of that table would become invalid. Normally, Oracle will simply recompile any invalid procedures as they are needed and there won't be a problem. In this case, however, there is one procedure that runs constantly in the database. So, when I moved a table and caused this procedure to become invalid, it couldn't be recompiled because it was already in use. Other sessions that tried to execute the procedure would wait on the library cache latch but could never acquire it. The solution was to kill the blocking session and allow the procedure to recompile.
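A sketch of how the waiters and the blocker might be tracked down (the wait-event name is as reported in 10.2; the SID and SERIAL# in the KILL statement must be substituted with real values from the query):

```sql
-- Sessions stuck waiting on the library cache latch:
SELECT sid, serial#, event, sql_id
FROM   v$session
WHERE  event LIKE 'latch%library cache%';

-- The blocker is the long-running session still executing the old
-- version of the invalidated procedure; killing it lets the
-- procedure recompile.
ALTER SYSTEM KILL SESSION 'sid,serial#';
```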
Issue #5) Execution Plan Changing
This is something that can potentially happen anytime that Oracle parses an SQL statement and chooses the execution plan. There is always the possibility that Oracle will not choose the best plan and sometimes it will be orders of magnitude worse than what it had been using minutes before. This happened a couple of times while I was moving all of these objects. In each case, I forced Oracle to re-parse again by collecting optimizer statistics and luckily Oracle chose the better plan the second time around.
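One way to force such a re-parse is to gather statistics with cursor invalidation switched on; the schema and table names here are placeholders:

```sql
-- NO_INVALIDATE => FALSE invalidates the cached cursors that depend on
-- the table, forcing a fresh hard parse on the next execution.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'SCOTT',
    tabname       => 'BIG_TABLE',
    cascade       => TRUE,
    no_invalidate => FALSE);
END;
/
```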
So in spite of the issues I ran into, I would say that DBMS_REDEFINITION is a great way to redefine tables with little, if any, downtime. Of course, nothing is perfect and things will change from version to version, but it is definitely a tool I would recommend you become familiar with.