Thursday, October 22, 2009

Use NULL for unknown data...

Hah, it goes back much further than I thought... it was pointed out this morning that the use of a 'bad default value' dates back to at least 1979...

See for the original reference.

Wednesday, October 21, 2009

Back from Oracle OpenWorld

I am back and have been taking a set of new questions on asktom. Last week was a busy one out in California and I'm finally getting caught up on emails and questions (100% on the former, still working on the latter).

Anyway, I saw an article and it made me laugh - and sort of cry at the same time. It has to do with the use of default values. A lot of developers/DBAs have a certain fear (that is the best word I can think of to describe their attitude) of NULL - the 'unknown' value. So, instead of using NULL for an effective_end_date field (for records whose end date we don't know because they don't have one yet), or using NULL for values they do not know - they use some 'fake' value. This fake value is assumed to be a value that could never possibly be used.
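To make the difference concrete, here is a sketch of the two approaches (the table and column names are hypothetical):

```sql
-- Hypothetical table: a NULL effective_end_date means "no end date yet"
create table assignments
( emp_id             number not null,
  effective_start    date   not null,
  effective_end_date date           -- NULL = unknown / still active
);

-- With NULL, the intent is explicit and self-documenting:
select * from assignments where effective_end_date is null;

-- With a 'fake' sentinel such as DATE '4000-01-01', every query and
-- every developer must know (and agree on) the magic value:
select * from assignments where effective_end_date = date '4000-01-01';
```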


Things change over time.

I'm sure when the developers implemented this system - using XXXXXXX for a license plate value that was unknown seemed 'reasonable'. I mean - who would ever ask for a vanity plate with seven X's on it?

Talk to the guy with almost $20k in fines that aren't his to see if he might know someone that might want a vanity plate with XXXXXXX on it :)

Do not fear NULL.

Understand it, but don't fear it.
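Part of understanding NULL is three-valued logic: a comparison to NULL is neither true nor false, so rows with NULL silently drop out of both sides of a predicate. A quick sketch (a hypothetical VEHICLES table):

```sql
-- Neither of these counts the rows whose plate is NULL:
select count(*) from vehicles where plate =  'XXXXXXX';
select count(*) from vehicles where plate <> 'XXXXXXX';

-- You must ask for the unknowns explicitly:
select count(*) from vehicles where plate is null;
```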

Sunday, October 11, 2009

Oracle OpenWorld first presentation

My first presentation of Oracle OpenWorld is done - it was the one I was most nervous about. It was a keynote, the opening session for the Oracle Develop conference track. As a keynote - I try to make it fun and informative but not a brain dump of all things technical.

The room filled up - which surprised me, it was 9am on a Sunday morning...

And the presentation seemed to work - I ran about 3 minutes over (need to tighten it up a tiny bit) - but it went well. Everyone laughed when they were supposed to, and didn't when they weren't. Even got a bit of spontaneous applause every now and then :)

For those that missed it, or those not at the conference that want to see it - there will be a replay tomorrow (Monday, October 12th) at 11am pacific time - see this link for details.

ID#: S312577
Title: Keynote: Oracle Develop "What Are We Still Doing Wrong"
Track: Oracle Develop: Database
Date: 11-OCT-09
Time: 09:00 - 10:00
Venue: Hilton Hotel
Room: Grand Ballroom B

Friday, October 09, 2009

Off to Oracle OpenWorld

I've been a bit busy with things recently - getting ready for Oracle OpenWorld in particular. I haven't been able to pay as much attention to asktom as I normally do - that'll change the week after OpenWorld (I'll be taking questions again that week - until then, probably not)

ID#: S312577
Title: Keynote: Oracle Develop "What Are We Still Doing Wrong"
Track: Oracle Develop: Database
Date: 11-OCT-09
Time: 09:00 - 10:00
Venue: Hilton Hotel
Room: Grand Ballroom B

ID#: S311235
Title: All About Metadata: Why Telling the Database About Your Schema Matters
Track: Oracle Develop: Database
Date: 12-OCT-09
Time: 11:30 - 12:30
Venue: Hilton Hotel
Room: Imperial Ballroom B

ID#: S311322
Title: DBA 2.0: Battle of the DBAs Revisited
Track: Database
Date: 12-OCT-09
Time: 17:30 - 18:30
Venue: Moscone South
Room: Room 103

ID#: S311234
Title: The top 10 - No, 11 - New Features of Oracle Database 11g Release 2
Track: Database
Date: 13-OCT-09
Time: 17:30 - 18:30
Venue: Moscone South
Room: Room 103

ID#: S311236
Title: Efficient PL/SQL: Why and How to Use PL/SQL to Its Greatest Effect
Track: Database
Date: 14-OCT-09
Time: 17:00 - 18:00
Venue: Moscone South
Room: Room 103

I'll be in the OTN lounge on Wednesday from 12:45pm till 1:30pm with the "Heavy Hitters" program. Anyone is free to attend that session.

Also, I'll be in the database demo grounds in Moscone West - in the middle of the demo area. It'll say "database overview" in big letters on top and "Oracle Database 11g Release 2" underneath. I'll be manning a station there on Tuesday from 10:30am till 11:30 am and on Wednesday from 9am till 10:15am

Looking forward to OpenWorld - both the beginning and the end :) It'll be nice to get back home after a week at a conference.

Thursday, October 08, 2009

How Can I unload data to a flat file

Many times we are asked

  • "Does anyone know an easy way of dumping the data from an Oracle table into a delimited(comma, tab etc) ascii file?"
  • "Does anyone know an easy way to unload data in a format for sqlldr to reload later?"

Well, here is a way to do it into Excel or any spreadsheet that understands the industry standard SYLK file format.

Here is a Pro*C program that does it to a flat file very fast.

And here is a PL/SQL routine that uses utl_file to do the same.

A reader (Andy Rivenes) offers this more "robust" PL/SQL implementation based on the original code.

And lastly, SQL*Plus can do this quite easily, but it is a pain to have to write a script for each table. What I've done is set up scripts for UNIX and NT that allow you to execute things like this from the command line:

$ sqlldr_exp scott/tiger dept

This script unloads the scott.dept table into a format that sqlldr can easily reload. All you would need to do is execute:

$ sqlldr_exp scott/tiger dept > dept.ctl

to create a control file that can be moved somewhere else and reloaded back into a dept table very quickly.
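The script's exact output is not reproduced here, but a sqlldr control file of this sort generally looks something like the following (a hand-written sketch, not the script's actual output):

```sql
LOAD DATA
INFILE *
INTO TABLE dept
REPLACE
FIELDS TERMINATED BY '|'
(deptno, dname, loc)
BEGINDATA
10|ACCOUNTING|NEW YORK
20|RESEARCH|DALLAS
```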

I also use a slight modification of this script called "flat". Flat does mostly the same thing as sqlldr_exp, except that it dumps the data into a tab-delimited format without the sqlldr extras at the top. This makes the extract usable in spreadsheets and such.

In both cases, some things you need to be aware of with regard to these scripts:

  • There is an absolute limit of 2000 bytes in 7.x and 4000 bytes in 8.x per line/row for unloaded data. The total size of the unloaded data is unlimited -- the maximum size of an individual row of data is what is limited.
  • It makes no attempt to unload dates with the century or time component -- you must change your default NLS_DATE_FORMAT if this is a problem.
  • Beware of data with pipes or tabs in it!
  • Beware of data with newlines as well...
  • The NT scripts (.cmd files) need modifications if your command line sqlplus is not called SQLPLUS (eg: it is plus33 or something similar)
  • On NT, you need to set your SQLPATH environment variable and put these files into that directory OR you need to run flat and sqlldr_exp from those directories so sqlplus can find the corresponding flat.sql and sqlldr_exp.sql files.
download that script here
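The core idea behind "flat" can be approximated in a few lines of plain SQL*Plus (a simplified sketch for one known table, not the actual generic script):

```sql
set heading off feedback off pagesize 0 linesize 4000 trimspool on
spool dept.dat
select deptno || chr(9) || dname || chr(9) || loc from dept;
spool off
```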

How Can I find out who called me or what my name is

Many times we are asked "in a procedure/function, can I find out who called me" or "can I dynamically figure out the name of the procedure or package that is currently executing".

You can find it in the call stack returned by dbms_utility.format_call_stack. I wrote a small routine called who_called_me that returns this sort of information (it doesn't tell you who you are, it lets you know who called you). If you wrap who_called_me with a function who_am_i, you'll get what you need. If you create the who_called_me/who_am_i routines, you'll be able to:

SQL> create or replace procedure demo
2 as
3 begin
4 dbms_output.put_line( who_am_i );
5 end;
6 /

Procedure created.

SQL> exec demo;

In current releases of the database, this code has been incorporated into the OWA_UTIL package - you probably already have it in your database. If not, you can use this really old version that might need a tweak or two to work in your database release:

create or replace procedure who_called_me( owner    out varchar2,
                                           name     out varchar2,
                                           lineno   out number,
                                           caller_t out varchar2 )
as
    call_stack  varchar2(4096) default dbms_utility.format_call_stack;
    n           number;
    found_stack BOOLEAN default FALSE;
    line        varchar2(255);
    cnt         number := 0;
begin
    loop
        n := instr( call_stack, chr(10) );
        exit when ( cnt = 3 or n is NULL or n = 0 );
        line := substr( call_stack, 1, n-1 );
        call_stack := substr( call_stack, n+1 );
        if ( NOT found_stack ) then
            if ( line like '%handle%number%name%' ) then
                found_stack := TRUE;
            end if;
        else
            cnt := cnt + 1;
            -- cnt = 1 is ME
            -- cnt = 2 is MY Caller
            -- cnt = 3 is Their Caller
            if ( cnt = 3 ) then
                lineno := to_number(substr( line, 13, 6 ));
                line   := substr( line, 21 );
                if ( line like 'pr%' ) then
                    n := length( 'procedure ' );
                elsif ( line like 'fun%' ) then
                    n := length( 'function ' );
                elsif ( line like 'package body%' ) then
                    n := length( 'package body ' );
                elsif ( line like 'pack%' ) then
                    n := length( 'package ' );
                elsif ( line like 'anonymous%' ) then
                    n := length( 'anonymous block ' );
                else
                    n := null;
                end if;
                if ( n is not null ) then
                    caller_t := ltrim(rtrim(upper(substr( line, 1, n-1 ))));
                else
                    caller_t := 'TRIGGER';
                end if;

                line  := substr( line, nvl(n,1) );
                n     := instr( line, '.' );
                owner := ltrim(rtrim(substr( line, 1, n-1 )));
                name  := ltrim(rtrim(substr( line, n+1 )));
            end if;
        end if;
    end loop;
end;
/

create or replace function who_am_i return varchar2
as
    l_owner  varchar2(30);
    l_name   varchar2(30);
    l_lineno number;
    l_type   varchar2(30);
begin
    who_called_me( l_owner, l_name, l_lineno, l_type );
    return l_owner || '.' || l_name;
end;
/

Unindexed Foreign Keys

Having unindexed foreign keys can be a performance issue. There are two issues associated with unindexed foreign keys. The first is the fact that a table lock will result if you update the parent record's primary key (very, very unusual) or if you delete the parent record and the child's foreign key is not indexed.

To read about this issue, please see the Concepts Guide, in the section on Maintaining Data Integrity / Concurrency Control, Indexes, and Foreign Keys.

The second issue has to do with the performance in general of a parent/child relationship. Consider what happens if you have an on delete cascade and have not indexed the child table's foreign key (eg: EMP is a child of DEPT; deleting deptno = 10 should cascade to EMP; if deptno in EMP is not indexed, you get a full table scan). This full scan is probably undesirable, and if you delete many rows from the parent table, the child table will be scanned once for each parent row deleted.

Also consider that for most (not all, most) parent child relationships, we query the objects from the 'master' table to the 'detail' table. The glaring exception to this is a code table (short code to long description). For master/detail relationships, if you do not index the foreign key, a full scan of the child table will result.

So, how do you easily discover if you have unindexed foreign keys in your schema? This script can help. When you run it, it will generate a report such as:

SQL> @unindex

---- ------------------------------ -------------------- --------------------

The **** in the first row shows me that I have an unindexed foreign key in the table APPLICATION_INSTANCES. The ok in the second row shows me I have a table EMP with an indexed foreign key.
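The heart of such a script is a join of the data dictionary constraint views against the index views. A simplified sketch (it checks only that the leading column of each foreign key is the leading column of some index; the real script handles multi-column keys and formats the report):

```sql
select c.table_name, c.constraint_name
  from user_constraints c, user_cons_columns cc
 where c.constraint_type  = 'R'
   and cc.constraint_name = c.constraint_name
   and cc.position        = 1
   and not exists
       ( select null
           from user_ind_columns ic
          where ic.table_name      = c.table_name
            and ic.column_name     = cc.column_name
            and ic.column_position = 1 );
```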

Base Conversion Routines

This set of routines is useful to convert between various 'bases' in Oracle. Once you install these functions, you will be able to perform operations such as:

SQL> select to_bin( 123 ) bin, to_hex( 123 ) hex, to_oct( 123 ) oct from dual
2 /

BIN             HEX             OCT
--------------- --------------- ---------------
1111011         7B              173

SQL> select to_dec( '1111011', 2 ) base2, to_dec( '7B' ) base16,
2 to_dec('173',8) base8
3 from dual
4 /

     BASE2     BASE16      BASE8
---------- ---------- ----------
       123        123        123

Of course, with current releases - you would never use TO_HEX since TO_CHAR already does this (faster). Also, TO_NUMBER would be used to convert hex to decimal these days as well:

ops$tkyte%ORA10GR2> select to_char( '1234', 'XXXX' ) from dual;

TO_CH
-----
  4D2

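And the reverse direction, hex to decimal, is just TO_NUMBER with a hex format mask:

```sql
select to_number( '7B', 'XX' ) from dual;
-- returns 123
```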
Here is the original code:

create or replace function to_base( p_dec in number, p_base in number )
return varchar2
is
    l_str   varchar2(255) default NULL;
    l_num   number        default p_dec;
    l_hex   varchar2(16)  default '0123456789ABCDEF';
begin
    if ( p_dec is null or p_base is null )
    then
        return null;
    end if;
    if ( trunc(p_dec) <> p_dec OR p_dec < 0 ) then
        raise PROGRAM_ERROR;
    end if;
    loop
        l_str := substr( l_hex, mod(l_num,p_base)+1, 1 ) || l_str;
        l_num := trunc( l_num/p_base );
        exit when ( l_num = 0 );
    end loop;
    return l_str;
end to_base;
/
show errors

create or replace function to_dec
( p_str in varchar2,
  p_from_base in number default 16 ) return number
is
    l_num   number default 0;
    l_hex   varchar2(16) default '0123456789ABCDEF';
begin
    if ( p_str is null or p_from_base is null )
    then
        return null;
    end if;
    for i in 1 .. length(p_str) loop
        l_num := l_num * p_from_base + instr(l_hex,upper(substr(p_str,i,1)))-1;
    end loop;
    return l_num;
end to_dec;
/
show errors

create or replace function to_hex( p_dec in number ) return varchar2
is
begin
    return to_base( p_dec, 16 );
end to_hex;
/
show errors

create or replace function to_bin( p_dec in number ) return varchar2
is
begin
    return to_base( p_dec, 2 );
end to_bin;
/
show errors

create or replace function to_oct( p_dec in number ) return varchar2
is
begin
    return to_base( p_dec, 8 );
end to_oct;
/
show errors


Generates the needed package and triggers to support update cascade
in Oracle without removing or infringing on DECLARATIVE RI.

This package supports:

  • Tables with multi-part primary keys (primary key(a,c,b))
  • Update cascade to many child tables from one parent
  • Self-referencing integrity such as that found in the SCOTT.EMP table

  • Is completely application transparent. The application does not
    know it is happening
  • Works with versions 7.0 and above of the database.
  • Tuned and optimized to fully avoid full table scans on all tables
    (complete with a utility to show you un-indexed foreign keys in a schema;
    cascading an update to un-indexed foreign keys can be bad).

This solution has the following restrictions:

  1. All foreign keys to the parent table must point to the primary key
    constraint of the parent table. They cannot point to a unique
    constraint on the parent table; they must point to the primary key.

  2. No other unique constraints/indexes may be in place on the parent
    table other than the primary key constraint.

  3. Updates to primary keys that do not generate 'new' primary keys
    are not currently supported. For example, take the standard DEPT
    table. The update statement "update dept set deptno = deptno+10"
    will not work, whereas the update "update dept set deptno = deptno+1"
    will. The first update will change 10->20, 20->30 and so on. The problem is
    that 10->20 is not generating a 'new' primary key. On the other hand,
    deptno=deptno+1 does not have this problem since 10->11, 20->21 and
    so on.

    NOTE: an update that affects a single row will never suffer from this
    problem.

  4. the owner of the parent table must also be the owner of the child
    tables.

  5. the owner of the parent table must run the following package in
    their schema. This package must be installed for each user that wants
    to generate update cascade support. It may be dropped after the cascade
    support has been generated.

  6. the owner of the parent table must have been granted
    create procedure and create trigger. These
    privileges may not be inherited from a role.
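So, before generating the cascade support, a DBA would issue the grants directly to the schema owner (here the UCDEMO demo user from the scripts below), for example:

```sql
grant create procedure to ucdemo;
grant create trigger   to ucdemo;
```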

Installing, Using, and seeing how the demo works

This package consists of four SQL scripts

  • uc.sql
  • demobld.sql
  • unindex.sql
  • generate.sql


UC.SQL should be run by any user wanting to implement update cascade. It will create:

  • A package spec for update_cascade
  • A package body for update_cascade

Once this package is installed, you are able to implement update cascade on any
table via the pl/sql call:

PROCEDURE update_cascade.on_table
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 P_TABLE_NAME                   VARCHAR2                IN
 P_PRESERVE_ROWID               BOOLEAN                 IN     DEFAULT
 P_USE_DBMS_OUTPUT              BOOLEAN                 IN     DEFAULT

Input Name         Default     Usage
p_table_name       NO DEFAULT  The name of the parent table.
p_preserve_rowid   TRUE        Affects the generation of the code used to implement
                               the generated packages. If set to TRUE (the default)
                               the rowid of the updated parent row will not change
                               due to the update. If set to FALSE, the rowid
                               of the updated row WILL change BUT the code executes
                               in about 66% of the time.
p_use_dbms_output  FALSE       Defaults to FALSE, which means the update_cascade
                               package will execute (create) the packages/triggers.
                               If set to TRUE, the generated code will be printed
                               using dbms_output.put_line (make sure to set
                               serveroutput on size 100000 before using TRUE if you
                               want to see the code).

For example....

SQL> REM will do the default cascade support on
SQL> REM the dept table
SQL> REM ROWIDS will be preserved (unchanged).
SQL> REM Package will be created in the database
SQL> exec update_cascade.on_table( 'dept' )

SQL> REM will do the same as above but will not
SQL> REM preserve the rowids of the parent table
SQL> REM (the rowids will change). This version
SQL> REM runs in about 2/3 of the time of the first
SQL> REM one
SQL> exec update_cascade.on_table( 'dept', false )

SQL> REM will do the same as above but will not
SQL> REM preserve the rowids of the parent table
SQL> REM (the rowids will change). This version
SQL> REM runs in about 2/3 of the time of the first
SQL> REM one. Also, the packages will be written
SQL> REM to the SCREEN, not into the database.
SQL> exec update_cascade.on_table( 'dept', false, true )

The above would generate and compile all of
the packages/triggers needed to support
cascading updates on the dept table to any other table.
You would run this any time you:

  • changed the primary key definition of the dept table
  • added a child table to the schema
    (eg: executed a create table and that table points to dept)
  • removed a child table from schema

NOTE: The user executing update_cascade must have been granted
create procedure and create trigger
directly to themselves. They cannot just have these privileges via a role.

The other mode of calling update_cascade.on_table is as follows:

SQL> set feedback off
SQL> spool tmp.sql
SQL> exec update_cascade.on_table( p_table_name => 'dept', p_use_dbms_output =>TRUE )
SQL> spool off

The above would generate and print (as opposed to compile) all of the
packages/triggers/views needed to support cascading update on the dept table
to any other table. You would use this mode to:

  • To support users whose create procedure/create trigger privileges were not granted
    directly to the person (they can get these privileges via a role like DBA).
  • To inspect the generated code to understand what it does.
  • To generate a package that can be used to install update cascade support
    at another site without giving them the update_cascade package itself.

The other mode of calling update_cascade.on_table is as follows:

SQL> exec update_cascade.on_table( p_table_name => 'dept', p_preserve_rowid =>FALSE )

The above would generate faster versions of the update
cascade packages. They run in 2/3 of the time of the default version, but
the rowids of the updated parent records will change. This makes this
version less desirable with FORMS. If you use forms heavily, use the
default mode so that rowids are preserved.
The triggers to cascade can
get away with a lot less work in this mode. The
triggers fire half the time they would in the default mode, and an update that
would normally fire and affect 2x the number of rows is not needed. The
generated package code is streamlined as well (less actual code is
generated - procedural code, not just updates).


This script creates one user and six tables in that user's schema. The user is
called UCDEMO.
WARNING: This script does a "drop user ucdemo cascade". It is
run as SYSTEM. Please review it before running it. Modify it if you want.

Once the script creates the user it will create six tables and populate them.
The tables are:

  • DEPT with a primary key
  • EMP with a primary key, DECLARATIVE RI to DEPT, DECLARATIVE RI to EMP

  • PROJECTS with a primary key, DECLARATIVE RI to EMP
  • T1 with a three part primary key
  • T2 with a three part primary key and a three part foreign key to T1
  • T3 with a three part primary key and a three part foreign key to T2

To begin the demo, you might issue:

SQL> update dept set deptno=deptno+1;
update dept set deptno=deptno+1
ERROR at line 1:
ORA-02292: integrity constraint (UCDEMO.SYS_C005184) violated - child record found

Then to see the update cascade in action, you would:

SQL> @uc

Table created.

Table altered.

Package created.

Package body created.

SQL> exec update_cascade.on_table('dept');

PL/SQL procedure successfully completed.


SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> select empno, deptno from emp where deptno = 10;

     EMPNO     DEPTNO
---------- ----------
      7839         10
      7782         10
      7934         10

SQL> update dept set deptno = deptno+1;

4 rows updated.

SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        11 ACCOUNTING     NEW YORK
        21 RESEARCH       DALLAS
        31 SALES          CHICAGO
        41 OPERATIONS     BOSTON

SQL> select empno, deptno from emp where deptno in ( 10, 11 );

     EMPNO     DEPTNO
---------- ----------
      7839         11
      7782         11
      7934         11


As you can see, after the package is installed,
the updates cascade successfully to
the child tables. The effect of this is even more
dramatic when you do it to the
emp table. The MGR column of the EMP table points to
the EMPNO column of the EMP
table. In addition, the EMPNO column of the PROJECTS table points to the EMPNO
column of the EMP table. The following scenario is a good demo of more complex cascading updates:

SQL> select empno, mgr from emp;

     EMPNO        MGR
---------- ----------
      7839
7698 7839
7782 7839
7566 7839
7499 7698
7521 7698
7654 7698
7902 7566
7369 7902
7788 7566
7844 7698
7876 7788
7900 7698
7934 7782

14 rows selected.

SQL> update emp set empno = 8000 where empno = 7698;

update emp set empno = 8000 where empno = 7698
ERROR at line 1:
ORA-02292: integrity constraint (UCDEMO.SYS_C005186) violated - child record found

SQL> exec update_cascade.on_table('emp')

PL/SQL procedure successfully completed.

SQL> update emp set empno = 8000 where empno = 7698;

1 row updated.

SQL> select empno, mgr from emp;

     EMPNO        MGR
---------- ----------
      7839
7782 7839
7566 7839
7499 8000
7521 8000
7654 8000
7902 7566
7369 7902
7788 7566
7844 8000
7876 7788
7900 8000
7934 7782
8000 7839

14 rows selected.


The lack of an index on a foreign key will adversely impact the performance
of a cascading update. For example, the emp table is created via:

create table emp
(EMPNO NUMBER(4) primary key,
MGR NUMBER(4) references emp,
DEPTNO NUMBER(2) references dept);

The MGR column and the DEPTNO column are not indexed by default. So, for
example, if one were to issue:

SQL> update emp set empno = empno+1;

This would eventually issue:

update emp set mgr = SOME_NEW_VALUE
where mgr = SOME_OLD_VALUE;

Since the MGR field is not indexed, this update would do a full scan. It
would do this full scan once for each row in EMP that was updated.
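The remedy is simply to index the foreign key columns before cascading; the index names below are illustrative:

```sql
create index emp_mgr_idx    on emp(mgr);
create index emp_deptno_idx on emp(deptno);
```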

Unindex.sql will generate output such as:

SQL> @unindex.sql

---- ------------------------------ -------------------- --------------------
**** EMP MGR
ok T2 A A, B
ok T3 A, B A, B, C


The four **** indicate that MGR is a foreign key in the EMP table that is
not indexed. It should be (also see chapter 6 in the Server Application
Developer's Guide for other reasons why un-indexed foreign keys are bad).


This is a simple script that will generate all of the needed
update_cascade.on_table( 'table_name' ) statements that need to be executed
for all parent tables in a schema. For example, in the demo account it
would create a spool file that contains:

SQL> @src/update_cascade/generate

prompt Update Cascade on table: DEPT
execute update_cascade.on_table( 'DEPT' )

prompt Update Cascade on table: EMP
execute update_cascade.on_table( 'EMP' )

prompt Update Cascade on table: T1
execute update_cascade.on_table( 'T1' )

prompt Update Cascade on table: T2
execute update_cascade.on_table( 'T2' )

HOW it works:

When you update the primary key of a parent table, you might want to
cascade the update to the children. This is hard to do for many
reasons and can be problematic. This package works around the lack of
an update cascade option.

This package uses three triggers to perform its magic.

  • A before update trigger; used to reset some package variables
  • A before update, for each row trigger; used to capture the before
    and after images of the primary keys in pl/sql tables.
    It also 'undoes' the update to the primary key.
  • An After update trigger that does the following steps:

    • 'Clones' the parent records with their new primary key, eg:
      insert into parent select NEW_KEY, other_cols
      from parent where CURRENT_KEY = ( SELECT OLD_KEY
      FROM DUAL)

      for example, given "update dept set deptno=deptno+1", this would
      insert the values of 11, 21, 31, 41 into the dept table. 11 would
      have the values in the rest of the columns that 10 had. 21 would
      look like 20 and so on.

    • If p_preserve_rowids = TRUE, then the primary keys of the row that
      was cloned and the clone would be flip flopped. For example, if you
      issue: update dept set deptno = 11 where deptno = 10 we would make 10
      become the new value 11 and 11 become the old value 10.

    • Re-Parents the child records in all subordinate tables.
      Performs the equivalent of:
      update child set fkey = ( select new_key
      from DUAL )
      where fkey = ( select old_key from DUAL )

    • It then removes the 'cloned' parent records or the record with the
      old primary key value.

A look at the code

The follow annotated code is the generated packages and triggers you
would create by generating support for the dept table.

The following package spec is what would be generated for the
typical 'DEPT' table
found in the scott schema (when declarative RI is used). See the annotations in
the spec for a description of what each entry means and how it is used. The
annotations are not found in the generated code; the generated code is not
annotated.

This generated code preserves rowids. The code that preserves rowids will
be in bold. This code would not be present in the generated package if
rowid preservation was disabled.

SQL> exec update_cascade.on_table('dept',true,true);

The following is a typical package specification generated for a table.
The package spec name is always u || TABLE_NAME || p. The package name is
in mixed case (to prevent collisions with other user objects).

create or replace package "uDEPTp"
as

Rowcnt is used to collect the number of rows processed by a given update
statement. It is reset in the uDEPTp.reset routine in a before update
trigger. The 'inTrigger' variable is used to prevent recursive firing of
triggers when p_preserve_rowid = TRUE;

rowCnt number default 0;
inTrigger boolean default FALSE;


For each element in the primary key, a table type will be declared and then
an array of that type will be declared to 1.) hold the before image, 2.) the
after image, and 3.) an empty array used to zero out the previous two

type C1_type is table of "DEPT"."DEPTNO"%type index by binary_integer;
empty_C1 C1_type;
old_C1 C1_type;
new_C1 C1_type;

Reset is the routine fired by the BEFORE UPDATE trigger that resets the
rowcnt variable and empties out the arrays from the previous invocation.

procedure reset;

Do cascade is the work horse routine. It performs the actual cascade when
fired from an AFTER UPDATE trigger.

procedure do_cascade;

Add Entry simply increments the rowcnt and collects the before/after images
of the primary keys. It also 'undoes' the update to the primary key by
accessing the :new and :old variables.

procedure add_entry
( p_old_C1 in "DEPT"."DEPTNO"%type
 ,p_new_C1 in out "DEPT"."DEPTNO"%type );

end "uDEPTp";

This is the package body generated. It implements the above specification

create or replace package body "uDEPTp"
is
procedure reset
is
begin

This line is present in all routines when p_preserve_rowids = TRUE. It
prevents recursive firing of the triggers.

if ( inTrigger ) then return; end if;
rowCnt := 0;
old_C1 := empty_C1;
new_C1 := empty_C1;
end reset;
procedure add_entry
( p_old_C1 in "DEPT"."DEPTNO"%type
 ,p_new_C1 in out "DEPT"."DEPTNO"%type )
is
begin
if ( inTrigger ) then return; end if;


This code saves the before and after images in pl/sql tables and 'undoes'
the primary key update by setting the new columns back to the old columns.

if (
p_old_C1 <> p_new_C1
) then
rowCnt := rowCnt + 1;
old_C1( rowCnt ) := p_old_C1;
new_C1( rowCnt ) := p_new_C1;
p_new_C1 := p_old_C1;
end if;
end add_entry;
procedure do_cascade
is
begin
if ( inTrigger ) then return; end if;
inTrigger := TRUE;


For every row that was updated, we will perform the clone, cascade, and
delete steps:
for i in 1 .. rowCnt loop

This insert clones the parent row, duping the old values with the new
primary key.

insert into "DEPT" (
"DEPTNO","DNAME","LOC") select
new_C1(i),"DNAME","LOC"
from "DEPT" a
where ( "DEPTNO" ) =
( select old_C1(i)
from dual );

This code is generated only when p_preserve_rowids=true and will flip-flop
the old and new primary keys, hence preserving the rowid of the original
row.

update "DEPT" set
( "DEPTNO" ) =
( select
decode( "DEPTNO", old_c1(i), new_c1(i), old_c1(i) )
from dual )
where ( "DEPTNO" ) =
( select new_C1(i)
from dual )
OR ( "DEPTNO" ) =
( select old_C1(i)
from dual );


Do a cascade update to all children tables.

update "EMP" set
( "DEPTNO" ) =
( select new_C1(i)
from dual )
where ( "DEPTNO" ) =
( select old_C1(i)
from dual );

Removing the old primary key value.

delete from "DEPT"
where ( "DEPTNO" ) =
( select old_C1(i)
from dual);
end loop;
inTrigger := FALSE;
exception
when others then
inTrigger := FALSE;
raise;
end do_cascade;
end "uDEPTp";

Lastly, we have the three triggers placed on
the parent table to effect the update
cascade. The first trigger simply 'resets' the package variables above.

create or replace trigger "uc$DEPT_bu"
before update of "DEPTNO"
on "DEPT"
begin "uc$DEPT_pkg".reset; end;

The next trigger, the for each row trigger,
simply calls add_entry for each changed row.

create or replace trigger "uc$DEPT_bufer"
before update of "DEPTNO"
on "DEPT"
for each row
begin "uc$DEPT_pkg".add_entry( :old."DEPTNO", :new."DEPTNO" ); end;

The last trigger, calls do_cascade to effect the change

create or replace trigger "uc$DEPT_au"
after update of "DEPTNO"
on "DEPT"
begin "uc$DEPT_pkg".do_cascade; end;


This page will be published and republished over the next couple of days and into the future. It will contain pointers to the material that was previously hosted elsewhere. I'll be reformatting those pages to fit into blogspot here and host the data supporting the pages in my "files" tab on asktom. After that is all done, I'll be doing an update to the questions and answers on asktom to map to the new URLs for everything.

So - beware - you will see a flurry of activity on my blog for a bit. It won't be anything new, but it might be something you didn't see before...

Selected Utilities/Frequently Asked Questions

NOTE: please do not comment on this page.  Comments will never show up.  If you want to ask a question - that is what asktom is for.  Thanks!