In Oracle, we also have the ability to defer constraint checking, which can be quite advantageous for various operations. The one that immediately jumps to mind is the requirement to cascade an UPDATE of a primary key to the child keys. Many people claim you should never need to do this—that primary keys are immutable (I am one of those people), but many others persist in their desire to have a cascading UPDATE. Deferrable constraints make this possible.
Note It is considered an extremely bad practice to perform update cascades to modify a primary key. It violates the intent of the primary key. If you have to do it once to correct bad information, that’s one thing, but if you find you are constantly doing it as part of your application, you will want to go back and rethink that process—you have chosen the wrong attributes to be the key!
In early releases of Oracle, it was possible to do a CASCADE UPDATE, but doing so involved a tremendous amount of work and had certain limitations. With deferrable constraints, it becomes almost trivial. The code could look like this:
$ sqlplus eoda/foo@PDB1
SQL> create table parent ( pk int primary key );
Table created.
SQL> create table child
     ( fk constraint child_fk_parent
          references parent(pk)
          deferrable initially immediate
     );
Table created.
SQL> insert into parent values ( 1 );
1 row created.
SQL> insert into child values ( 1 );
1 row created.
We have a parent table, PARENT, and a child table, CHILD. Table CHILD references table PARENT, and the constraint used to enforce that rule is called CHILD_FK_PARENT (child foreign key to parent). This constraint was created as DEFERRABLE, but it is set to INITIALLY IMMEDIATE. This means we can defer that constraint until COMMIT or to some other time. By default, however, it will be validated at the statement level. This is the most common use of the deferrable constraints. Most existing applications won’t check for constraint violations on a COMMIT statement, and it is best not to surprise them with that. As defined, table CHILD behaves in the fashion tables always have, but it gives us the ability to explicitly change its behavior. Now let’s try some DML on the tables and see what happens:
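If you ever need to confirm how a constraint was created, the data dictionary records both properties: the DEFERRABLE column tells you whether the constraint can ever be deferred, and the DEFERRED column tells you its initial mode (output trimmed for readability):

SQL> select constraint_name, deferrable, deferred
       from user_constraints
      where constraint_name = 'CHILD_FK_PARENT';

CONSTRAINT_NAME      DEFERRABLE     DEFERRED
-------------------- -------------- ---------
CHILD_FK_PARENT      DEFERRABLE     IMMEDIATE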
SQL> update parent set pk = 2;
update parent set pk = 2
*
ERROR at line 1:
ORA-02292: integrity constraint (EODA.CHILD_FK_PARENT) violated - child record found
Since the constraint is in IMMEDIATE mode, this UPDATE fails. We’ll change the mode and try again:
SQL> set constraint child_fk_parent deferred;
Constraint set.
SQL> update parent set pk = 2;
1 row updated.
Now it succeeds. For illustration purposes, I’ll show how to check a deferred constraint explicitly before committing, to see if the modifications we made are in agreement with the business rules (in other words, to check that the constraint isn’t currently being violated). It’s a good idea to do this before committing or releasing control to some other part of the program (which may not be expecting the deferred constraints):
SQL> set constraint child_fk_parent immediate;
set constraint child_fk_parent immediate
*
ERROR at line 1:
ORA-02291: integrity constraint (EODA.CHILD_FK_PARENT) violated - parent key not found

It fails and returns an error immediately, as expected, since we knew that the constraint had been violated. The UPDATE to PARENT was not rolled back (that would violate statement-level atomicity); it is still outstanding. Also note that our transaction is still working with the CHILD_FK_PARENT constraint deferred, because the SET CONSTRAINT command failed. Let’s continue now by cascading the UPDATE to CHILD:
SQL> update child set fk = 2;
1 row updated.
SQL> set constraint child_fk_parent immediate;
Constraint set.
SQL> commit;
Commit complete.
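As an aside, if a transaction needs to defer every deferrable constraint at once, the SET CONSTRAINTS ALL DEFERRED statement does it in one step. This is a sketch against the same PARENT and CHILD tables; it affects only constraints that were created DEFERRABLE, and anything still deferred is validated at COMMIT, with the entire transaction rolling back if that validation fails:

SQL> set constraints all deferred;
Constraint set.

SQL> update parent set pk = 3;
1 row updated.

SQL> update child set fk = 3;
1 row updated.

SQL> commit;
Commit complete.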
And that’s the way it works. Note that to defer a constraint, you must create it that way—you have to drop and re-create the constraint to change it from nondeferrable to deferrable. That might lead you to believe that you should create all of your constraints as “deferrable initially immediate,” just in case you wanted to defer them at some point. In general, that is not true. You want to allow constraints to be deferred only if you have a real need to do so. By creating deferred constraints, you introduce differences in the physical implementation (in the structure of your data) that might not be obvious. For example, if you create a deferrable UNIQUE or PRIMARY KEY constraint, the index that Oracle creates to support the enforcement of that constraint will be a nonunique index. Normally, you expect a unique index to enforce a unique constraint, but since you have specified that the constraint could temporarily be ignored, it can’t use that unique index. Other subtle changes will be observed, for example, with NOT NULL constraints.
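The unique index point is easy to verify for yourself. Here is a minimal sketch (T2 is a throwaway table used only for this check, and the system-generated index name shown is made up; yours will differ):

SQL> create table t2 ( x int primary key deferrable );
Table created.

SQL> select index_name, uniqueness from user_indexes where table_name = 'T2';

INDEX_NAME                     UNIQUENES
------------------------------ ---------
SYS_C0061234                   NONUNIQUE

Re-create T2 with a nondeferrable primary key and the same query will report UNIQUE.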
If you allow your NOT NULL constraints to be deferrable, the optimizer will start treating the column as if it supports NULLs—because it in fact does support NULLs during your transaction. For example, suppose you have a table with the following columns and data:
SQL> create table t
     ( x int constraint x_not_null not null deferrable,
       y int constraint y_not_null not null,
       z varchar2(30)
     );
Table created.
SQL> insert into t(x,y,z) select rownum, rownum, rpad('x',30,'x') from all_users;
61 rows created.
SQL> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.
In this example, column X is created such that when you COMMIT, X will not be null.
However, during your transaction X is allowed to be null since the constraint is deferrable.
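You can see this in a short transaction. The following sketch defers X_NOT_NULL, inserts a NULL into X (which succeeds at the statement level), and then attempts to COMMIT, at which point the deferred validation fails and the entire transaction is rolled back (the exact error stack can vary by release):

SQL> set constraint x_not_null deferred;
Constraint set.

SQL> insert into t(x,y,z) values ( null, 100, 'x' );
1 row created.

SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (EODA.X_NOT_NULL) violated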
Column Y, on the other hand, is always NOT NULL. Let’s say you were to index column Y:
SQL> create index t_idx on t(y);
Index created.
And you then ran a query that could make use of this index on Y—but only if Y is NOT NULL, as in the following query:
SQL> explain plan for select count(*) from t;
Explained.

SQL> select * from table(dbms_xplan.display(null,null,'BASIC'));

-----------------------------------
| Id  | Operation        | Name  |
-----------------------------------
|   0 | SELECT STATEMENT |       |
|   1 |  SORT AGGREGATE  |       |
|   2 |  INDEX FULL SCAN | T_IDX |
-----------------------------------
You would be happy to see the optimizer chose to use the small index on Y to count the rows rather than full scanning the entire table T. However, let’s say that you drop that index and index column X instead:
SQL> drop index t_idx;
Index dropped.
SQL> create index t_idx on t(x);
Index created.
And if you then ran the query to count the rows once more, you would discover that the database does not, and in fact cannot, use your index:
SQL> explain plan for select count(*) from t;
Explained.

SQL> select * from table(dbms_xplan.display(null,null,'BASIC'));

------------------------------------
| Id  | Operation          | Name |
------------------------------------
|   0 | SELECT STATEMENT   |      |
|   1 |  SORT AGGREGATE    |      |
|   2 |  TABLE ACCESS FULL | T    |
------------------------------------
It full scanned the table. It had to full scan the table in order to count the rows. This is due to the fact that in an Oracle B*Tree index, index key entries that are entirely null are not made. That is, the index will not contain an entry for any row in the table T, such that all of the columns in the index are null. Since X is allowed to be null temporarily, the optimizer has to assume that X might be null and therefore would not be in the index on X. Hence, a count returned from the index might be different (wrong) from a count against the table.
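One way to confirm that the possibility of NULLs is the root cause is to add a predicate that explicitly excludes them. With WHERE X IS NOT NULL, every qualifying row must be present in the index, so the optimizer is free to use it again (depending on your release and statistics, you may see INDEX FULL SCAN or INDEX FAST FULL SCAN here):

SQL> explain plan for select count(*) from t where x is not null;
Explained.

SQL> select * from table(dbms_xplan.display(null,null,'BASIC'));

-----------------------------------
| Id  | Operation        | Name  |
-----------------------------------
|   0 | SELECT STATEMENT |       |
|   1 |  SORT AGGREGATE  |       |
|   2 |  INDEX FULL SCAN | T_IDX |
-----------------------------------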
We can see that if X had a nondeferrable constraint placed on it, this limitation is removed; that is, column X is in fact as good as column Y if the NOT NULL constraint is not deferrable:
SQL> alter table t drop constraint x_not_null;
Table altered.
SQL> alter table t modify x constraint x_not_null not null;
Table altered.
SQL> explain plan for select count(*) from t;
Explained.

SQL> select * from table(dbms_xplan.display(null,null,'BASIC'));

-----------------------------------
| Id  | Operation        | Name  |
-----------------------------------
|   0 | SELECT STATEMENT |       |
|   1 |  SORT AGGREGATE  |       |
|   2 |  INDEX FULL SCAN | T_IDX |
-----------------------------------
So, the bottom line is, only use deferrable constraints where you have an identified need to use them. They introduce subtle side effects that could cause differences in your physical implementation (nonunique vs. unique indexes) or in your query plans—as just demonstrated!