Oracle Best Practices III

1- Put your Database into ARCHIVELOG Mode
2- Use DBMS_STATS for statistic collection
3- Use Recovery Manager (RMAN) for Backup and Recovery needs
4- Resumable Statements and Space Allocation
5- Maintenance of Global Partitioned Indexes
6- Default Tablespaces
7- PL/SQL Bulk Operations
8- Locking Issues
9- Oracle System Event Triggers
10- Autonomous Transactions
11- Defining Application Services for Oracle Database 10g
12- Cost Based Optimizer Best Practices and Advised Strategy

8- Locking Issues

Oracle automatically implements row level locking as needed; we use FOR UPDATE (with the NOWAIT option when we do not want to wait) to explicitly acquire locks on rows. But this pessimistic method decreases concurrency and needs to be used carefully. Also, with NOWAIT the statement either locks all of the selected rows, or it throws an exception and returns control without acquiring any lock at all (even on rows that are available for locking). To illustrate, we open two sessions. In the first session, we lock the row with deptno 10 using FOR UPDATE.

-- session 1
SELECT * FROM dept
WHERE deptno = 10
FOR UPDATE;

    DEPTNO DNAME
---------- ------------------------------
        10 Administration

In the second session, we try to lock two rows (deptno 10 and 20) from the table dept using FOR UPDATE NOWAIT. The following statement throws an exception because one of the selected rows (deptno 10) is already locked by session 1.

-- session 2
SELECT * FROM dept
WHERE deptno IN (10,20)
FOR UPDATE NOWAIT;

SELECT * FROM dept
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified

With the WAIT option instead, the statement waits up to the given number of seconds; if session 1 ends its transaction with a commit or rollback within 3 seconds, no error message is returned:

SELECT * FROM dept
WHERE deptno IN (10,20)
FOR UPDATE WAIT 3;

Now we again try to lock two rows (deptno 10 and 20) from the table dept, but using the clause FOR UPDATE SKIP LOCKED instead of FOR UPDATE NOWAIT or WAIT. As you can see, the following statement has:

1. returned the control without throwing an exception
2. acquired lock on the row (i.e. deptno 20) which is available for locking
3. skipped the row (i.e. deptno 10) that has been locked already by session 1

SELECT * FROM dept
WHERE deptno IN (10,20)
FOR UPDATE SKIP LOCKED;

    DEPTNO DNAME
---------- ------------------------------
        20 Marketing

9- Oracle System Event Triggers

System event triggers were introduced in Oracle 8i and must be created by SYS (as SYSDBA).
With these triggers you can capture and log any error happening within the database,
or start SQL trace for some users after logon.
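
Here is a minimal sketch of both uses; the table err_log, its columns, and the traced schema HR are illustrative assumptions, not objects from an actual system:

-- log every server error into a table via an AFTER SERVERERROR trigger
CREATE TABLE err_log (
  log_date  DATE,
  username  VARCHAR2(30),
  err_stack VARCHAR2(2000)
);

CREATE OR REPLACE TRIGGER log_server_errors
AFTER SERVERERROR ON DATABASE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO err_log
  VALUES (SYSDATE, USER, DBMS_UTILITY.FORMAT_ERROR_STACK);
  COMMIT;
END;
/

-- start SQL trace for a chosen user right after logon
CREATE OR REPLACE TRIGGER trace_hr_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'HR' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET sql_trace = TRUE';
  END IF;
END;
/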

10- Autonomous Transactions

Before autonomous transactions, a COMMIT or ROLLBACK in any program in your session committed or rolled back all changes in that session; only one transaction was allowed per connection.

You can define a PL/SQL block to execute as an “autonomous transaction”: any changes made
within that block can be saved or reversed without affecting the outer (main) transaction.
With PRAGMA AUTONOMOUS_TRANSACTION, a recursive transaction started by the main transaction can commit or roll back independently of the main transaction. Data committed by the autonomous transaction is unaffected if the main transaction rolls back.

Examples:

-- trigger logging example with autonomous transaction
CREATE OR REPLACE TRIGGER trigger1
BEFORE INSERT ON table1 FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO log1 VALUES (:new.col1, :new.col2, SYSDATE);
  COMMIT;
END;
/

-- package logging example with autonomous transaction
CREATE OR REPLACE PACKAGE log IS
  PROCEDURE putline (code_in IN INTEGER, text_in IN VARCHAR2);
END;
/

CREATE OR REPLACE PACKAGE BODY log IS
  PROCEDURE putline (code_in IN INTEGER, text_in IN VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
  BEGIN
    -- logtab columns: code, text, created_on, created_by, changed_on, changed_by
    INSERT INTO logtab
    VALUES (code_in, text_in, SYSDATE, USER, SYSDATE, USER);
    COMMIT;
  EXCEPTION
    WHEN OTHERS THEN ROLLBACK;
  END;
END;
/
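
To see the effect, here is a small usage sketch: the row written by log.putline survives even though the caller rolls back.

BEGIN
  log.putline(1, 'before rollback');
  ROLLBACK; -- the autonomous INSERT above is already committed
END;
/

SELECT * FROM logtab; -- the logged row is still there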

11- Defining Application Services for Oracle Database 10g

Services are logical abstractions for managing workloads in Oracle Database 10g;
they represent groups of applications with common attributes, service level thresholds, and priorities.
Application functions can be divided into workloads identified by services.
For example, a CRM application can define a service for each responsibility.

The Database Resource Manager binds services to consumer groups and priorities. You can define separate services for high priority online users and lower priority internal reporting applications. Likewise, you can define gold, silver, and bronze services to prioritize the order in which requests are serviced for the same application.
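
As a hedged sketch of how this can be wired up with the 10g packages (the service and consumer group names below are assumptions for illustration, and the consumer group must already exist):

BEGIN
  -- create and publish a service for the high priority workload
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'CRM_GOLD',
                              network_name => 'CRM_GOLD');
  DBMS_SERVICE.START_SERVICE(service_name => 'CRM_GOLD');

  -- map sessions connecting through the service to a consumer group
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'CRM_GOLD',
    consumer_group => 'HIGH_PRIORITY_GROUP');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/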

12- Cost Based Optimizer Best Practices and Advised Strategy

When to gather statistics? Some candidate situations are as follows:

– After new database creations,
– After segments (tables, indexes, partitions, clusters, etc.) are created or altered,
– After hardware upgrades like CPUs or the I/O subsystem (gather system statistics),
– After migration from the Rule Based Optimizer (RBO) to the Cost Based Optimizer (CBO),
– After large amounts of data change (bulk operations, loads, purges, etc.),
– After new high/low values for keys are generated,
– As of 10g, optimizer statistics are automatically gathered with the job GATHER_STATS_JOB, which gathers statistics on all objects in the database that have missing or stale statistics,
– Also, as of 10g the data dictionary and fixed tables need statistics (see the sketch right after this list).
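
The system, dictionary, and fixed-table cases above can be covered with standard DBMS_STATS calls like these (default parameters, assuming sufficient privileges):

BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS;        -- after hardware changes
  DBMS_STATS.GATHER_DICTIONARY_STATS;    -- data dictionary, new need with 10g
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; -- fixed (X$) tables, new need with 10g
END;
/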

How much to gather? Using sample sizes is recommended:

– estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE is optimal for uniform sizes and distributions of data, but problems can occur with skewed data or if there are a lot of nulls in the column,
– Smaller sample sizes can be preferred for large objects to save time and resource consumption,
– Row sampling reads all of the data while sampling rows, whereas block sampling samples at the block level and reduces I/O (see the sketch after this list).
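
For the block sampling point, a minimal sketch (the table name is an assumption for illustration; block_sample is a standard gather_table_stats parameter):

BEGIN
  dbms_stats.gather_table_stats(ownname          => USER,
                                tabname          => 'BIG_FACT_TABLE',
                                estimate_percent => 5,
                                block_sample     => TRUE);
END;
/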

How to keep the statistics up to date? The methodology proposed in “A Practical Approach to Optimizer Statistics in 10g” by Andrew Holdsworth, Director of Real World Performance, is summarized as follows:

– The best practices mentioned in A.1. and A.2. will work well for 90% of your SQLs. In the initial post of this series, in part 3 “Use DBMS_STATS for statistic collection”, I discussed the advantages of DBMS_STATS over ANALYZE for statistics collection. So prefer the DBMS_STATS package over the ANALYZE command (we still need ANALYZE for backward compatibility, for VALIDATE STRUCTURE (which also populates INDEX_STATS), and for the LIST CHAINED ROWS option). Here is an example of the advised general stats gathering method:

BEGIN
  dbms_stats.gather_schema_stats(ownname          => USER,
                                 options          => 'GATHER AUTO',
                                 degree           => 4,
                                 estimate_percent => dbms_stats.auto_sample_size,
                                 cascade          => TRUE,
                                 method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                                 granularity      => 'AUTO');
  COMMIT;
END;
/

BEGIN
  dbms_stats.gather_table_stats(ownname          => USER,
                                tabname          => 'EMPLOYEES',
                                degree           => 4,
                                estimate_percent => dbms_stats.auto_sample_size,
                                cascade          => TRUE,
                                method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                                granularity      => 'AUTO');
  COMMIT;
END;
/

– But for 9% of your SQLs you may need to refine the stats collection method based on the data distribution: manual histogram management, setting statistics by hand, etc. (a small sketch follows).
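
For example, a histogram can be requested explicitly on a known-skewed column, or statistics can be set manually; the table and column names here are assumptions for illustration:

BEGIN
  -- build a histogram on a skewed column
  dbms_stats.gather_table_stats(ownname    => USER,
                                tabname    => 'ORDERS',
                                method_opt => 'FOR COLUMNS status SIZE 254');
  -- or set table statistics by hand when you know better than sampling
  dbms_stats.set_table_stats(ownname => USER,
                             tabname => 'ORDERS',
                             numrows => 1000000);
END;
/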

– And for the remaining 1% of your SQLs the sample size is irrelevant; you have to get assistance from options like hints, stored outlines, and SQL profiles on 10g. These options should be preferred only as a last resort.

– Do not fall into the parameter change trap:
step 1) a bad plan is discovered on Query A,
step 2) an optimizer parameter is changed to fix Query A,
step 3) this change causes a bad plan on Query B,
step 4) then another optimizer parameter change to fix Query B brings you back to step 1 🙂
Use the 10046 and 10053 events for detailed analysis of CBO decisions; changing the value of an optimizer parameter to fix a single query will only cause new problems.
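
For reference, the usual way to enable these events for your own session (the trace file is written to user_dump_dest):

ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
-- run the problematic query here, then:
ALTER SESSION SET EVENTS '10053 trace name context off';

-- level 12 includes bind variable and wait event details
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
ALTER SESSION SET EVENTS '10046 trace name context off';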

And that's all! ;D

 
