
Please note that the problems I faced may not be the same ones that exist in your environment, so please test before applying the steps I followed to solve a problem.

Sunday 11 December 2011

Investigating ITL waits resulting from ORA-00060: Deadlock detected

Hello,
I got the following error: ORA-00060: Deadlock detected.
Investigating the trace file, I found:

Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-0013001c-0002687a       149     369     X            135     517           S
TX-000c0004-000283db       135     517     X             99     531           S
TX-00090003-0005c060        99     531     X            149     369           S

The waiting sessions requesting the lock in mode S made me check whether the deadlock resulted from ITL waits, and here is the query I used to check:


select * from
(
   select
      DECODE(GROUPING(a.object_name), 1, 'All Objects', a.object_name) AS "Object",
      sum(case when a.statistic_name = 'ITL waits'         then a.value else null end) "ITL Waits",
      sum(case when a.statistic_name = 'buffer busy waits' then a.value else null end) "Buffer Busy Waits",
      sum(case when a.statistic_name = 'row lock waits'    then a.value else null end) "Row Lock Waits",
      sum(case when a.statistic_name = 'physical reads'    then a.value else null end) "Physical Reads",
      sum(case when a.statistic_name = 'logical reads'     then a.value else null end) "Logical Reads"
   from
      v$segment_statistics a
   where
      a.owner like upper('&owner')
   group by
      rollup(a.object_name)
) b
where (b."ITL Waits" > 0 or b."Buffer Busy Waits" > 0);

The owner in the above query is the schema that the statement in the trace file runs against.

Some other queries that may help:
SELECT t.OWNER, t.OBJECT_NAME, t.OBJECT_TYPE, t.STATISTIC_NAME, t.VALUE
FROM v$segment_statistics t
WHERE t.STATISTIC_NAME = 'ITL waits'
AND t.VALUE > 0;

select SID,TYPE,ID1,ID2,LMODE,REQUEST from v$lock                                                            
where type='TX';                                                                                            

Select
   s.sid SID,
   s.serial# Serial#,
   l.type type,
   ' ' object_name,
   lmode held,
   request request
from
   v$lock l,
   v$session s,
   v$process p
where
   s.sid = l.sid and
   s.username <> ' ' and
   s.paddr = p.addr and
   l.type <> 'TM' and
   (l.type <> 'TX' or l.type = 'TX' and l.lmode <> 6)
union
select
   s.sid SID,
   s.serial# Serial#,
   l.type type,
   object_name object_name,
   lmode held,
   request request
from
   v$lock l,
   v$session s,
   v$process p,
   sys.dba_objects o
where
   s.sid = l.sid and
   o.object_id = l.id1 and
   l.type = 'TM' and
   s.username <> ' ' and
   s.paddr = p.addr
union
select
   s.sid SID,
   s.serial# Serial#,
   l.type type,
   '(Rollback='||rtrim(r.name)||')' object_name,
   lmode held,
   request request
from
   v$lock l,
   v$session s,
   v$process p,
   v$rollname r
where
   s.sid = l.sid and
   l.type = 'TX' and
   l.lmode = 6 and
   trunc(l.id1/65536) = r.usn and
   s.username <> ' ' and
   s.paddr = p.addr
order by 5, 6;



If you get 0 ITL waits, then it is not an ITL wait problem; but if you find that it is an ITL problem, then you have to refer to the following note:
How To Solve High ITL Waits For Given Segments. [ID 464041.1]
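
If the segments do show ITL waits, the usual remedy is to raise INITRANS (and possibly PCTFREE) on the affected segments and rebuild them so that existing blocks pick up the new setting. A minimal sketch, with made-up object names and values:

-- Hypothetical table and index; size INITRANS to the expected number of concurrent transactions per block.
alter table scott.orders move initrans 10;
-- The move leaves the table's indexes unusable, so rebuild them (and raise their INITRANS too):
alter index scott.orders_pk rebuild initrans 10;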



Monday 5 December 2011

Oracle APEX installation problems

Today I was asked to install Oracle APEX, which was a very interesting task for me. I met a lot of problems related to the database and solved them, but the notable problems related to the APEX installation itself were resolved by the following Oracle notes:
1-Oracle APEX needs Oracle Text to be installed:
Manual installation, deinstallation of Oracle Text 10gR1 [ID 280713.1].
2-APEX reported a problem related to grants:
The WWV_FLOW_HELP PACKAGE Status is Invalid After Installing Oracle Text [ID 1335521.1].
I wrote this document to share the information with the community.

Friday 2 December 2011

Egypt Oracle User Group(EGOUG)

I was searching Egypt for a real community group that acts, runs seminars, and helps people and customers, but I really didn't find one. I wanted to start the idea, so I created the group, and I am now in the process of finding people in Egypt who are willing to work with me as volunteers in this non-profit group that aims to help the Oracle community in Egypt. If you are interested, please let me know.
The group link is:
http://www.linkedin.com/groups/Egypt-Oracle-User-Group-EGOUG-4201255?home=&gid=4201255&trk=anet_ug_hm

ORA-00600: internal error code, arguments: [qmxtrScalarIsRewritable:dft], [], [], [], [], [], [], [] and ORA-07445: exception encountered: core dump [kgghstfel()+4] [SIGSEGV] [Address not mapped to object] [0x000000018] [] []

I faced the following errors in the alert log, and the developers were not able to view DDL from JDeveloper or Toad for any user in the database.

The call stack from the trace file for the ORA-07445 error:


qmxtrxq()+464        CALL     qmxtrxq()            000000000 ?
                                                   FFFFFFFF7FFED090 ?
                                                   000000001 ?
                                                   FFFFFFFF7AD3B698 ?
kgghstfel()+4        PTR_CALL 0000000000000000     000000000 ? 106843EF0 ?
                                                   106843C10 ?
                                                   FFFFFFFF7D500200 ?
                                                   000000000 ?
                                                   FFFFFFFF7C73C000 ?
qmtLookup()+80       CALL     kgghstfel()          000000000 ? 3B26F7628 ?
                                                   000000000 ? 000000000 ?
                                                   000000002 ? 3B26CD118 ?
qmtRemoveRef()+76    CALL     qmtLookup()          10669E678 ? 3B26F7628 ?
                                                   3B26CD6D0 ? 000000029 ?
                                                   00010581D ?
                                                   FFFFFFFF7C026620 ?
qmxtrCleanRewriteCt  CALL     qmtRemoveRef()       10669E678 ? 3B2633A18 ?
x()+44                                             000105800 ? 10581D000 ?
                                                   3B26CD118 ? 000000000 ?
qmxtrxopn()+680      CALL     qmxtrCleanRewriteCt  FFFFFFFF7FFE5BC8 ?
                              x()                  FFFFFFFF7FFE5B18 ?
                                                   FFFFFFFF7FFED090 ?
                                                   10669E338 ? 10669E000 ?
                                                   00010669E ?
qmxtrxopt()+156      CALL     qmxtrxopn()          FFFFFFFF7B9C0118 ?
                                                   FFFFFFFF7FFED090 ?
                                                   000008000 ? 000000000 ?
                                                   002000000 ? 000004004 ?
qmxtrxq()+256        CALL     qmxtrxopt()          FFFFFFFF7B9C0118 ?
                                                   FFFFFFFF7FFED090 ?
                                                   000000000 ? 000000003 ?
                                                   FFFFFFFF7B9DFB20 ?
                                                   FFFFFFFF7B9DF718 ?
qmxtrxq()+464        CALL     qmxtrxq()            0FFFFBC00 ?
                                                   FFFFFFFF7FFED090 ?
                                                   000000000 ?
                                                   FFFFFFFF7B9C0118 ?
                                                   FFFFFFFF7B9DF718 ?
                                                   000000000 ?

After investigating, I found similar bugs on Metalink, and they were related to XML DB. I decided to remove XML DB from the database as it was not being used.

The steps to remove XML DB:
XDB Removal

The catnoqm.sql script drops XDB.

SQL> spool xdb_removal.log
SQL> set echo on;
SQL> connect / as sysdba
SQL> shutdown immediate;
SQL> startup
SQL> @?/rdbms/admin/catnoqm.sql
SQL> spool off;
You can refer to the following document:
 How to Deinstall and Reinstall XML Database (XDB) [ID 243554.1]


After removing XML DB I found many invalid objects in the SYS schema, which impacted the database. I resolved those issues by running the following scripts:


catalog.sql - creates the data dictionary views.
catproc.sql - creates the built-in PL/SQL procedures, packages, etc.


Running catalog and catproc will leave many packages/procedures invalid, so you will have to recompile them.
You can use utlrp.sql, which you can find in $ORACLE_HOME/rdbms/admin.
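
A sketch of that recompilation sequence, assuming you run it as SYSDBA against a standard $ORACLE_HOME layout (ideally with the database in restricted mode):

SQL> connect / as sysdba
SQL> spool recompile_dictionary.log
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql
SQL> spool off
SQL> select count(*) from dba_objects where owner = 'SYS' and status = 'INVALID';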




After finishing the above steps I found no invalid objects in the SYS schema, but the following error appeared:

Errors in file /app/home/oracle/admin/DEVDB/bdump/devdb_j000_29009.trc:
ORA-12012: error on auto execute of job 42568
ORA-04063: ORA-04063: package body "EXFSYS.DBMS_RLMGR_DR" has errors
ORA-06508: PL/SQL: could not find program unit being called: "EXFSYS.DBMS_RLMGR_
DR"
ORA-06512: at line 1
 has errors
Mon Nov 28 19:42:33 2011
Errors in file /app/home/oracle/admin/DEVDB/bdump/devdb_reco_11767.trc:
ORA-02019: connection description for remote database not found
Mon Nov 28 23:40:44 2011
I resolved this error by following the steps in this Oracle note:
How To Install and Uninstall Expression Filter Feature or EXFSYS schema [ID 258618.1]

The database is up and running now with no problems.










Thursday 20 October 2011

how to compress all the tables inside a tablespace

1-Calculate the sizes of all tables in the tablespace before compression:
select  sum(bytes/1024/1024/1024),segment_name,owner from dba_segments where tablespace_name='TIM_UAT' and segment_type='TABLE' group by segment_name,owner order by 1;
2-Generate a script to compress every table in the tablespace that is not already compressed:

 select 'Alter table '||owner||'.'|| table_name||' move compress;' from dba_tables where TABLESPACE_NAME='TIM_UAT' and compression='DISABLED';

3-Rebuild all the indexes that became unusable because of the move/compress. You can generate the rebuild statements with:

  select 'Alter index '||owner||'.'||index_name||' rebuild;' from dba_indexes where status='UNUSABLE';

4-Calculate the sizes of all the tables inside the tablespace after compression:
select  sum(bytes/1024/1024/1024),segment_name,owner from dba_segments where tablespace_name='TIM_UAT' and segment_type='TABLE' group by segment_name,owner order by 1;

Don't forget to rebuild all unusable indexes before handing the tablespace back to the application.

Kindly note that compression is most useful when you have a lot of redundant data; in that case the savings can be significant.
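
If you want to run the generated statements directly, you can spool them into a script first; a rough sketch (the spool file name is arbitrary):

SQL> set heading off feedback off pages 0 lines 200 trimspool on
SQL> spool compress_tim_uat.sql
SQL> select 'Alter table '||owner||'.'||table_name||' move compress;' from dba_tables where tablespace_name='TIM_UAT' and compression='DISABLED';
SQL> spool off
SQL> @compress_tim_uat.sql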



Monday 10 October 2011

trigger_to_catch_failed_login_attempts.sql


-- sample trigger to write diagnostic info to alert.log
-- for failed login attempts (ora-1017)

create or replace trigger logon_denied_to_alert
after servererror on database
 declare
  message varchar2(120);
  IP varchar2(15);
  v_db_user varchar2(30);
  v_os_user varchar2(80);
  v_module varchar2(50);
  v_action varchar2(50);
  v_pid varchar2(10);
  v_sid number;
  v_machine varchar2(100);
begin
   IF (ora_is_servererror(1017)) THEN

   -- get IP for remote or ospid for local connections:
      if sys_context('userenv','network_protocol') = 'TCP' then
        IP := sys_context('userenv','ip_address');
      else
        select distinct sid into v_sid from sys.v_$mystat;
        SELECT p.SPID into v_pid FROM V$PROCESS p, V$SESSION v
             WHERE p.ADDR = v.PADDR AND v.sid = v_sid;
        select machine into v_machine from v$session where sid = v_sid;
      end if;    

      v_db_user := sys_context('userenv','AUTHENTICATED_IDENTITY');
      v_os_user := sys_context('userenv','os_user');
     
      dbms_application_info.READ_MODULE(v_module,v_action);
       
         message:= to_char(sysdate,'Dy Mon dd HH24:MI:SS YYYY');
         sys.dbms_system.ksdwrt(2,message);

         message:= 'logon denied connecting as ' || v_db_user || ' from '|| v_machine || ' '||nvl(IP,v_pid)||' '||v_os_user||
         ' with '||v_module||' '||v_action;

         sys.dbms_system.ksdwrt(2,message);

-- remove comments from next line to let it hang for 5 minutes
-- to be able to do more diagnostics on the operating system:
--         sys.dbms_lock.sleep(300);
    end if;
end;
/
-- end trigger

Wednesday 5 October 2011

ORA-08104: this index object 75350 is being online built or rebuilt


SQL> alter index S.SYS_C007740 rebuild online tablespace s_data01;
alter index S.SYS_C007740 rebuild online tablespace s_data01
*
ERROR at line 1:
ORA-08104: this index object 75350 is being online built or rebuilt

SQL> conn / as sysdba
Connected.
SQL> select obj#,flags from ind$ where obj#=75350;

OBJ# FLAGS
---------- ----------
75350 2563
declare
   isclean boolean;
begin
   isclean := false;
   while isclean = false
   loop
      isclean := DBMS_REPAIR.ONLINE_INDEX_CLEAN(dbms_repair.all_index_id, dbms_repair.lock_wait);
      dbms_lock.sleep(10);
   end loop;
end;
/
PL/SQL procedure successfully completed.

SQL> select obj#,flags from ind$ where obj#=75350;

OBJ# FLAGS
---------- ----------
75350 2051

SQL> alter index S.SYS_C007740 rebuild online tablespace s_data01;

Index altered.

Monday 3 October 2011

ORA-10631: SHRINK clause should not be specified for this object

When you try to shrink a table and get this error, it is because the table has a function-based index.
How to check:

SELECT dt.owner, dt.table_name,
(CASE
WHEN NVL(ind.cnt, 0) < 1 THEN 'Y'
ELSE 'N'
END) AS can_shrink
FROM dba_tables dt,
(SELECT table_name, COUNT(*) cnt
FROM dba_indexes di
WHERE index_type LIKE 'FUNCTION-BASED%'
GROUP BY table_name) ind
WHERE dt.table_name = ind.table_name(+)
AND dt.table_name NOT LIKE 'AQ$%'
AND dt.table_name NOT LIKE 'BIN$%'
AND dt.owner = 'CSS_DATA'
ORDER BY 1, 2;

OWNER                          TABLE_NAME                     C
------------------------------ ------------------------------ -
CSS_DATA                       ACTION                         Y
CSS_DATA                       ACTIONLOG                      N
CSS_DATA                       ACTIONLOG_ARCH1                Y
CSS_DATA                       ACTIONLOG_OLD                  Y
CSS_DATA                       ACTIONNAME                     Y
CSS_DATA                       ACTIONPARAMETER                Y
CSS_DATA                       ACTIONPARAMETER_ARCH1          Y
CSS_DATA                       ACTIONPARAMETER_OLD            Y
CSS_DATA                       ACTIONTYPE                     Y
CSS_DATA                       ACTION_PARAMS                  Y
CSS_DATA                       ACTION_PARAMS_ARCH1            Y

OWNER                          TABLE_NAME                     C
------------------------------ ------------------------------ -
CSS_DATA                       ADJUSTMENTREASONS              Y
CSS_DATA                       BATCH                          Y
CSS_DATA                       BILLING_BILL_CYCLE             Y
CSS_DATA                       BILLING_PRICE_GROUP            Y
CSS_DATA                       BILLING_SERVICES               Y
CSS_DATA                       BILLING_SERVICES_PARAMS        Y
CSS_DATA                       BILLING_SERVICES_PARAM_VALUES  Y
CSS_DATA                       BILLING_SERVICES_SMS           Y
CSS_DATA                       BILL_SERV_PRM_CALC_TYP         Y
CSS_DATA                       BUNDLED_MIGRATION              Y
CSS_DATA                       BUNDLED_MIGRATION_BUNDLES      Y

select INDEX_NAME, INDEX_TYPE
  from dba_indexes
  where table_owner = 'CSS_DATA'
    and table_name = 'ORDER_INSTANCE';

INDEX_NAME                     INDEX_TYPE
------------------------------ ---------------------------
PK_ORDER_INSTANCE              NORMAL
ORDER_INSTANCE_MSISDN_IDX      NORMAL
OINSTANCE_JOB_ID               NORMAL
ORDER_INSTANCE_IDX             FUNCTION-BASED NORMAL
Solution:
1- Drop the function-based index.
2- Shrink the table.
3- Recreate the index on the table.
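
A sketch of those steps for the ORDER_INSTANCE table above; the CREATE INDEX expression is only a placeholder, since the real function-based definition is not shown here:

-- 1- Drop the function-based index (name taken from the listing above).
drop index CSS_DATA.ORDER_INSTANCE_IDX;
-- 2- Shrink the table (row movement must be enabled first).
alter table CSS_DATA.ORDER_INSTANCE enable row movement;
alter table CSS_DATA.ORDER_INSTANCE shrink space;
-- 3- Recreate the index; UPPER(order_status) is a made-up expression standing in for the real one.
create index CSS_DATA.ORDER_INSTANCE_IDX on CSS_DATA.ORDER_INSTANCE (upper(order_status));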



Sunday 2 October 2011

ORA-00600: internal error code, arguments: [kdiblFlushStream:iStreamEndRange], [], [], [], [], [], [], []

Most probably this is due to index corruption.
To confirm it, run:
analyze table schema.tablename validate structure cascade;
ERROR at line 1:
ORA-01499: table/index cross reference failure - see trace file

You need to drop this index and recreate it: go to the trace file, find the object id of the corrupt index, then drop and recreate it.
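
The trace file normally reports the object id of the failing index, so you can map it back to a name before dropping it; a sketch, where 12345 is just a placeholder for the id taken from the trace:

select owner, object_name, object_type from dba_objects where object_id = 12345;
-- Capture the definition before dropping it, then drop the index and re-run the returned CREATE INDEX.
-- ORDERS_IDX and SCOTT are placeholders for the name and owner returned above.
select dbms_metadata.get_ddl('INDEX', 'ORDERS_IDX', 'SCOTT') from dual;
drop index scott.orders_idx;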

Monday 19 September 2011

Compression


table_compression
The table_compression clause is valid only for heap-organized tables. Use this clause to instruct the database whether to compress data segments to reduce disk use. This clause is especially useful in environments such as data warehouses, where the amount of insert and update operations is small. The COMPRESS keyword enables table compression. The NOCOMPRESS keyword disables table compression. NOCOMPRESS is the default.
When you enable table compression, Oracle Database attempts to compress data during direct-path INSERT operations when it is productive to do so. The original import utility (imp) does not support direct-path INSERT, and therefore cannot import data in a compressed format. You can specify table compression for the following portions of a heap-organized table:
  • For an entire table, in the physical_properties clause of relational_table or object_table
  • For a range partition, in the table_partition_description of the range_partitioning clause
  • For a list partition, in the table_partition_description of the list_partitioning clause
  • For the storage table of a nested table, in the nested_table_col_properties clause
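
A small sketch of the table-level and partition-level forms listed above (object names are invented):

-- Whole heap table compressed:
create table sales_archive compress as select * from sales where sale_date < date '2011-01-01';

-- Per-partition control in a range-partitioned table:
create table sales_hist (
  sale_id   number,
  sale_date date
)
partition by range (sale_date) (
  partition p_2010 values less than (date '2011-01-01') compress,
  partition p_2011 values less than (date '2012-01-01') nocompress
);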


    Does Oracle Data Pump use direct path load?

    Yes. This is one of the features that makes expdp/impdp faster than the conventional export and import utilities. For Data Pump to use direct path loading, certain conditions must be met. Alternatively, Data Pump can use the external table method, in which the data is unloaded to flat files on the database server's file system; those files can then be used as a simple data source in a SELECT statement via an external table.

    EXPDP will use DIRECT_PATH mode if:


    The structure of a table allows a Direct Path unload, i.e.:

         - The table does not have fine-grained access control enabled for SELECT.

         - The table is not a queue table.

         - The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.

         - The table does not contain encrypted columns.

         - The table does not contain a column of an evolved type that needs upgrading.

         - If the table has a column of datatype LONG or LONG RAW, then this column is the last column. 

Compression in 10g vs shrink and move

1-I wanted to test what the compression ratio would be if the table is shrunk first and has no free blocks, and the test result was promising, as shown below:

SQL> select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
              53.125

Elapsed: 00:00:00.15
SQL> alter table nts_ea3.CUST_CUSTOMER enable row movement;

Table altered.

SQL> alter table nts_ea3.CUST_CUSTOMER shrink space;

Table altered.
SQL> select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
              53.125
SQL> alter table nts_ea3.CUST_CUSTOMER move;

Table altered.

Elapsed: 00:00:03.18
select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
                  54

Elapsed: 00:00:00.08
SQL> alter table nts_ea3.CUST_CUSTOMER move compress;

Table altered.

Elapsed: 00:00:07.12
SQL>  select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
                  28

Elapsed: 00:00:00.16
See how the segment size dropped from about 53 MB to 28 MB. I will repeat the test on a massive amount of data, watch for performance changes, and let you know.

2- Dumpfile Compression

Data Pump Export dump files can contain database metadata objects in a compressed format, allowing dump files to consume less operating system disk space. This metadata is automatically uncompressed during Import.
This feature reduces the amount of disk space consumed by dump files.
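
On 10g this is requested with the COMPRESSION parameter of expdp (values METADATA_ONLY or NONE); a hedged example, where the directory object, schema and file name are made up:

expdp system DIRECTORY=dp_dir DUMPFILE=hr_meta.dmp SCHEMAS=hr COMPRESSION=METADATA_ONLY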



Wednesday 7 September 2011

A query that returns the list of tablespaces containing XMLTypes:



select distinct p.tablespace_name from dba_tablespaces p, 
  dba_xml_tables x, dba_users u, all_all_tables t where
  t.table_name=x.table_name and t.tablespace_name=p.tablespace_name
  and x.owner=u.username;

Monday 4 July 2011

RMAN Catalog tips (to be continued)

 What is the RMAN recovery catalog?
The RMAN recovery catalog holds RMAN repository data for one or more databases in a separate database schema, in addition to the repository data kept in the control files of those databases.

Creating a Recovery Catalog


Creating a recovery catalog is a three-step process: you must configure the database that will contain the recovery catalog, create the recovery catalog owner, and then create the recovery catalog itself.

Configuring the Recovery Catalog Database


When you use a recovery catalog, RMAN requires that you maintain a recovery catalog schema. The recovery catalog is stored in the default tablespace of the schema. Note that SYS cannot be the owner of the recovery catalog.

Decide which database you will use to install the recovery catalog schema, and also how you will back up this database. You should not install the catalog in the target database: this tactic defeats the purpose of the catalog. Also, decide whether to operate the catalog database in ARCHIVELOG mode, which is recommended.

Planning the Size of the Recovery Catalog Schema


You must allocate space to be used by the catalog schema. The size of the recovery catalog schema depends upon the number of databases monitored by the catalog, and the number and size of RMAN scripts stored in the catalog. The schema also grows as the number of archived redo log files and backups for each database grows.

For an example, assume that the trgt database has 100 files, and you back up the database once a day, producing 50 backup sets containing 1 backup piece each. If you assume that each row in the backup piece table uses the maximum amount of space, then one daily backup will consume less than 170 KB in the recovery catalog. So, if you back up once a day for a year, then the total storage in this period is about 62 MB. Assume approximately the same amount for archived logs. Hence, the worst case is about 120 MB for a year for metadata storage.

For a more typical case in which only a portion of the backup piece row space is used, 15 MB for each year is a realistic estimate.

If you plan to register multiple databases in your recovery catalog, remember to add up the space required for each one based on the previous calculation to arrive at a total size for the default tablespace of the recovery catalog schema.

Allocating Disk Space for the Recovery Catalog Database


If you are creating your recovery catalog in an already-existing database, add enough room to hold the default tablespace of the recovery catalog schema. If you are creating a new database to hold your recovery catalog, then, in addition to the space for the recovery catalog schema itself, you must allow space for other files in the recovery catalog database:
  • SYSTEM tablespace
  • Temporary tablespaces
  • Rollback segment tablespaces
  • Online redo log files
This table describes the typical space required:
Type of Space                   Space Requirement
-----------------------------   ---------------------------------------------------------
SYSTEM tablespace               90 MB
Temp tablespace                 5 MB
Rollback or undo tablespace     5 MB
Recovery catalog tablespace     15 MB for each database registered in the recovery catalog
Online redo logs                1 MB each (3 groups, each with 2 members)

 Please be careful to ensure that the recovery catalog and target databases do not reside on the same disk. If a disk containing both your recovery catalog and your target databases failed, your recovery process would be much more complicated. If possible, take other measures as well to eliminate common points of failure between your recovery catalog database and the databases you are backing up.

Creating the Recovery Catalog Owner


After choosing the recovery catalog database and creating necessary space, you are ready to create the owner of the recovery catalog and grant this user necessary privileges.

Assume the following background information for the instructions in the following sections:
  • User SYS with password oracle has SYSDBA privileges on the recovery catalog database catdb.
  • A tablespace called tools in the recovery catalog database catdb stores the recovery catalog. Note that to use an RMAN reserved word as a tablespace name, you must enclose it in quotes and put it in uppercase. (Refer to Oracle Database Recovery Manager Reference for a list of RMAN reserved words.)
  • A tablespace called temp exists in the recovery catalog database.
  • The database is configured in the same way as all normal databases, for example, catalog.sql and catproc.sql have successfully run.

To create the recovery catalog schema in the recovery catalog database:
  1. Start SQL*Plus and then connect with administrator privileges to the database containing the recovery catalog. For example, enter:
    CONNECT SYS/oracle@catdb AS SYSDBA
    
    
  2. Create a user and schema for the recovery catalog. For example, enter:
    CREATE USER rman IDENTIFIED BY cat
      TEMPORARY TABLESPACE temp 
      DEFAULT TABLESPACE tools 
      QUOTA UNLIMITED ON tools;
    
    
  3. Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the user with all privileges required to maintain and query the recovery catalog.
    SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;
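  4. Create the recovery catalog itself and register your target database from the RMAN client. This last part is a sketch using the rman user created above; the connect strings are just examples:
    % rman CATALOG rman/cat@catdb
    RMAN> CREATE CATALOG;

    % rman TARGET / CATALOG rman/cat@catdb
    RMAN> REGISTER DATABASE;
    RMAN> REPORT SCHEMA;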
    


RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
With this setting, read-only datafiles will not be backed up as long as backups of those files already exist and those backups meet the established retention criteria.

Thursday 19 May 2011

Find bind variable value

Using bind variables improves SQL query performance. Suppose you have a SQL query that executes several times and only the literal values change; in such a case it is recommended to use bind variables rather than hard-coded values. The impact on query performance can be significant when you use bind variables.

But a problem arises when you try to collect the full query text from the database: the values of the bind variables are not present in the query text. You may need those values when you are working on that SQL query, especially when you try to tune it.

Here are two queries that help you find the value of a bind variable. The only thing you need is the SQL ID.

1. If your query was run within 30 minutes:

SELECT NAME,POSITION,DATATYPE_STRING,VALUE_STRING
FROM v$sql_bind_capture WHERE sql_id='';


2. If your query was run more than 30 minutes ago. For this you also need a snapshot ID; by default, Oracle stores snapshots for the last 7 days:

SELECT NAME,POSITION,DATATYPE_STRING,VALUE_STRING
FROM DBA_HIST_SQLBIND WHERE SQL_ID='' and SNAP_ID='';


Note: v$sql_bind_capture and DBA_HIST_SQLBIND store only the bind values used in the WHERE clause, and they do not store timestamp, LOB, or CLOB values.


Monday 16 May 2011

SYSTEM and SYSAUX Tablespaces


Each Oracle Database must contain a SYSTEM tablespace and a SYSAUX tablespace, which are
automatically created when the database is created. The system default is to create a smallfile
tablespace. You can also create bigfile tablespaces, which enable the Oracle database to manage ultra
large files (up to 8 exabytes in size).
A tablespace can be online (accessible) or offline (not accessible). The SYSTEM tablespace is always
online when the database is open. It stores tables that support the core functionality of the database,
such as the data dictionary tables.
The SYSAUX tablespace is an auxiliary tablespace to the SYSTEM tablespace. The SYSAUX
tablespace stores many database components, and it must be online for the correct functioning of all
database components.
Note: The SYSAUX tablespace may be taken offline for performing tablespace recovery, whereas
this is not possible in the case of the SYSTEM tablespace. Neither of them may be made read-only.
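
To see which components live in SYSAUX and how much space each one uses, you can query V$SYSAUX_OCCUPANTS, for example:

select occupant_name, schema_name, space_usage_kbytes
from v$sysaux_occupants
order by space_usage_kbytes desc;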

Monday 28 March 2011

Kill all user connected sessions

To Kill the sessions:
DECLARE -- declare variables
CURSOR c1 IS
select sid, serial# from v$session
where username='ADPE';  -- declare cursor
-- declare record variable that represents a row fetched
kill_it c1%ROWTYPE; -- declare variable with %ROWTYPE attribute
BEGIN
-- open the explicit cursor c1
OPEN c1;
LOOP
FETCH c1 INTO kill_it; -- retrieve record
EXIT WHEN c1%NOTFOUND;
BEGIN
EXECUTE IMMEDIATE 'alter system kill session '''||
kill_it.sid||', '||kill_it.serial#||'''';
END;
END LOOP;
CLOSE c1;
END;
/

To disconnect the sessions:

DECLARE -- declare variables
CURSOR c1 IS
select sid, serial# from v$session
where username='SSS';  -- declare cursor
-- declare record variable that represents a row fetched
kill_it c1%ROWTYPE; -- declare variable with %ROWTYPE attribute
BEGIN
-- open the explicit cursor c1
OPEN c1;
LOOP
FETCH c1 INTO kill_it; -- retrieve record
EXIT WHEN c1%NOTFOUND;
BEGIN
EXECUTE IMMEDIATE 'alter system disconnect session '''||
kill_it.sid||', '||kill_it.serial#||''' immediate';
END;
END LOOP;
CLOSE c1;
END;
/
Please note that DISCONNECT SESSION with the IMMEDIATE option is more drastic than KILL SESSION: it terminates the server process itself instead of just marking the session to be cleaned up.

Thursday 10 March 2011

comprehensive 4-step approach to implementing data masking

 Find: This phase involves identifying and cataloging sensitive or regulated data across the entire enterprise. Typically carried out by business or security analysts, the goal of this exercise is to come up with the comprehensive list of sensitive data elements specific to the organization and discover the associated tables and columns across enterprise databases that contain the sensitive data.
 Assess: In this phase, developers or DBAs in conjunction with business or security analysts identify the masking algorithms that represent the optimal techniques to replace the original sensitive data. Developers can leverage the existing masking library or extend it with their own masking routines.
 Secure: This and the next steps may be iterative. The security administrator executes the masking process to secure the sensitive data during masking trials. Once the masking process has completed and has been verified, the DBA then hands over the environment to the application tester
 Test: In the final step, the production users execute application processes to test whether the resulting masked data can be turned over to the other non-production users. If the masking routines need to be tweaked further, the DBA restores the database to the pre-masked state, fixes the masking algorithms and re-executes the masking process.

Sunday 6 March 2011

Oracle Data Masking


Who is the best at disguising themselves with a mask?   Zorro?  Hannibal from the A-Team?   Ethan Hunt from Mission Impossible?
 From this brief list, you can see that the quality of masks varies enormously.  Just because someone wears a mask, they won’t always be fully disguised.    The same principle applies when applying a mask to your data.
Data masking is where techniques are used to disguise the original data.  Some examples of these techniques are shown in the listing below:
a. Remove a substring from the data item.
b. Add a fixed number or substring to the data item.
c. Delete the data item, or replace it with a NULL.
d. Process the data item, or a subset, using a pre-defined procedure or function.
e. Add a randomly generated number, or replace an existing number with a random number.  The random number could also be within a specified percentage higher or lower than the original value.
f. Re-arrange various components of the data item.
g. Replace the data item with data from a lookup column in another table.

The actual masking process could use one of these or similar techniques, or it could use a combination of different techniques.
Note:  A “data item” is a column within a database table that has been identified as containing “sensitive” data.
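
As a tiny illustration of technique (e) above, here is a hedged sketch that keeps the prefix of a phone number and randomises its last six digits (the table and column names are invented, and it assumes the values are longer than six digits):

update customers
   set phone_number = substr(phone_number, 1, length(phone_number) - 6)
                      || lpad(to_char(trunc(dbms_random.value(0, 1000000))), 6, '0');
commit;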
The main reason for masking the data is security.   Some people who access your data will not really need to see all the data in its original format.   An example of this could be an external customer, or where you publish information to a website. 
Another scenario would be where the security restrictions for your test, training and development environments are not as robust as those for your production environment.  If these databases use a copy of production data, then they could be candidates for data masking.
Unfortunately, data masking is not always as straightforward as it might first appear.  You should bear in mind the following: 
1. You cannot disguise the data so much that it becomes unusable - masked data still needs to conform to application formatting requirements.
(e.g.  if you mask credit card numbers, they will still need to comply with any checksums designed to check that the numbers are valid).  
2. You need to have some way of tracing the data back to its original format.
If a developer or tester finds a problem, they need to be able to work out whether the issue was caused by application code, or an anomaly in the data.
3. Be aware of the impact on your reporting systems
If you have a column of sales figures and you just re-arrange the numbers, then your totals and averages could still be the same, but any drill-downs in your data could become inaccurate.
If you only have a few figures in a particular column then someone could easily work out what the original order was.
Ideally, you should try to maintain the sort order, length and data types of the original data.
If substituting items such as people’s first names, then double-check that you’re replacing female names with female names and male names with male names.
4. The referential integrity of your data needs to be maintained.
As a basic example - You have a table that stores details of cows in the FARM_ANIMAL table. This table is linked to the FARMER table via a foreign key.  If you mask the ID column of the FARMER table, then make sure that you use the same format mask for the foreign key column of the FARM_ANIMAL table - otherwise the farmer will lose his cows.
Any related data items that may not have a formal foreign key constraint will also need to have the same format mask applied. 
Check for any similar data in remote databases.  If data in these remote columns is not masked in a similar fashion,  then people could use these as a reference to figure out the original values of the masked items.
5. Be aware of the performance implications of masking your data.
If your database contains a large amount of data inconsistencies, then you may have to exclude a large amount of data from the format mask - with the associated performance impact, whilst you deal with these values.
In this situation, it would be useful to undertake a data cleansing exercise before trying to mask the data.  Otherwise, you’ll be left with lots of unmasked data which could make it simple for someone to work out what format mask you have applied to the column. In the future you should probably think about restricting the acceptable data entry formats (e.g. by using check constraints etc.)
It is best to plan in advance which items you need to mask and what format mask will be applied to each data item.   If you try to mask everything in the database, then that could take a very long time.
6. If any tables or data items are not essential outside of the production environment, then consider deleting them, truncating them, or replacing them with NULLs.
Oracle offers its own data masking tool in the form of the Oracle Enterprise Manager Data Masking Pack, which is part of Grid Control 10.2.0.4 or later.   There is a useful tutorial for this tool at http://www.oracle.com/technology/obe/11gr1_db/security/datamask/datamask.htm. 
Data Masking is a very large topic which can’t really be covered in a blog, but when carrying out any masking, you should always bear in mind the advice of Joseph Fieman from Gartner, when he wrote:
  ”Data-masking technologies should satisfy a simple, yet strict rule: The application that runs against masked data performs as if masked data is real.”
Make sure that your mask is a good one, otherwise people will be able to see behind it.
References:
http://www.oracle.com/enterprise_manager/data-masking.html
http://www.oracle.com/newsletters/information-indepth/database-insider/jan-09/index.html
http://oracle.com/database/security
http://www.orafaq.com/papers/data_sanitization.pdf
http://www.darkreading.com/database_security/security/appsecurity/showArticle.jhtml?articleID=222000741

Wednesday 16 February 2011

Oracle grid infrastructure bug on Linux redhat 4 reported in SR 3-2845758491 to oracle support


While I was testing an Oracle RAC installation on Red Hat 4, I found the following error, which caused the installation to abort:
ACFS & ADVM is not supported on redhat4
Also, according to the Oracle certification matrix, RAC 11g Release 2 is certified on Red Hat release 4 while ACFS and ADVM are not supported.
I reported it as a bug in SR #3-2845758491.
Note: this issue should not appear on Red Hat 5, as ACFS & ADVM are supported there.