
Please note that the problems I face may not be the same ones that exist in your environment, so please test before applying the steps I followed to solve the problem.

Monday, 19 September 2011

Compression


table_compression
The table_compression clause is valid only for heap-organized tables. Use this clause to instruct the database whether to compress data segments to reduce disk use. This clause is especially useful in environments such as data warehouses, where the amount of insert and update operations is small. The COMPRESS keyword enables table compression. The NOCOMPRESS keyword disables table compression. NOCOMPRESS is the default.
When you enable table compression, Oracle Database attempts to compress data during direct-path INSERT operations when it is productive to do so. The original import utility (imp) does not support direct-path INSERT, and therefore cannot import data in a compressed format. You can specify table compression for the following portions of a heap-organized table:
  • For an entire table, in the physical_properties clause of relational_table or object_table
  • For a range partition, in the table_partition_description of the range_partitioning clause
  • For a list partition, in the table_partition_description of the list_partitioning clause
  • For the storage table of a nested table, in the nested_table_col_properties clause
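
As a minimal sketch of the first two options above (the table and column names are made up for illustration; adjust them to your own schema):

  -- Compression for an entire heap-organized table (physical_properties clause):
  CREATE TABLE sales_history (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER
  ) COMPRESS;

  -- Compression for individual range partitions (table_partition_description):
  CREATE TABLE sales_history_part (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER
  )
  PARTITION BY RANGE (sale_date)
  ( PARTITION p2010 VALUES LESS THAN (TO_DATE('01-01-2011','DD-MM-YYYY')) COMPRESS,
    PARTITION p2011 VALUES LESS THAN (MAXVALUE) NOCOMPRESS );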


    Does Oracle Data Pump use direct path load?

    Yes. This is one of the features that makes expdp and impdp faster than the conventional export and import utilities. Data Pump uses direct path loading only when certain conditions are met. Alternatively, the external table method can be used, in which the data is unloaded to flat files on the database server's file system; those flat files can then be used as a simple data source in SELECT statements.

    EXPDP will use DIRECT_PATH mode if:


    The structure of a table allows a Direct Path unload, i.e.:

         - The table does not have fine-grained access control enabled for SELECT.

         - The table is not a queue table.

         - The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.

         - The table does not contain encrypted columns.

         - The table does not contain a column of an evolved type that needs upgrading.

         - If the table has a column of datatype LONG or LONG RAW, then this column is the last column. 
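
For example, a straightforward Data Pump export of the table used in the compression test below; Data Pump decides on its own whether the conditions above allow a direct path unload. The connect string, directory object, and file names here are assumptions (expdp will prompt for the password):

  expdp system@orcl TABLES=nts_ea3.cust_customer DIRECTORY=dpump_dir DUMPFILE=cust_customer.dmp LOGFILE=cust_customer.log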

Compression in 10g vs shrink and move

1- I wanted to test the compression ratio when the table has been shrunk and has no free blocks, and the test result was promising, as shown below:

SQL> select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
              53.125

Elapsed: 00:00:00.15
SQL> alter table nts_ea3.CUST_CUSTOMER enable row movement;

Table altered.

SQL> alter table nts_ea3.CUST_CUSTOMER shrink space;

Table altered.
SQL> select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
              53.125
SQL> alter table nts_ea3.CUST_CUSTOMER move;

Table altered.

Elapsed: 00:00:03.18
SQL> select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
                  54

Elapsed: 00:00:00.08
SQL> alter table nts_ea3.CUST_CUSTOMER move compress;

Table altered.

Elapsed: 00:00:07.12
SQL>  select sum(bytes/1024/1024) from dba_segments where owner='NTS_EA3' and segment_name='CUST_CUSTOMER';

SUM(BYTES/1024/1024)
--------------------
                  28

Elapsed: 00:00:00.16
See how the segment size changed from about 53 MB to 28 MB. I will repeat the test on a massive amount of data, watch for performance changes, and let you know.
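
As a quick sanity check after the move, the data dictionary shows whether the compression attribute is now enabled (a sketch; the COMPRESSION column of DBA_TABLES is available in 10g and later):

SQL> select table_name, compression from dba_tables where owner='NTS_EA3' and table_name='CUST_CUSTOMER';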

2- Dumpfile Compression

Data Pump Export dump files can contain database metadata objects in a compressed format, allowing dump files to consume less operating system disk space. The metadata is automatically uncompressed during import.
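
A minimal sketch, assuming a directory object named dpump_dir (on 10g the COMPRESSION parameter accepts METADATA_ONLY or NONE; compressing table data as well requires 11g and the Advanced Compression option):

  expdp system@orcl FULL=y DIRECTORY=dpump_dir DUMPFILE=full_meta.dmp COMPRESSION=METADATA_ONLY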



Wednesday, 7 September 2011

The following query returns a list of tablespaces that contain XMLType tables:



select distinct p.tablespace_name
  from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
 where t.table_name = x.table_name
   and t.owner = x.owner
   and t.tablespace_name = p.tablespace_name
   and x.owner = u.username;

Monday, 4 July 2011

RMAN Catalog Tips (to be continued)

 What is the RMAN recovery catalog?
The RMAN recovery catalog holds RMAN repository data for one or more databases in a separate database schema, in addition to the data kept in the control files of those databases.

Creating a Recovery Catalog


Creating a recovery catalog is a three-step process: you must configure the database that will contain the recovery catalog, create the recovery catalog owner, and then create the recovery catalog itself.

Configuring the Recovery Catalog Database


When you use a recovery catalog, RMAN requires that you maintain a recovery catalog schema. The recovery catalog is stored in the default tablespace of the schema. Note that SYS cannot be the owner of the recovery catalog.

Decide which database you will use to install the recovery catalog schema, and also how you will back up this database. You should not install the catalog in the target database: this tactic defeats the purpose of the catalog. Also, decide whether to operate the catalog database in ARCHIVELOG mode, which is recommended.
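
If you decide to run the catalog database in ARCHIVELOG mode, the switch looks roughly like this when connected as SYSDBA to the catalog database (a sketch, not a full procedure):

SQL> ARCHIVE LOG LIST
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;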

Planning the Size of the Recovery Catalog Schema


You must allocate space to be used by the catalog schema. The size of the recovery catalog schema depends upon the number of databases monitored by the catalog, and the number and size of RMAN scripts stored in the catalog. The schema also grows as the number of archived redo log files and backups for each database grows.

For example, assume that the trgt database has 100 files, and you back up the database once a day, producing 50 backup sets containing 1 backup piece each. If you assume that each row in the backup piece table uses the maximum amount of space, then one daily backup will consume less than 170 KB in the recovery catalog. So, if you back up once a day for a year, then the total storage in this period is about 62 MB. Assume approximately the same amount for archived logs. Hence, the worst case is about 120 MB for a year for metadata storage.

For a more typical case in which only a portion of the backup piece row space is used, 15 MB for each year is a realistic estimate.

If you plan to register multiple databases in your recovery catalog, remember to add up the space required for each one based on the previous calculation to arrive at a total size for the default tablespace of the recovery catalog schema.

Allocating Disk Space for the Recovery Catalog Database


If you are creating your recovery catalog in an existing database, add enough space to hold the default tablespace of the recovery catalog schema. If you are creating a new database to hold your recovery catalog, then, in addition to the space for the recovery catalog schema itself, you must allow space for other files in the recovery catalog database:
  • SYSTEM tablespace
  • Temporary tablespaces
  • Rollback segment tablespaces
  • Online redo log files
This table describes the typical space required:
Type of Space                  Space Requirement
SYSTEM tablespace              90 MB
Temporary tablespace           5 MB
Rollback or undo tablespace    5 MB
Recovery catalog tablespace    15 MB for each database registered in the recovery catalog
Online redo logs               1 MB each (3 groups, each with 2 members)

 Please be careful to ensure that the recovery catalog and target databases do not reside on the same disk. If a disk containing both your recovery catalog and your target databases failed, your recovery process would be much more complicated. If possible, take other measures as well to eliminate common points of failure between your recovery catalog database and the databases you are backing up.
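
Based on the sizing above, the catalog tablespace (called tools in the example that follows) could be created with something like this; the datafile path and autoextend limits are assumptions:

  CREATE TABLESPACE tools
    DATAFILE '/u01/oradata/catdb/tools01.dbf' SIZE 15M
    AUTOEXTEND ON NEXT 5M MAXSIZE 500M;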

Creating the Recovery Catalog Owner


After choosing the recovery catalog database and creating necessary space, you are ready to create the owner of the recovery catalog and grant this user necessary privileges.

Assume the following background information for the instructions in the following sections:
  • User SYS with password oracle has SYSDBA privileges on the recovery catalog database catdb.
  • A tablespace called tools in the recovery catalog database catdb stores the recovery catalog. Note that to use an RMAN reserved word as a tablespace name, you must enclose it in quotes and put it in uppercase. (Refer to Oracle Database Recovery Manager Reference for a list of RMAN reserved words.)
  • A tablespace called temp exists in the recovery catalog database.
  • The database is configured in the same way as all normal databases, for example, catalog.sql and catproc.sql have successfully run.

To create the recovery catalog schema in the recovery catalog database:
  1. Start SQL*Plus and then connect with administrator privileges to the database containing the recovery catalog. For example, enter:
    CONNECT SYS/oracle@catdb AS SYSDBA
    
    
  2. Create a user and schema for the recovery catalog. For example, enter:
    CREATE USER rman IDENTIFIED BY cat
      TEMPORARY TABLESPACE temp 
      DEFAULT TABLESPACE tools 
      QUOTA UNLIMITED ON tools;
    
    
  3. Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the user with all privileges required to maintain and query the recovery catalog.
    SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;
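
The remaining step of the three-step process, creating the catalog itself and then registering a target database, would typically look like this (the connect strings are assumptions):

% rman CATALOG rman/cat@catdb
RMAN> CREATE CATALOG;
RMAN> EXIT;

% rman TARGET / CATALOG rman/cat@catdb
RMAN> REGISTER DATABASE;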
    


RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
This command ensures that read-only datafiles are not backed up again as long as backups of those files already exist and those backups meet the established retention criteria.
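
You can confirm the current setting with:

RMAN> SHOW BACKUP OPTIMIZATION;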