Irrelevant thoughts of an oracle DBA

28 November 2007

oracle datapump, because there is no better one

Filed under: infrastructure — dhoogfr @ 0:39

In the last couple of days I have been busy consolidating different databases into two big ones.
One of the challenges was that in the original databases there were a lot of objects in the “users” and “indx” tablespaces (and the other tablespaces had no real naming convention either). To make it more fun, a high number of tables used lob datatypes.
(When you have a table with a lob datatype and one of the referenced tablespaces does not exist, the create table statement fails instead of falling back to the default tablespace of the schema owner. This is because imp cannot determine which tablespace clause – that of the table itself or that of the lob – caused the error.)

In the old pre-10g ages, I would have had the choice between a) creating tablespaces with the same names in the consolidated database and moving the objects to new tablespaces later on, or b) precreating the tables with lob datatypes and then doing the import.
Both options would be time consuming and error prone, not to mention that the import utility was not very fast when it came to importing lobs.
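As an illustration of option b), the precreate route would mean running DDL like the following before the import (the schema, table and column names here are made up for the example):

```sql
-- hypothetical example of precreating a table with a lob column,
-- so that imp only has to load the data into it.
-- if either referenced tablespace is missing, the whole
-- CREATE TABLE fails, which is exactly the problem described above
CREATE TABLE app_owner.documents (
  id   NUMBER PRIMARY KEY,
  body CLOB
)
TABLESPACE users
LOB (body) STORE AS (TABLESPACE indx);
```

Multiply this by a few hundred tables and it is clear why this was not an attractive option.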

Fortunately, the databases to be consolidated were all 10gR2 databases, so I had a third option: use oracle datapump to move the schemas to the new database.
With datapump you have this very nice parameter, remap_tablespace, which allows you to simply specify the old and the new tablespace names; that is all you have to do to change tablespace names during the import.
And to make it really fun: you can use the network_link parameter to specify a database link, which will then be used to transport the data from the source db to the target db (bye bye intermediate file).
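As a sketch, the whole consolidation import can then be driven by a parameter file like this (the schema, database link and tablespace names below are invented examples, not the real ones):

```text
# import.par -- hypothetical impdp parameter file
SCHEMAS=app_owner
# database link in the target db, pointing to the source db
NETWORK_LINK=source_db_link
# format is old_tablespace:new_tablespace
REMAP_TABLESPACE=users:app_data
REMAP_TABLESPACE=indx:app_index
```

Started on the target database with something like impdp system parfile=import.par; thanks to network_link there is no dump file to create, copy or clean up.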

Of course there are some drawbacks.
First of all: it is a server based tool (or better said, expdp/impdp are tools written around server packages). That is, all processing happens on the server and the dmp file will be written to/read from the server itself.
While this gives you the possibility to start the expdp or impdp job, disconnect and then reattach from home, having to work with files on the server itself may pose a (security) problem. On *nix environments NFS may provide a solution, as people can just mount the NFS volume on their local PCs and work with the dmp files as if they were stored locally.
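If the dump files should live on such an NFS mount, a directory object pointing at it is all that is needed on the server side (the path, directory name and grantee below are just example values):

```sql
-- hypothetical directory object for datapump dump and log files
-- on an NFS mount that is also visible from the DBAs' workstations
CREATE DIRECTORY dp_dir AS '/nfs/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO app_owner;
```

The expdp/impdp jobs then reference it with the directory parameter.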
A second disadvantage is that datapump may be slower than exp/imp in case you have just a small dataset, but a high number of (empty or almost empty) tables.
What I also found annoying is that the flashback_time parameter (which more or less replaces the old consistent parameter) does not accept a plain timestamp value. This means that you can't simply specify flashback_time="systimestamp" to tell datapump you want all objects to be consistent with each other, but instead you have to specify the following: flashback_time="to_timestamp(to_char(systimestamp, 'DD/MM/YYYY HH24:MI:SS.FF'), 'DD/MM/YYYY HH24:MI:SS.FF')".
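Put in a parameter file (which also avoids any shell quoting trouble around the double quotes), the workaround looks like this:

```text
# consistent export "as of now" -- the nested to_timestamp/to_char
# round trip is only there because flashback_time does not take
# systimestamp directly
FLASHBACK_TIME="to_timestamp(to_char(systimestamp, 'DD/MM/YYYY HH24:MI:SS.FF'), 'DD/MM/YYYY HH24:MI:SS.FF')"
```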

Oh, and of course there is a bug to look out for: 5874989, which says that data corruption can occur during a datapump import with character set conversion.
Luckily, a patch exists.

27 November 2007

linux / unix kernel parameters follow-up

Filed under: infrastructure — dhoogfr @ 21:21

This is a follow-up note on my prior linux / unix kernel parameters post.

Yesterday I received some feedback from oracle about why the core dump has a 4 GB limit and why you should therefore limit the shmmax to a maximum of 4 GB:

“Normally on x86_64 a coredump is not limited to 4 GB, but a coredump is affected in different levels: namely memory, I/O and disk.
Most of the limitations are due to the filesystem used.
If the coredump routine in the kernel is not using the right kernel calls or the filesystem cannot handle files bigger than 4 GB (like reiserfs or some old ext2),
then the coredump will be affected.

For this reason, setting shmmax < 4GB on x86_64 is more a recommendation.”

After reading this, I closed the service request…

24 November 2007

October cpu patch

Filed under: infrastructure — dhoogfr @ 9:55

Yesterday I installed the October cpu patch on one of our clients' systems.
Now, I don't have much experience with cpu patches, but it looked simple enough. Just download the latest version of opatch, adjust the path settings and then use the napply command to apply the patch:

[oracle@xxxxxxxxxx:/data/software/oracle/cpu/OCT2007/6394981]$ opatch napply -skip_subset -skip_duplicate
Invoking OPatch

Oracle interim Patch Installer version
Copyright (c) 2007, Oracle Corporation.  All rights reserved..

Setting N-Apply implicit patch base-directory to /data/software/oracle/cpu/OCT2007/6394981
UTIL session

Oracle Home       : /opt/oracle/product/10.2.0/db_1
Central Inventory : /opt/oracle/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    :
OUI version       :
OUI location      : /opt/oracle/product/10.2.0/db_1/oui
Log file location : /opt/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch2007-11-23_11-05-24AM.log

Unable to lock Central Inventory.  OPatch will attempt to re-lock.
Do you want to proceed? [y|n]
User Responded with: Y
OPatch will sleep for few seconds, before re-trying to get the lock...

At this point opatch hung, unable to lock the inventory.
After some searching, I found that the locks directory in the oraInventory directory was owned not by oracle but by oraclecl:

[oracle@xxxxxxxxxx:/opt/oracle/oraInventory]$ ls -l
total 30
drwxrwx---   2 oracle   oinstall     512 Nov 14 17:09 Contents
drwxrwx---   2 oracle   oinstall     512 Nov 11 20:25 ContentsXML
-rwxrwx---   1 oracle   oinstall      38 Nov 14 18:05 install.platform
drwxr-xr-x   2 oraclecl oinstall     512 Nov 14 18:11 locks
drwxrwx---   3 oracle   oinstall    1536 Nov 23 11:11 logs
-rwxrwx---   1 oracle   oinstall      59 Nov 11 20:25 oraInst.loc
-rwxrwx---   1 oracle   oinstall     299 Nov 14 18:02
-rwxrwx---   1 oracle   oinstall    1265 Nov 11 20:25
drwxrwx---   2 oracle   oinstall     512 Nov 11 20:14 oui
-rwxrwx---   1 oracle   oinstall    3336 Nov 14 18:02 sessionContext.ser

[oracle@xxxxxxxxxx:/opt/oracle/oraInventory]$ cd locks
[oracle@xxxxxxxxxx:/opt/oracle/oraInventory/locks]$ ls -l
total 0
-rw-r--r--   1 oraclecl oinstall       0 Nov 14 18:11 writer.lock

Now, this oraclecl user had been used to install an oracle client (the system is a test system, where different parts of the application are installed on the same server), and it used the same oracle_base and inventory location as the oracle database itself.
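The fix itself boils down to a one-liner, run as root or as another user allowed to change the ownership (paths and names as shown in the listing above):

```shell
# give the locks directory back to the oracle user so opatch
# can lock the central inventory again
chown -R oracle:oinstall /opt/oracle/oraInventory/locks
```

A quick ls -l afterwards should show oracle as the owner of locks and of writer.lock.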
I fixed the ownership and ran opatch again.
Only to fail again, this time with the message that several make files could not be found…
At this time I was very happy that this was not a production system :)

Fortunately, a quick look at the known issues section in the patch readme showed that the problem occurred because the advanced security option was not installed in this oracle home, and that it could be fixed by removing the 6397929 directory from the patch directory.
And sure enough, after deleting the directory I managed to install the patch without further problems.
Of course, why a cpu patch fails when a separate licensed option is not installed is an entirely different question.

Installing a cpu patch, easy enough, no?
