Irrelevant thoughts of an oracle DBA

25 December 2007

Can a select block a truncate?

Filed under: Uncategorized — dhoogfr @ 14:41

Recently there was a discussion on the oracle-L list about whether a select can block a truncate or not.
My first reaction was that, when a select is actively reading from a table, the truncate would fail with the message: “resource busy and acquire with NOWAIT specified”. My reasoning was that a truncate is a ddl statement, which generates no undo for the “deleted” records. So to guarantee that the select stays read consistent, Oracle would have to fail the truncate.
But when discussing this at the office, my colleague Geert claimed that a select places no protection on the structure of a table, and that it would be the select statement that fails (with the message that the object no longer exists).

At moments like these, there is only one thing you can really do, and that is test it. So I did, and I found out that it is not so straightforward.
Here we go:
(more…)
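If you want to set up such a test yourself before reading on, a minimal sketch looks as follows (the table name and the deliberately slow query are just examples; what actually happens is exactly what the rest of the post explores):

-- session 1: create a test table and start a query that reads from it for a while
SQL> create table t as select * from dba_objects;

Table created.

SQL> select count(*) from t a, t b where rownum <= 1000000000;

-- session 2: while the select in session 1 is still running
SQL> truncate table t;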

13 December 2007

setting db_cache_size when using automatic sga management

Filed under: Uncategorized — dhoogfr @ 22:52

When using automatic sga management (sga_target), you might notice that your shared pool sometimes keeps growing, at the cost of the db_cache_size. I suspect this is caused by applications that do not use bind variables, but I have not researched this further yet.
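You can see which components are growing and shrinking with a query like the following (v$sga_dynamic_components holds the current size of each automatically tuned sga component):

SQL> select component, current_size/1024/1024 as mb
  2  from v$sga_dynamic_components
  3  order by current_size desc;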

When this happens, there are four things you can do:

  • Instruct the programmers to fix the application (only really an option when it is developed in house)
  • Leave it as it is (not really an option)
  • Abandon automatic sga management and specify the different memory parameters yourself
  • Set a lower limit for the db_cache_size, by setting db_cache_size without removing sga_target (oracle then uses the db_cache_size as a lower bound)

I went with option number 4, as the application was not developed by us and I still wanted oracle to be able to move memory from one pool to the other.
For this system I also wanted to increase the sga_target, as several applications had been migrated to this database.

SQL> alter system set sga_target=3000M scope=spfile sid='*';

System altered.

SQL> alter system set db_cache_size=2000M scope=spfile sid='*';
alter system set db_cache_size=2000M scope=spfile sid='*'
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-00384: Insufficient memory to grow cache

SQL> column sid format a15
SQL> column name format a40
SQL> column value format a20
SQL> r
  1  select sid, name, value
  2  from v$spparameter
  3* where name = 'db_cache_size'

SID             NAME                                     VALUE
--------------- ---------------------------------------- --------------------
*               db_cache_size

*sigh*

It seems that oracle checks the new db_cache_size value (which you want to put in the spfile, not in the running instance) against the current values of the running instance.
On metalink I found the following bug note about this: bug 4919526 “alter system set db_cache_size=xxxx scope=spfile fails. ora-00384”, but it has been marked as not a bug (yeah, right).

To overcome this issue I specified the db_cache_size parameter in the pfile that pointed to the spfile (which was located in asm), and after that changed the parameter again in the spfile itself and removed the setting from the pfile.

SQL> !
[oracle@eocpc-rc01:~]$ echo db_cache_size=2000M >> $ORACLE_HOME/dbs/initTRIDION1.ora
[oracle@eocpc-rc01:~]$ cat $ORACLE_HOME/dbs/initTRIDION1.ora
spfile='+DGTRIDION/TRIDIONUK/spfileTRIDION.ora'
db_cache_size=2000M
[oracle@eocpc-rc01:~]$ exit
exit

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 3154116608 bytes
Fixed Size                  2033408 bytes
Variable Size             620763392 bytes
Database Buffers         2516582400 bytes
Redo Buffers               14737408 bytes
Database mounted.
Database opened.

SQL> alter system set db_cache_size=2000M scope=spfile sid='*';

System altered.

SQL> column sid format a15
SQL> column name format a40
SQL> column value format a30
SQL> set linesize 120
SQL> select sid, name, value
  2  from v$spparameter
  3  where name = 'db_cache_size';

SID             NAME                                     VALUE
--------------- ---------------------------------------- ------------------------------
*               db_cache_size                            2097152000

SQL> !
[oracle@eocpc-rc01:~]$ echo spfile=\'+DGTRIDION/TRIDIONUK/spfileTRIDION.ora\' > $ORACLE_HOME/dbs/initTRIDION1.ora
[oracle@eocpc-rc01:~]$ cat $ORACLE_HOME/dbs/initTRIDION1.ora
spfile='+DGTRIDION/TRIDIONUK/spfileTRIDION.ora'

Of course, recreating the spfile is also an option.
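For completeness, that would go something like this (a sketch using the same spfile location as above; note that you cannot overwrite the spfile the instance is currently running on, so the instance has to be down for the create spfile step):

SQL> create pfile='/tmp/initTRIDION1.ora' from spfile;

File created.

(edit /tmp/initTRIDION1.ora and add or change db_cache_size=2000M)

SQL> shutdown immediate
SQL> create spfile='+DGTRIDION/TRIDIONUK/spfileTRIDION.ora' from pfile='/tmp/initTRIDION1.ora';

File created.

SQL> startup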

25 November 2007

Blog recommendation

Filed under: Uncategorized — dhoogfr @ 0:09

Just received a mail from Geert De Paep, the guy behind dba-village, that he too has started a blog: http://geertdepaep.wordpress.com.
As Geert is a colleague and friend, I know he has a very profound knowledge of oracle in general and high availability in particular, so his blog is a real must read.

His first article is about using oracle streams to migrate and upgrade databases.

11 November 2007

linux / unix kernel parameters

Filed under: Uncategorized — dhoogfr @ 16:05

I have a confession to make: the linux/unix kernel parameters have always been somewhat of a mystery to me, and while I have installed several databases on linux and solaris systems, I never really understood how to tune them. I just followed the oracle installation documentation and metalink note 169706.1.

But recently I had to perform an installation on a 64bit linux system. The database in question was going to be used to develop a data warehouse system, and the company asked for a buffer cache of at least 4 GB. As the box had 16 GB of ram, this requirement should not pose any problem.
So I started preparing the os for the installation and looked on metalink for the recommended kernel values. At that point I noticed that the value for the shmmax parameter on linux 64bit was stated as “min(RAM in bytes / 2, 4 * 1024^3)”. This limit of 4 GB seemed a little bit strange to me for a 64bit system, so I posted a service request about it and started a quest for more information on the kernel parameters.

I found an excellent explanation here, but after reading it I started asking myself more questions about the values recommended by Oracle in the installation guide and on metalink.
As I understand it, the shmmax parameter is the maximum size of a single shared memory segment (and the oracle sga is built out of these shared memory segments). The shmmni parameter gives the maximum number of shared memory segments system wide, and shmall gives the total amount of shared memory (expressed in pages).
Combining the three definitions, the maximum amount of shared memory is the minimum of shmmax * shmmni and shmall * page_size.
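On linux you can check the values that are currently in effect like this (a sketch; the output shown is just an illustration):

$ sysctl kernel.shmmax kernel.shmmni kernel.shmall
kernel.shmmax = 4294967296
kernel.shmmni = 4096
kernel.shmall = 1048576
$ getconf PAGE_SIZE
4096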

When I calculate the parameters with the values specified in metalink note 169706.1 (for my system with 16 GB ram) I get the following results:

shmmax = min(RAM in bytes / 2, 4294967296) = min(17179869184 / 2, 4294967296) = 4294967296
shmmni = 4096
shmall = shmmax / page_size = 4294967296 / 4096 = 1048576

Which would mean that on my system I get a maximum of 4 GB of shared memory, and thus a maximum sga of 4 GB (actually less, because this limit is system wide and not just for my oracle instance). Needless to say, when you need a 64bit system, this 4 GB limit is a little bit on the low side…

In metalink note 169706.1 there is, next to the shmall parameter, a link to another note (301830.1) which says to set the shmall parameter to “ram size / page_size” instead of shmmax / page_size. For my system with 16 GB of ram this gives a shmall of 4194304 instead of 1048576, which indeed fixes my problem.
Also, if you compare the values proposed in the installation guide with those from metalink, you will notice that the shmall in the installation guide (2097152) is much higher than the one on metalink (allowing for 8 GB of shared memory when your page size is 4096), and that there is no 4 GB limitation in the calculation of the shmmax parameter.
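Putting it together for a 16 GB system with 4k pages, the settings would then go into /etc/sysctl.conf roughly like this (a sketch; activate them with sysctl -p):

# /etc/sysctl.conf
kernel.shmmax = 4294967296    # min(ram / 2, 4 GB), per note 169706.1
kernel.shmmni = 4096
kernel.shmall = 4194304       # ram / page_size, per note 301830.1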

During my search Oracle also responded to my SR on metalink, saying that the 4 GB limit in the shmmax calculation is not an os limit but an oracle limit. They also said that this is because 4 GB is the maximum size of a core dump. Now what the size of a core dump has to do with the size of a shared memory segment is still unclear to me, and it also made me wonder whether my calculation of shmmax was correct. (After having received some feedback from oracle, it indeed turned out that my formula for shmmax was not correct, so I have made the necessary adjustments in this post.)
I will keep digging into this and will update the post when I have more information.

shmmax and solaris 10

A couple of days later I needed to install oracle on a solaris 10 system. With solaris 10 the way to specify kernel parameters differs from prior solaris versions: you no longer put your kernel parameters in the /etc/system file, but instead use projects and projmod.
From past experiences on solaris 10 I already knew that the oracle installation guide was not very accurate on how to work with projects, so, armed with my newly gained understanding of the kernel parameters, I started searching for more information on the solaris 10 kernel parameters.

I soon found out that the project.max-shm-memory parameter, which replaces the old shmmax parameter, differs not only in name and in how you set it (note that using prctl to set the parameter, as suggested in the installation guide, does not make the setting persistent), but also in definition!
When you look at the solaris tunable parameters reference manual, you will see that this parameter does not specify the maximum size of a single shared memory segment, but the maximum amount of shared memory for your entire project. In the installation guide, however, oracle still uses the same value for the new project.max-shm-memory parameter as for the old shmmax parameter. So if you follow the values recommended in the installation guide, you suddenly limit the total sga size of all databases on your system to 4 GB (unless you use different users with different projects for your databases).
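A persistent alternative to prctl is projmod with the -s option, which writes the setting to /etc/project (a sketch; the project name user.oracle and the 16 GB value are just examples):

# projmod -s -K "project.max-shm-memory=(privileged,16G,deny)" user.oracle

Afterwards you can verify the setting by looking at the user.oracle entry in /etc/project.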

So be warned: when it comes to kernel parameters, don’t just take the values specified in the installation guide.

29 October 2007

database agnostic

Filed under: Uncategorized — dhoogfr @ 20:37

Recently I was reading the meep post on Moans Nogood's blog, and the comment about database agnostic applications especially struck me.
In my job I see a lot of application vendors who proudly claim that their application is database agnostic, which for me is enough to classify that application as garbage.

Ok, you paid a lot of money for your database license, and then some vendor comes in and says: “Yes, a lot of functionality is already provided by the database, but we are not using it”, and he or she is actually proud of that.

Not only are they re-inventing the wheel, they also show that they have no idea how the database works.
If they did, they would know that it is not the limit on the length of field or table names, nor which keywords are supported, that sets databases apart, but the concurrency model. And short of restricting your application to keyed reads and writes, there is no way to make it behave correctly on multiple databases.
