Irrelevant thoughts of an oracle DBA

25 October 2009

Wintertime (again)

Filed under: infrastructure,Uncategorized — Freek D'Hooge @ 15:28

During my prior post on the effect of daylight saving settings on the Oracle scheduler, I already pointed out that it is best to set your session timezone information to a named timezone and not to an absolute offset. In this post I would like to investigate how the session timezone settings affect the sysdate, current_date, systimestamp and current_timestamp variables during the switchover to or from daylight saving time. Current_date and current_timestamp use the date/time information of the server on which the database runs and adjust that time using the timezone settings of the session.
As with the last post, the tests were done in response to the switch from wintertime to summertime, and I’m too lazy to redo them.

In the first test, I do not explicitly set timezone information in my session.
Both the server time and the client time have been set to a couple of minutes before the switchover from wintertime to summertime:

sys@GUNNAR> alter session set nls_date_format = 'DD/MM/YYYY HH24:MI:SS';

Session altered.

sys@GUNNAR> alter session set nls_timestamp_tz_format='DD/MM/YYYY HH24:MI:SS "TZ:" TZR "DS:" TZD ';

Session altered.

sys@GUNNAR> column systimestamp format a35
sys@GUNNAR> column current_timestamp format a35
sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 01:58:32 29/03/2009 01:58:32 29/03/2009 01:58:32 TZ: +01:00 DS:  29/03/2009 01:58:32 TZ: +01:00 DS:

As you can see the timezone information uses the absolute offset notation and is set to GMT +1 (which corresponds with wintertime in Belgium).
After some minutes (when summertime came into effect), I executed the same query again:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 03:00:18 29/03/2009 02:00:18 29/03/2009 03:00:18 TZ: +02:00 DS:  29/03/2009 02:00:18 TZ: +01:00 DS:

Both sysdate and systimestamp have jumped 1 hour into the future and systimestamp now shows the timezone as “GMT +2” (summertime in Belgium).
Current_date and current_timestamp both show the time without summertime corrections, but with current_timestamp the timezone information places the time in the right context.

Next, I disconnect and reconnect the session:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------
29/03/2009 03:01:15 29/03/2009 03:01:15 29/03/2009 03:01:15 TZ: +02:00 DS:  29/03/2009 03:01:15 TZ: +02:00 DS:

This time, all 4 show the same time and timezone information (all using summertime).
The explanation for this is that the timezone information for a session is determined when the session is created, and Oracle only applies daylight saving settings when using a named timezone. So as long as the session is connected, it uses the “old” timezone of GMT +1. With sysdate and systimestamp the timezone information comes from the server, not from the client.

In the second test, I have set the ORA_SDTZ variable in the client environment to “Europe/Brussels”:

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- -----------------------------------------------
29/03/2009 01:57:37 29/03/2009 01:57:38 29/03/2009 01:57:37 TZ: +01:00 DS:  29/03/2009 01:57:37 TZ: EUROPE/BRUSSELS DS: CET

### a couple of minutes later

sys@GUNNAR> select sysdate, current_date, systimestamp, current_timestamp from dual;

SYSDATE             CURRENT_DATE        SYSTIMESTAMP                        CURRENT_TIMESTAMP
------------------- ------------------- ----------------------------------- ------------------------------------------------
29/03/2009 03:00:04 29/03/2009 03:00:04 29/03/2009 03:00:04 TZ: +02:00 DS:  29/03/2009 03:00:04 TZ: EUROPE/BRUSSELS DS: CEST

Both current_date and current_timestamp have now also jumped 1 hour into the “future” and the daylight saving setting in current_timestamp has changed from CET (Central European Time) to CEST (Central European Summer Time).
To me this shows that it is important to set the timezone of your clients correctly, even if the database is not used from different timezones.
A long running session is enough to pollute your data, certainly if you are using current_date, as it carries no timezone information.
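As a minimal sketch of the fix suggested above (using the Belgian timezone from these tests), the session timezone can be pinned to a named region instead of an absolute offset, so that the daylight saving rules for that region are applied automatically:

```sql
-- Use a named region, not a fixed offset such as '+01:00',
-- so Oracle applies the daylight saving rules for that region:
alter session set time_zone = 'Europe/Brussels';

-- Check what the session now uses:
select sessiontimezone, current_timestamp from dual;
```

Setting ORA_SDTZ to “Europe/Brussels” in the client environment, as in the second test, achieves the same without touching every session.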

7 December 2008

About datapump, SMO, autocompletion, P2V and opatch

Filed under: Uncategorized — Freek D'Hooge @ 15:30

Long time since I blogged, but I have been busy with tinkering with Ubuntu, learning about Oracle streams, client projects and playing with my now 10 month old son (not necessarily in this order).

Still, to let you know that I am still alive: a short overview of things I learned that you might find useful:

Tanel Poder’s advanced troubleshooting course

If you still have the possibility to follow it, then do so. I really learned a lot during this course.
The word “advanced” in the course title is not solely there for marketing purposes.
More information about the course contents can be found here

Datapump and lobs

Until 10.2.0.4 there is a bug in datapump with lobs, causing imports of them to be slow. This is due to bug 5555463, which is fixed in 10.2.0.4 (there is also a one-off patch available). Another workaround is to enforce the access_method used by datapump, but this is unsupported.
Some articles about this that I found useful:

Note:5555463.8 – Bug 5555463 – Slow performance of datapump IMPORT with small LOBs
Note:552424.1 – Export/Import DataPump Parameter ACCESS_METHOD – How to Enforce a Method of Loading and Unloading Data ?
Note:286496.1 – Export/Import DataPump Parameter TRACE – How to Diagnose Oracle Data Pump
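As an illustration of enforcing the access method (unsupported at the time; see Note 552424.1 above for the authoritative syntax), the parameter would be passed on the impdp command line. The connect string, directory and dump file name here are hypothetical:

```
impdp system/password directory=DATA_PUMP_DIR dumpfile=exp_lobs.dmp \
      access_method=direct_path
```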

Netapp Snapmanager for Oracle 3.0 has been released.

This release has some important new features, such as time based retention policies.
Don’t have the time to play with it myself, but one of my colleagues has started to test it, and is blogging about it: http://bjornnaessens.wordpress.com

Autocompletion for hostnames

http://geekozoid.blogspot.com/2008/06/auto-complete-hostnames-with-using-bash.html

Prevent windows from requiring to reactivate after a P2V

I created a vmware image from my laptop to run it under Ubuntu; unfortunately my laptop was installed with an OEM edition of XP, so it kept complaining that I needed to reactivate the license.
Luckily there is a vmware parameter you can use to avoid this: http://communities.vmware.com/thread/99057
(Thanks to Filiep Declerck for the link)

10.2.0.4, opatch, solaris and rac

With 10.2.0.4, opatch does not seem to update the $ORACLE_HOME/inventory/oneoffs/ and the $ORACLE_HOME/.patch_storage/ directories on the remote nodes when applying one-off patches. We have noticed this issue on several servers with Solaris 10 and oracle 10.2.0.4 (and opatch 10.2.0.4.3 and 10.2.0.4.5). Other combinations might have the same problem.
The patch itself seems to be applied on the remote nodes, but because the directories are not copied, a query (such as lsinventory) on the remote inventory will fail on the remote nodes (preventing the installation of other patches).
This issue is now tracked in bug 7611827. The workaround is to use the “-local” flag and install the patch node by node.
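Sketched out, the workaround amounts to running opatch locally on each node in turn (the staging path and patch id placeholder below are hypothetical):

```
# Repeat on every node of the cluster:
cd /u01/stage/<patch_id>
opatch apply -local
```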

That’s all for now.

26 October 2008

Merge statements and sequences

Filed under: performance,Uncategorized — Freek D'Hooge @ 3:00

Here’s something I stumbled upon when I was researching a performance problem involving the oracle merge command.


fdh@GUNNAR> create table t_dest
  2  ( n1   number(10,0),
  3    n2   number(10,0),
  4    v1   varchar2(1)
  5  )
  6  /

Table created.

fdh@GUNNAR> create table t_source
  2  ( n1 number(10,0)
  3  )
  4  /

Table created.

fdh@GUNNAR> create sequence s_seq
  2  start with 1
  3  increment by 1
  4  nomaxvalue
  5  nocache
  6  nocycle
  7  noorder
  8  /

Sequence created.

fdh@GUNNAR> insert into t_dest
  2  select rownum, s_seq.nextval, 'O'
  3  from all_objects
  4  where rownum <= 1000;

1000 rows created.

fdh@GUNNAR> insert into t_source
  2  select rownum
  3  from all_objects
  4  where rownum <= 500;

500 rows created.

fdh@GUNNAR> commit;

Commit complete.

fdh@GUNNAR> select last_number from user_sequences where sequence_name = 'S_SEQ';

LAST_NUMBER
-----------
       1001

fdh@GUNNAR> set timing on
fdh@GUNNAR> exec dbms_monitor.session_trace_enable(waits => TRUE, binds => TRUE);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.07
fdh@GUNNAR> merge into t_dest
  2  using t_source
  3  on ( t_dest.n1 = t_source.n1)
  4  when matched then update
  5  set v1 = 'U'
  6  when not matched then insert (n1, n2, v1)
  7  values (0, s_seq.nextval, 'N');

500 rows merged.

Elapsed: 00:00:03.68
fdh@GUNNAR> commit;

Commit complete.

Elapsed: 00:00:00.06
fdh@GUNNAR> exec dbms_monitor.session_trace_disable;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.06
fdh@GUNNAR> select v1, count(*)
  2  from t_dest
  3  group by v1
  4  order by v1;

V   COUNT(*)
- ----------
O        500
U        500

Elapsed: 00:00:00.09

As you can see, the merge statement has updated 500 records and inserted no new records.
But check out the sequence:


fdh@GUNNAR> select last_number from user_sequences where sequence_name = 'S_SEQ';

LAST_NUMBER
-----------
       1501

Elapsed: 00:00:00.07

Apparently the sequence has increased by 500 (the number of records in t_source), even though the “when not matched” clause never evaluated to true.
The trace output confirms this as well.


merge into t_dest
using t_source
on ( t_dest.n1 = t_source.n1)
when matched then update
set v1 = 'U'
when not matched then insert (n1, n2, v1)
values (0, s_seq.nextval, 'N')

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          6          0           0
Execute      1      0.59       0.56          0          9       1014         500
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.60       0.58          0         15       1014         500

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 43

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  MERGE  T_DEST (cr=509 pr=0 pw=0 time=3478125 us)
    500   VIEW  (cr=509 pr=0 pw=0 time=3510161 us)
    500    SEQUENCE  S_SEQ (cr=509 pr=0 pw=0 time=3508659 us)
    500     HASH JOIN OUTER (cr=9 pr=0 pw=0 time=6366 us)
    500      TABLE ACCESS FULL T_SOURCE (cr=3 pr=0 pw=0 time=1042 us)
   1000      TABLE ACCESS FULL T_DEST (cr=6 pr=0 pw=0 time=1049 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                                   2        0.01          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.01          0.01
********************************************************************************

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10
where
 obj#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      500      0.00       0.00          0          0          0           0
Execute    500      2.96       2.90          0        500       1013         500
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1000      2.96       2.91          0        500       1013         500

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  UPDATE  SEQ$ (cr=1 pr=0 pw=0 time=167 us)
      1   INDEX UNIQUE SCAN I_SEQ1 (cr=1 pr=0 pw=0 time=16 us)(object id 102)

********************************************************************************

Clearly something to remember when you are using a merge command with a sequence in the “when not matched” clause and a source table containing a couple of million rows, certainly when you expect most records to be matched.
In such a case it could be better to move the sequence to a row trigger on the target table.
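A minimal sketch of that alternative (trigger name is mine, and this is untested): fetch the sequence value in a before-insert row trigger, so that nextval only fires for rows that are actually inserted:

```sql
create or replace trigger t_dest_bir
before insert on t_dest
for each row
begin
  -- 10g syntax; direct assignment of nextval only works from 11g on
  select s_seq.nextval into :new.n2 from dual;
end;
/
```

The merge statement would then simply leave n2 out of its insert clause and let the trigger fill it in.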

21 September 2008

Tanel Poder: Advanced Oracle Troubleshooting seminars

Filed under: Uncategorized — Freek D'Hooge @ 1:12

Just a quick note to say I will be going to Tanel Poder’s Advanced Oracle Troubleshooting seminar in Düsseldorf on 6 and 7 November. :)

Anyone else going?

ORA-08103 revisited

Filed under: Uncategorized — Freek D'Hooge @ 0:39

In an earlier post I had already written about sessions receiving an ORA-08103 error after selecting from a table that had been truncated. At that time, I wrote the following conclusion:

“At this point I have no real explanation as for why the error occurs at this number of records fetched.
I suspect oracle of storing the locations of the blocks it has to scan during a full table scan in a kind of array with a limited length. After this array has been processed, Oracle would use the dataobj# value (and other keys) to get the next batch of block locations. At that moment the “object no longer exists” error would be thrown, as the referenced dataobj# value no longer exists (it has been increased by 1).
But at this moment this is just pure speculation.”

Some weeks ago, I read an answer on the Oracle-L list from Tanel Poder to an ORA-08103 question, in which he stated that oracle would read 10 pointers to extents from the segment header at a time.

A quick test does seem to confirm this:

The TEST tablespace is a LMT with a uniform extent size of 16K.

drop table test;

Table dropped.

fdh@GUNNAR> create table test (veld1 char(2000)) tablespace test pctfree 0;

Table created.

fdh@GUNNAR> insert into test select 'x' from all_objects where rownum <= 200;

200 rows created.

fdh@GUNNAR> commit;

Commit complete.

fdh@GUNNAR> select bytes, blocks, extents from user_segments where segment_name = 'TEST';

     BYTES     BLOCKS    EXTENTS
---------- ---------- ----------
    425984         52         26

fdh@GUNNAR> set serveroutput on
fdh@GUNNAR> -- set the arraysize to 1 to prevent oracle from "prefetching" records
fdh@GUNNAR> set arraysize 1
fdh@GUNNAR> var x refcursor
fdh@GUNNAR>
fdh@GUNNAR> BEGIN
2
3      open :x for select veld1 from test;
4
5  END;
6  /

PL/SQL procedure successfully completed.

fdh@GUNNAR>
fdh@GUNNAR> DECLARE
2
3      l_veld1   char(2000);
4
5  BEGIN
6
7      fetch :x into l_veld1;
8      dbms_output.put_line('veld1: ' || trim(l_veld1));
9
10  END;
11  /
veld1: x

PL/SQL procedure successfully completed.

-- AT THIS MOMENT ANOTHER SESSION TRUNCATES THE TABLE

fdh@GUNNAR>
fdh@GUNNAR> DECLARE
2
3      l_veld1   char(2000);
4      l_cnt     number(3,0)   := 0;
5
6  BEGIN
7
8      BEGIN
9
10          loop
11
12              fetch :x into l_veld1;
13              exit when :x%NOTFOUND;
14              l_cnt := l_cnt + 1;
15
16          end loop;
17
18      EXCEPTION
19          when others then
20              dbms_output.put_line(SQLERRM);
21
22      END;
23
24      dbms_output.put_line('fetched rows: ' || l_cnt);
25
26  END;
27  /
ORA-08103: object no longer exists
fetched rows: 75

PL/SQL procedure successfully completed.

fdh@GUNNAR>

My block size is 8K, so I could store 4 records per db block. Before the truncate I read 1 record and after the truncate I could read 75 more records before getting the error. 76 records / 4 records per block / 2 blocks per extent = 9.5 extents. Remember that the first block is used for the segment header, so pointer 1 would only point to an extent with 1 data block (meaning that my 10 pointers point in total to 19 blocks * 4 = 76 records).

This answers the question of when you would receive the ORA-08103 error after a truncate, but it also means that your db_file_multiblock_read_count is limited by your extent size.
In my example the 10 pointers would reference 20 blocks (not taking the segment header into account), so a multiblock read count of more than 20 would not be possible for this table.

New question: why is a multiblock read not crossing extent boundaries?

21 July 2008

AnalyzeThis

Filed under: Uncategorized — Freek D'Hooge @ 12:22

A while ago I had written a post with the question if I would release a package as open source or just provide a wrapped form of it. Since then, it became clear to me that I simply don’t have the time to provide proper support for the package.
Meaning I have two choices: either I never release the package to the public and only use it for my own purposes, or I publish the unwrapped code.

For me, option 1 is not really an option, so I’m proud to present to you: AnalyzeThis!

uhm, ok, fantastic, euhm… so what does it do?

Well, I have created a page with a little faq (which will have to act as documentation), but in a nutshell: it helps you to gather and manage the cbo statistics for your database by making it possible to store the dbms_stats option values you want to use in a table and to specify default options on several levels. It also automates the backup of statistics before overwriting them and logs information about the generation of new statistics (including timing information).

Currently you have two options with dbms_stats: either you let oracle determine which stats options to use for which object, or you specify separate options for each object yourself. There is nothing in between.
With AnalyzeThis you now have a third option: specify the objects (for which you can also use wildcards) that should use a certain parameter set and let the other objects use the default parameters.

I don’t know if other people think there is a need for such a package, but I did. So here it is.
Don’t expect too much support for it though. Remember, the main reason why I’m publishing the code is that I don’t have the time to provide proper support.
I will still continue to develop this package further, so irregular updates and bugfixes will be made available for download in the future.

10 June 2008

Bottled water

Filed under: personal,Uncategorized — Freek D'Hooge @ 1:06

This weekend I saw a documentary on television about the sense or nonsense of bottled water.
In this documentary it was pointed out that in most parts of the world, tap water is perfectly safe to drink (in Belgium it is actually more strictly regulated than bottled water), that tap water is cheaper and that bottled water has a high impact on the environment (mainly because of transportation and the plastic bottles, even when they are recycled).

They also zoomed in on some of the more extreme forms of bottled water: apparently there exists something called “cloud juice”, which can be ordered in the fancier restaurants and is nothing more than rain water from King Island, Australia. Another example was water from Fiji, an area where lots of people don’t have access to clean water!

I must admit that I had not given it much thought until today: in spite of the fact that I always drank tap water at my parents’, I’m drinking bottled water today.
But after having seen this documentary, my wife and I decided to switch to tap water. No more heavy lifting of bottles of water (except for little Gunnar, who still requires water with a low level of minerals).

Ok, to be honest, I’m drinking bottled water as I’m writing this. A special kind even: fire water, aka a 14 year old anCnoc single malt :)

24 May 2008

So, you think you know who is blocking you, do you?

Filed under: Uncategorized — Freek D'Hooge @ 23:09

Recently I needed to verify why an update kept hanging on one of our client’s new rac system.
I suspected this to be a locking issue, so I queried v$session with a self join on the blocking_session column to check if the session was indeed blocked and if so who was blocking it.

The result showed that the session was indeed blocked and that the blocking session was issued by the sysman user.
Which was a little bit odd.

After running some queries to find out what the hell sysman was doing (which resulted in zero answers), I ran an older script I have that joins v$lock with v$session (next to v$locked_object and dba_objects) which showed a different session as blocking session.
Which was a little bit odd.

Additional queries confirmed that it was indeed this other session that was holding the lock (coming, of course, from the same user on a different machine, on which he forgot to issue a commit).

Curious about this, I checked the output from the queries again and noticed that the two reported sessions were off by 1. In v$session.blocking_session the value was 1 higher than reported by v$lock.
So I did some tests to see if this result was consistent.

C:\>sqlplus fdh/fdh

SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 14 17:40:25 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters and Data Mining options

fdh@TRIDION> create table test
  2  ( veld1  number)
  3  tablespace sysaux;

Table created.

fdh@TRIDION> insert into test values (1);

1 row created.

fdh@TRIDION> commit;

Commit complete.

fdh@TRIDION> select sid from v$mystat where rownum = 1;

       SID
----------
       502

fdh@TRIDION> update test set veld1 = 2;

1 row updated.

fdh@TRIDION>

I now have an unfinished transaction in session with sid 502.
I will now open a new session (on the same instance) and try to update the same row.

C:\>sqlplus fdh/fdh

SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 14 17:40:40 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters and Data Mining options

fdh@TRIDION> select sid from v$mystat where rownum = 1;

       SID
----------
       322

fdh@TRIDION> update test set veld1 = 3;

I now have a blocking lock, so let’s check v$session to see the value of the blocking_session column:

sys@TRIDION> select sid, serial#, username, blocking_session from gv$session where blocking_session is not null;

       SID    SERIAL# USERNAME                       BLOCKING_SESSION
---------- ---------- ------------------------------ ----------------
       322      17689 FDH                                         503

I did the test several times; each time the value in blocking_session was 1 higher than the actual blocking session.
However, if the 2 sessions were on different nodes, the reported sid was correct. Same for single instances.
So the rule seems to be that if both the blocking and the blocked session are on the same rac instance, the value in the blocking_session column in v$session is 1 higher than the actual blocking sid.

A search on metalink showed that this is in fact a known bug: bug 5481650, and that it is fixed in 10.2.0.4.
An interim patch is available for 10.2.0.2 and 10.2.0.3, for both solaris sparc and linux 64bit (I don’t know if the problem is restricted to these two platforms).

So be warned the next time you need to verify a locking issue on your rac database: oracle might lie to you about who is guilty.
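For reference, a sketch of the kind of v$lock self join I used to cross-check the blocker (the gv$ variant so it also covers rac; column names per the 10g data dictionary):

```sql
-- Match lock holders against lock waiters on the same resource:
select h.inst_id, h.sid as holding_sid, w.sid as waiting_sid,
       h.type, h.id1, h.id2
from   gv$lock h, gv$lock w
where  h.block > 0      -- session holds a lock someone else waits for
and    w.request > 0    -- session is waiting to acquire a lock
and    h.id1 = w.id1
and    h.id2 = w.id2;
```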

5 May 2008

10.2.0.4 available on solaris sparc 64bit

Filed under: Uncategorized — Freek D'Hooge @ 11:27

Just noticed on http://www.dba-village.com that the 10.2.0.4 patch is finally available for sparc 64bit.

Just to let you know…

20 January 2008

Akismet spam filter

Filed under: Uncategorized — Freek D'Hooge @ 23:35

I have noticed that the spam counter has been going up without new spam messages being listed in the spam filter. So it looks like the Akismet spam filter is marking some posts as spam without giving me the possibility to moderate them.

If you notice that a posted comment is not published on the blog, then please send an email to the address listed in the “about” section.
