Ferhat's Blog

There will be only one database

ipmitool to measure electricity usage of Exadata

Posted by fsengonul on September 16, 2013

If you’re curious about the electricity usage of an Exadata rack (or have a shortage of power in your data center), you may try to use a smart PDU.
But there is a better and cheaper way to measure it: ipmitool. Once the data is collected, it’s easy to create a graph and compare different Exadata versions.

[Graph: exadata_electricity_compare]
In this graph, 2 X3-2 HC (High Capacity), 2 X2-2 HP (High Performance) and 1 V2 SATA racks are compared. The electricity usage of the HP disks seems to be much higher than that of the HC ones. It would be interesting to compare the relationship between throughput, CPU usage and electricity. The details are below:

[root@xxx01 ~]# ipmitool sensor | grep -i vps
VPS_CPUS         | 50.000     | Watts      | ok    | na
VPS_MEMORY       | 12.000     | Watts      | ok    | na
VPS_FANS         | 42.000     | Watts      | ok    | na
/SYS/VPS         | 370.000    | Watts      | ok    | na

Our sysadmin Mustafa Altuğ Kamacı has written a nice script to collect this info from all compute nodes and storage cells. The script is triggered from the crontab.

[root@xxx01 ~]# cat /usr/bin/pwrstat
#!/bin/ksh
PATH=$PATH:/usr/bin:/usr/sbin:/bin
export PATH
d=`date '+%d%m%y'`   # ddmmyy, used in the log file name
t=`date '+%H:%M'`    # timestamp written in front of each sample
integer P1=0         # running total for the whole rack
integer p1=0         # reading from a single node
# /root/group_all lists every compute node and storage cell, one per line
for i in `cat /root/group_all`
do
# read the virtual power sensor and keep the Watts value (field 4 of the 'Sensor Reading' line)
p1=`ssh -q $i "ipmitool sensor get /SYS/VPS|grep 'Sensor Reading'"|awk '{a=a+$4}END{print a }'`
P1=$P1+$p1           # arithmetic assignment, thanks to the integer declaration
done
echo $t " " $P1 "Watt"  >> /home/pwrstat/pwrstat_$d.log
[root@xxx01 ~]#
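The crontab entry itself isn’t shown above; given the five-minute sample interval visible in the logs, it presumably looks something like this (a sketch, schedule assumed):

# hypothetical crontab entry: sample total rack power every 5 minutes
*/5 * * * * /usr/bin/pwrstat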
[root@maxdb01 pwrstat]# ls -al
total 376
drwxr-xr-x  2 root root 4096 Sep 16 00:00 .
drwxr-xr-x 23 root root 4096 Sep 10 15:26 ..
-rw-r--r--  1 root root 3173 Aug  1 23:55 pwrstat_010813.log
-rw-r--r--  1 root root 5472 Sep  1 23:55 pwrstat_010913.log
-rw-r--r--  1 root root 5472 Aug  2 23:55 pwrstat_020813.log
-rw-r--r--  1 root root 5472 Sep  2 23:55 pwrstat_020913.log
-rw-r--r--  1 root root 5472 Aug  3 23:55 pwrstat_030813.log
-rw-r--r--  1 root root 5472 Sep  3 23:55 pwrstat_030913.log
-rw-r--r--  1 root root 5472 Aug  4 23:55 pwrstat_040813.log
.
.
.
[root@xxx01 pwrstat]# cat pwrstat_010913.log
00:00   17580 Watt
00:05   17890 Watt
00:10   17350 Watt
00:15   17510 Watt
00:20   17990 Watt
00:25   17800 Watt
00:30   17640 Watt
00:35   17720 Watt
00:40   17780 Watt
00:45   17830 Watt
00:50   17950 Watt
00:55   17410 Watt
01:00   17970 Watt
01:05   17510 Watt
01:10   17600 Watt
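
To turn a day’s log into a graph, one quick option is gnuplot (a sketch; any plotting tool would work):

# plot one day's log as a PNG (sketch; gnuplot assumed to be available)
gnuplot -e "set xdata time; set timefmt '%H:%M'; set format x '%H:%M'; \
set terminal png; set output 'pwrstat_010913.png'; \
plot 'pwrstat_010913.log' using 1:2 with lines title 'rack power (Watt)'"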

Posted in Exadata, Uncategorized | 2 Comments »

using pivot fnc to mimic grid active session graph

Posted by fsengonul on January 2, 2012

version 2:
Following Yasin’s suggestion, I have changed the script: it now also computes the total ASH seconds available on the system, so the percentages work out even when the machine is not fully utilized.
The first row shows the rollup of the data, and the values are percentages with respect to the ASH seconds.
From the example below we may roughly say that the machine’s CPU is 28% utilized and that SQL ID cqkr2d84qxt6p has used 11% of the CPU over the last 60 seconds.

00:33:01 SQL> @ash2 60                                                                                                                                     
ash_counts for last 60 seconds

NUMBER_OF_NODES NUMBER_OF_THREADS SAMPLE_SECS   ASH_SECS ASH_SECS_PERCENT
--------------- ----------------- ----------- ---------- ----------------
             10                24          60      14400              144

Elapsed: 00:00:00.03

SQL_ID        ON_CPU CONC  UIO  SIO  ADM  OTH CONF SCHE CLST  APP  QUE IDLE  NTW  CMT TOTAL
------------- ------ ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- -----
                  28    0    4    0    0    0    0    0    1    0    0    0    0    0    34
cqkr2d84qxt6p     11    0    0    0    0    0    0    0    0    0    0    0    0    0    11
913cu8k9858rp      6    0    0    0    0    0    0    0    0    0    0    0    0    0     7
9dmq476mc247s      7    0    0    0    0    0    0    0    0    0    0    0    0    0     7
84k1xtr2aj9fd      2    0    0    0    0    0    0    0    0    0    0    0    0    0     2
bfsy799japxd4      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
7ar015kr4jny2      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
cgsangykrg375      1    0    0    0    0    0    0    0    0    0    0    0    0    0     1
13r0r59cjc9qy      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
5tmsa82zrcbnr      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
9g4s5ycuz6x10      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
7guv13r4psz4k      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
cg1s0gh2xn49n      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
frff0g83ud57d      0    0    0    0    0    0    0    0    0    0    0    0    0    0     1
fvxc1zqs1y3ah      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
1ytrv77gunsz1      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
0g1zt3sb3y5yz      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
bakdmp8pnc8a5      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
f1y8kbhh6v9sv      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
6pw8uk8k0dv0q      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
5k2b3qsy3b30r      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
81ky0n97v4zsg      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0
3jvj0zbkak9h6      0    0    0    0    0    0    0    0    0    0    0    0    0    0     0

23 rows selected.

Elapsed: 00:00:02.15
00:34:44 SQL> 
The script (ash2.sql):

-- &1 is the look-back window in seconds
prompt ash_counts for last &1 seconds
undef ASH_SECS_PERCENT
-- new_value captures the column from the next query into the &ASH_SECS_PERCENT substitution variable
col ASH_SECS_PERCENT new_value ASH_SECS_PERCENT

column ON_CPU format 999
column Conc format 999
column UIO format 999
column SIO format 999
column Adm format 999
column Oth format 999
column Conf format 999
column Sche format 999
column CLST format 999
column App format 999
column Que format 999
column Idle format 999
column Ntw format 999
column Cmt format 999
column TOTAL format 999

-- total ASH capacity for the window: instances * cpu_count * sample_secs
select count(*) number_of_nodes, avg(value) number_of_threads,
       &1 sample_secs, sum(value)*&1 ash_secs, sum(value)*&1/100 AS ASH_SECS_PERCENT
from gv$parameter where name='cpu_count';

-- pivot the last &1 seconds of ASH into wait-class buckets per sql_id,
-- then report each bucket as a percentage of the total ASH seconds
WITH ASH_SECS AS
(select sql_id,
ON_CPU,CONC,UIO,SIO, ADM, OTH, CONF, SCHE, CLST, APP, QUE, IDLE, NTW, CMT,
ON_CPU+CONC+UIO+ SIO+ ADM+ OTH+ CONF+ SCHE+ CLST+ APP+ QUE+ IDLE+ NTW+ CMT total
from
(select ash.sql_id,nvl(EN.WAIT_CLASS,'ON_CPU') class from gv$active_Session_history ash, v$event_name en
where ash.sample_time > sysdate - interval '&1' second
and ash.SQL_ID is not NULL and en.event# (+)=ash.event#
)
PIVOT (count(*) FOR class IN ('ON_CPU' ON_CPU,'Concurrency' Conc,'User I/O' UIO,'System I/O' SIO,'Administrative' Adm,'Other' Oth,
'Configuration' Conf ,'Scheduler' Sche,'Cluster' "CLST",'Application' App,'Queueing' Que,'Idle' Idle,'Network' Ntw,'Commit' Cmt )))
    select sql_id,sum(ON_CPU)/&&ASH_SECS_PERCENT ON_CPU,sum(CONC)/&&ASH_SECS_PERCENT CONC,sum(UIO)/&&ASH_SECS_PERCENT UIO,sum(SIO)/&&ASH_SECS_PERCENT SIO, 
    sum(ADM)/&&ASH_SECS_PERCENT ADM, sum(OTH)/&&ASH_SECS_PERCENT OTH , sum(CONF)/&&ASH_SECS_PERCENT CONF, sum(SCHE)/&&ASH_SECS_PERCENT SCHE,
     sum(CLST)/&&ASH_SECS_PERCENT CLST , sum(APP)/&&ASH_SECS_PERCENT APP,sum(QUE)/&&ASH_SECS_PERCENT QUE, sum(IDLE)/&&ASH_SECS_PERCENT IDLE, 
     sum(NTW)/&&ASH_SECS_PERCENT NTW, sum(CMT)/&&ASH_SECS_PERCENT CMT,sum(TOTAL)/&&ASH_SECS_PERCENT TOTAL from ash_secs
    group by rollup(sql_id)
    order by TOTAL desc;

version 1:
I had been using the DECODE function to summarize the last minute’s activity from ASH.
It looks much tidier and more “new-fashioned” to use PIVOT instead.
I wonder if there is a way to get the TOTAL column without rescanning the data. (Version 2 above avoids the rescan by summing the pivoted columns instead.)

prompt ash_counts for last 1 minute
column ON_CPU format 99999
column Conc format 9999
column UI/O format 9999
column SI/O format 9999
column Adm format 9999
column Oth format 9999
column Conf format 9999
column Sche format 9999
column CLST format 9999
column App format 9999
column Que format 9999
column Idle format 9999
column Ntw format 9999
column Cmt format 9999
column TOTAL format 99999

select * from
(select  ash.sql_id,nvl(EN.WAIT_CLASS,'ON_CPU') class from gv$active_Session_history ash, v$event_name en
where ash.sample_time > sysdate - interval '60' second
and  ash.SQL_ID is not NULL  and en.event# (+)=ash.event#
UNION ALL
select ash.sql_id,'TOTAL' from gv$active_Session_history ash
where ash.sample_time > sysdate - interval '60' second
and ash.sql_id is not null
 )
PIVOT (count(*)   FOR class IN ('ON_CPU' ON_CPU,'Concurrency' Conc,'User I/O' "UI/O",'System I/O' "SI/O",'Administrative' Adm,'Other' Oth,
'Configuration' Conf ,'Scheduler' Sche,'Cluster' "CLST",'Application' App,'Queueing' Que,'Idle' Idle,'Network' Ntw,'Commit' Cmt ,'TOTAL' TOTAL))
order by  TOTAL desc;

16:42:20 SQL> @ash                                                                                                                                         
ash_counts for last 1 minute

SQL_ID        ON_CPU  CONC  UI/O  SI/O   ADM   OTH  CONF  SCHE  CLST   APP   QUE  IDLE   NTW   CMT  TOTAL
------------- ------ ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ------
g42mvgbnmf5ws   1585     0     4     0     0    32     0     0     0     0     0     0     0     0   1672
0vjyvrybmtt1h    746     0     2     0     0     8     0     0     0     0     0     0     0     0    787
bjutwyympyn2c    233     0     0     0     0     0     0     0    11     0     0     0     0     0    251
6pm37uuabk94w    176     0     0     0     0     2     0     0     0     0     0     0     0     0    185
5k2b3qsy3b30r     43     0     7     0     0     0     0     0     0     0     0     0     0     0     52
3tp2fqk9wp4c0     45     0     1     0     0     0     0     0     0     0     0     0     3     0     51
7t3bfrr7fpjsr     41     0     0     0     0     0     0     0     0     0     0     0     0     0     41
dw0bx0cr2aacz     14     0     2     0     0     0     0     0     0     0     0     0     1     0     17
atmwkzu6u8prw      4     0     8     0     0     2     0     0     0     0     0     0     0     0     14
c9q76xrxyyv1t      3     0     4     0     0     2     0     0     0     0     0     0     0     0      9
bnvar23frxtaa      4     0     1     0     0     0     0     0     0     0     0     0     4     0      9
f1y8kbhh6v9sv      3     0     0     0     0     0     0     0     0     0     0     0     0     0      3
84p5drb647ptj      1     0     0     0     0     0     0     0     0     0     0     0     0     0      1
fvp75qx959tu5      1     0     0     0     0     0     0     0     0     0     0     0     0     0      1
1uyp1pq4w60h7      1     0     0     0     0     0     0     0     0     0     0     0     0     0      1
aanxb0917spa9      0     0     0     1     0     0     0     0     0     0     0     0     0     0      1
08z984tgg4rqu      1     0     0     0     0     0     0     0     0     0     0     0     0     0      1
3ubt3k76mva3k      1     0     0     0     0     0     0     0     0     0     0     0     0     0      1
576kqgucy5v1q      0     0     0     0     0     1     0     0     0     0     0     0     0     0      1
4cyx7sg5hd6wn      0     0     0     0     0     0     0     0     0     0     0     0     0     0      1

Posted in oracle | 2 Comments »

find a way to time travel in MOS

Posted by fsengonul on October 14, 2011

Posted in Uncategorized | 1 Comment »

OOW 2011 sessions

Posted by fsengonul on September 19, 2011

My sessions/forums in Oracle Open World 2011:

04561 – Turkcell’s Oracle Exadata Journey Continues: Three Full Racks Running Six Databases
13803 – Oracle Exadata Hybrid Columnar Compression: Next-Generation Compression
14048 – Maximize Your ROI with Oracle Database Cloud
Data Warehouse Global Leaders Annual Meeting


You may use the following link and search for Sengonul to add the sessions:

https://oracleus.wingateweb.com/scheduler/speakers.do

And the details:

Title: Turkcell’s Oracle Exadata Journey Continues: Three Full Racks Running Six Databases
Time: Monday, 11:00 AM, Moscone South – 302
Length: 1 Hour
Abstract: Turkcell, the leading telco operator in Turkey, with more than 33 million subscribers, started its Oracle Exadata journey a little more than a year ago with one full machine and achieved tremendous success. After it experienced a tenfold improvement in performance, storage, and datacenter footprint for its 100 TB data warehouse database, it was a no-brainer to continue on this route, so it added two new Exadata Database Machine X2-2s and consolidated all of its six databases in its data warehouse domain on three full racks. In this session, it shares its experience in this episode of the journey.

Title: Oracle Exadata Hybrid Columnar Compression: Next-Generation Compression
Time: Tuesday, 11:45 AM, Moscone South – 304
Length: 1 Hour
Abstract: Is your data warehouse growing faster than your storage budget? Is the size of your data warehouse slowing down your users’ queries? Are you convinced that there isn’t a way to archive your OLTP data and keep it accessible to users? If you answered yes to any of these questions, your attendance at this session is mandatory! You will learn how Oracle Exadata hybrid columnar compression can shrink your data warehouse to as little as 1/15 of its original size and improve query performance by drastically reducing I/O. You will also learn how Oracle Exadata hybrid columnar compression, with up to 20x compression for archive data, lets you keep your historical data available for users, and your storage administrator won’t even care that it’s there.

Title: Maximize Your ROI with Oracle Database Cloud
Time: Monday, 03:30 PM, Moscone South – 308
Length: 1 Hour
Abstract: Database cloud deployments provide the best ROI for deploying databases in a cloud environment. They are based on and leverage advanced database capabilities, and many customers are already benefiting from the capex and opex savings enabled by database cloud deployments. This session presents best practices for maximizing ROI when implementing database consolidation and deploying database as a service (DaaS) to improve overall business agility and significantly reduce database deployment times. It includes specific customer use cases and shows how the customers are maximizing the ROI of database cloud environments.


Posted in Exadata, oracle | Leave a Comment »

ORA_HASH to compare two tables/(sub)partitions

Posted by fsengonul on July 28, 2011

When you suggest a new method to move data from one db to another (see the previous post), you should prove that every row was migrated successfully.
There are lots of examples of ORA_HASH implementations on the net; this is yet another one.
The stored procedure uses LISTAGG and ORA_HASH together.
The input may include the owner, table name, and partition or subpartition name.

create or replace procedure GET_ORA_HASH_TABLE
        (owner in varchar2, table_name in varchar2, partition_name in varchar2 default NULL,
         sub_partition_name in varchar2 default NULL, hash_value out varchar2) is
        l_all_columns varchar2(4000);
        v_dyntask     varchar2(20000);
        -- build a '||'-separated concatenation of all columns, in column_id order
        CURSOR get_columns(p_owner varchar2, p_table_name varchar2) IS
            select listagg(column_name,'||') WITHIN GROUP (order by column_id) all_columns
            from dba_tab_columns
            where owner=p_owner and table_name=p_table_name;
BEGIN
   open get_columns(owner,table_name);
   fetch get_columns into l_all_columns;
   close get_columns;
   -- sum ORA_HASH over the concatenated column values of every row
   v_dyntask := 'select sum(ora_hash('|| l_all_columns ||')) from '|| owner ||'.'|| table_name;
   -- narrow the scan to a subpartition or partition when one is given
   if sub_partition_name is NOT NULL then
      v_dyntask := v_dyntask || ' subpartition ('|| sub_partition_name ||')';
   elsif partition_name is NOT NULL then
      v_dyntask := v_dyntask || ' partition ('|| partition_name ||')';
   end if;
   execute immediate v_dyntask into hash_value;
END;
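
To make the dynamic SQL concrete: for a hypothetical two-column table (C1, C2) with a subpartition name passed in, the procedure generates a statement of this shape:

-- illustrative only: the statement the procedure would build
select sum(ora_hash(C1||C2))
from OWNER.TABLE_NAME subpartition (SP2011JAN01_01);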

Usage on subpartitions:

SQL> SET SERVEROUTPUT ON;
SQL> declare
  2  hash_value varchar2(4000);
  3  BEGIN
  4      GET_ORA_HASH_TABLE('OWNER','TABLE_NAME','','SP2011JAN01_01',hash_value);
  5      DBMS_OUTPUT.PUT_LINE(hash_value);
  6  END;
  7  /
43437576967369636

PL/SQL procedure successfully completed.

And on partitions:

SQL> r
  1  declare
  2  hash_value varchar2(4000);
  3  BEGIN
  4      GET_ORA_HASH_TABLE('OWNER','TABLE_NAME','P2011JAN01','',hash_value);
  5      DBMS_OUTPUT.PUT_LINE(hash_value);
  6* END;
695708730528399811

PL/SQL procedure successfully completed.

SQL> 

And on table:

 1  declare
  2  hash_value varchar2(4000);
  3  BEGIN
  4      GET_ORA_HASH_TABLE('OWNER','TABLE_NAME','','',hash_value);
  5      DBMS_OUTPUT.PUT_LINE(hash_value);
  6* END;

Posted in oracle | 1 Comment »

impdp via dblink on partitions and tbl$or$idx$part$num (It’s not a curse, just a function)

Posted by fsengonul on July 24, 2011

/* ALL THE BELOW IS JUST FOR TESTING. DO NOT USE ON PRODUCTION, YET */

I hate restrictions.
The worst one is ORA-14100 :

SQL> select * from "TABLE_OWNER"."TABLE_NAME"@XLK partition(P2011JAN01);
select * from "TABLE_OWNER"."TABLE_NAME"@XLK partition(P2011JAN01)
                          *
ERROR at line 1:
ORA-14100: partition extended table name cannot refer to a remote object

You may either define the borders of the partition in a WHERE clause or create a view on the remote database and select from the view via the db link. Defining the borders will only work for range and list partitions; for hash partitions the only way is to define a view. But there has to be another way.
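
For completeness, the view workaround is a one-liner on each side (a sketch; the view name is made up):

-- on the remote (source) database: wrap the partition in a view
create or replace view TABLE_OWNER.V_P2011JAN01 as
select * from TABLE_OWNER.TABLE_NAME partition (P2011JAN01);

-- on the local database: query the view through the db link
select count(*) from TABLE_OWNER.V_P2011JAN01@XLK;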

I have seen that impdp over a db link can take a partition or subpartition name as a parameter:

impdp ddsbase/xxxxx DIRECTORY=DPE1 NETWORK_LINK=XLK tables=TABLE_OWNER.TABLE_NAME:P2011JAN01 job_name=DPE1_P2011JAN01 LOGFILE=DPE1:P2011JAN01.log CONTENT=DATA_ONLY QUERY=\" order by contract_sk,content_sk \" 

. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_02" 20303431 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_14" 20303020 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_01" 20222512 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_03" 20215302 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_04" 20261746 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_05" 20225309 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_06" 20301212 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_07" 20252840 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_08" 20254043 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_10" 20220187 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_13" 20237208 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_15" 20218603 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_16" 20225260 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_09" 20267115 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_11" 20239070 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_12" 20203805 rows

So impdp has a way to do this, even subpartition-wise. But even with impdp, if you set parallel=16, only one partition is loaded at a time because of locking issues on the target table.
When I checked the query running on the source side, I saw the strange “undocumented” tbl$or$idx$part$num function:


FROM "TABLE_OWNER"."TABLE_NAME"@XLK KU$
WHERE   tbl$or$idx$part$num("TABLE_OWNER"."TABLE_NAME"@XLK,0,3,0,KU$.ROWID)=4791815

So Oracle has a way to select a single subpartition without the subpartition keyword. As you may guess, 4791815 is the object_id of the subpartition in dba_objects.

SQL> col owner format a10
SQL> col object_name format a15
SQL> col subobject_name format a15
SQL> select owner,object_name,subobject_name  from dba_objects where object_id=4791815;

OWNER      OBJECT_NAME     SUBOBJECT_NAME
---------- --------------- ------------------------------
TABLE_OWNER  TABLE_NAME SP2011JAN01_02

Let’s mimic impdp and try to create a 16-way parallel movement from one db to another based on subpartitions.


SQL> select  count(*) from
  2   TABLE_OWNER.TABLE_NAME@XLK 
  3   WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME", 0,3,0,ROWID) = 4791815;
select /*+ OPAQUE_TRANSFORM NESTED_TABLE_GET_REFS NESTED_TABLE_GET_REFS */ count(*) from
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 28728
Session ID: 1430 Serial number: 1865

maxdb04: 160363               ORA 7445 [qesmaGetPam()+4]                                  2011-07-24 18:20:07.398000 +03:00

adrci> show incident -mode detail -p "incident_id=160363"

ADR Home = /u01/app/oracle/diag/rdbms/dbname/dbname_4:
*************************************************************************

**********************************************************
INCIDENT INFO RECORD 1
**********************************************************
   INCIDENT_ID                   160363
   STATUS                        ready
   CREATE_TIME                   2011-07-24 18:20:07.398000 +03:00
   PROBLEM_ID                    8
   CLOSE_TIME                    <NULL>
   FLOOD_CONTROLLED              none
   ERROR_FACILITY                ORA
   ERROR_NUMBER                  7445
   ERROR_ARG1                    qesmaGetPam()+4
   ERROR_ARG2                    SIGSEGV
   ERROR_ARG3                    ADDR:0x8
   ERROR_ARG4                    PC:0x13D9310
   ERROR_ARG5                    Address not mapped to object
   ERROR_ARG6                    <NULL>
   ERROR_ARG7                    <NULL>
   ERROR_ARG8                    <NULL>
   ERROR_ARG9                    <NULL>
   ERROR_ARG10                   <NULL>
   ERROR_ARG11                   <NULL>
   ERROR_ARG12                   <NULL>
   SIGNALLING_COMPONENT          PART
   SIGNALLING_SUBCOMPONENT       <NULL>
   SUSPECT_COMPONENT             <NULL>
   SUSPECT_SUBCOMPONENT          <NULL>
   ECID                          <NULL>
   IMPACTS                       0
   PROBLEM_KEY                   ORA 7445 [qesmaGetPam()+4]
   FIRST_INCIDENT                160386
   FIRSTINC_TIME                 2011-07-24 17:06:21.071000 +03:00
   LAST_INCIDENT                 160363
   LASTINC_TIME                  2011-07-24 18:20:07.398000 +03:00
   IMPACT1                       0
   IMPACT2                       0
   IMPACT3                       0
   IMPACT4                       0
   KEY_NAME                      PQ
   KEY_VALUE                     (0, 1311520807)
   KEY_NAME                      Client ProcId
   KEY_VALUE                     oracle@hostname (TNS V1-V3).6051_47869439584576
   KEY_NAME                      SID
   KEY_VALUE                     2708.15225
   KEY_NAME                      ProcId
   KEY_VALUE                     41.95
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/app/oracle/diag/rdbms/dbname/dbname_4/trace/dbname_4_ora_6051.trc
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/app/oracle/diag/rdbms/dbname/dbname_4/incident/incdir_160363/dbname_4_ora_6051_i160363.trc



Oops, it creates a dump on the source db. :(
Don’t give up yet.
When I checked the dump file, it mentioned a db link named “!”:
TBL$OR$IDX$PART$NUM(“TABLE_OWNER”.”TABLE_NAME”@!

So this time change the SQL to:

SQL> select  count(*) from TABLE_OWNER.TABLE_NAME@XLK
  2  WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0,3,0,ROWID) = 4791815;

  COUNT(*)
----------
  17760539

SQL> 

When I add the db link name to the TBL$OR$IDX$PART$NUM function, it works.

And finally, it’s easy to create 16 insert /*+ APPEND */ statements running in parallel on the same partition, which has 16 subpartitions, and to combine them with our in-house code to run them as separate jobs (a generator sketch follows the examples below).

SQL> set pagesize 100
SQL> r
  1  select subobject_name,object_id from dba_objects@XLK
  2* where  owner='TABLE_OWNER' and object_name='TABLE_NAME'  and subobject_name like 'SP2011JAN01%'

SUBOBJECT_NAME                  OBJECT_ID
------------------------------ ----------
SP2011JAN01_01                    4769085
SP2011JAN01_02                    4769086
SP2011JAN01_03                    4769087
SP2011JAN01_04                    4769088
SP2011JAN01_05                    4769089
SP2011JAN01_06                    4769090
SP2011JAN01_07                    4769091
SP2011JAN01_08                    4769092
SP2011JAN01_09                    4769093
SP2011JAN01_10                    4769094
SP2011JAN01_11                    4769095
SP2011JAN01_12                    4769096
SP2011JAN01_13                    4769097
SP2011JAN01_14                    4769098
SP2011JAN01_15                    4769099
SP2011JAN01_16                    4769100

16 rows selected.

SQL> 

SQL> insert /*+ APPEND  */ into TABLE_OWNER.TABLE_NAME subpartition(SP2011JAN01_01)
  2  select * from "TABLE_OWNER"."TABLE_NAME"@XLK "KU$"
  3  WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0, 3, 0,ROWID) = 4769085
  4  order by contract_sk,content_sk;

in another 15 sessions

SQL> insert /*+ APPEND  */ into TABLE_OWNER.TABLE_NAME subpartition(SP2011JAN01_02)
  2  select * from "TABLE_OWNER"."TABLE_NAME"@XLK "KU$"
  3  WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0, 3, 0,ROWID) = 4769086
  4  order by contract_sk,content_sk;
.
.
.
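
Generating the remaining statements is mechanical; here is a sketch that builds all 16 from the source’s subpartition list (the in-house job-submission code is not shown here):

-- build the 16 INSERT statements from the source's subpartition object_ids
select 'insert /*+ APPEND */ into TABLE_OWNER.TABLE_NAME subpartition ('
       || subobject_name || ') select * from "TABLE_OWNER"."TABLE_NAME"@XLK'
       || ' where TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0, 3, 0, ROWID) = '
       || object_id || ' order by contract_sk,content_sk;' as stmt
  from dba_objects@XLK
 where owner='TABLE_OWNER' and object_name='TABLE_NAME'
   and subobject_name like 'SP2011JAN01%';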

It took 3 hours with impdp to move a single day, while it takes only 10 minutes with this method.
I’m not planning to use it yet, but I’m curious about your comments.

Posted in oracle | 3 Comments »

Presentation in ilOUG

Posted by fsengonul on May 16, 2011

I’ll be in Tel Aviv on Wednesday (18/05) to present our latest Exadata project.
Thanks to the Israel Oracle User Group for their kind invitation.
It will also be a great chance for me to learn from their experiences, both with Exadata and with user group activities.
For more information : http://www.iloug.org.il/Event_Page.php?EventID=104

Knowledge grows when it is shared. That’s the best side of user groups.

Posted in Exadata, oracle | 4 Comments »

The eagle has landed

Posted by fsengonul on April 19, 2011

After 3 months of planning and logistics, the migration of 600 TB of uncompressed data from Europe to Asia finished last weekend. Now our 2 X2-2 racks are hosting 4 databases.

Thanks to everybody who was involved in this project, for both their support and their criticism.

Posted in Exadata | 4 Comments »

External table to load compressed data residing on remote locations

Posted by fsengonul on April 8, 2011

The PREPROCESSOR option of external tables has given DWH systems a lot of opportunities. Below you may find Allen Brumm’s and Mehant Baid’s notes on how to load remote files via external tables, plus my minor changes to the script to add gunzip support and flexibility for different proxy and target file names.

In our environment, 16 sqlldr jobs (parallel & direct path) start from 8 ETL servers to load a single table. The input files sit on the ETL servers in gzip-compressed format; the ETL tool unzips them on the fly and feeds sqlldr. The target tables always have 16 hash partitions. As a result of this design, each table load creates 256 temp segments on the database side, and the difference in the hashing algorithms adds more spice to the situation. Also, the network cards on both the db nodes and the ETL servers are 1 GbE (do not ask why they’re not 10 GbE; they will be).

The best part of this method is the flexibility: you may choose to run the gunzip command on the target or on the source. In my case, running gunzip on the Oracle machine decreases the data flow on the network and also decreases the CPU usage on the ETL machine. I’m still searching for a quiet period on both sides to collect CPU and network usage statistics for both cases.

So, the steps:

First, passwordless ssh connectivity has to be established between the remote locations and the database servers.
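
Setting that up is the standard OpenSSH key exchange (a sketch; user and host names are illustrative):

# on each database node, as the user running the external table access
ssh-keygen -t rsa                      # accept defaults, empty passphrase
ssh-copy-id etl_user@10.XXX.XXX.11     # repeat for every remote location
ssh -q etl_user@10.XXX.XXX.11 true     # verify: must succeed without a prompt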

Then I changed the script to add support for different proxy and file names. In my case the input file names are all the same, but the locations and directory names differ.

The create script for external table:

CREATE TABLE INVOICE.EXT_ABC ( INVxxx NUMBER(16), ... )
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY dp_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR dp_exec_dir:'ora_xtra_pp.sh'
    FIELDS TERMINATED BY ':'
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    ( FIELDS.... )
  )
  LOCATION ('file01.gz','file02.gz','file03.gz','file04.gz',
            'file05.gz','file06.gz','file07.gz','file08.gz',
            'file09.gz','file10.gz','file11.gz','file12.gz',
            'file13.gz','file14.gz','file15.gz','file16.gz')
)
parallel 16;

Mehant Baid’s pdf : ExternalTablesRemoteDataAccess

The mapping file: ora_xtra_map.txt

a.gz:etl_user:10.XXX.XXX.11:/mfs_16way_000/Applications/Datamart/INV_DM/main/a.gz:cat:YES
a:etl_user:10.XXX.XXX.11:/mfs_16way_000/Applications/Datamart/INV_DM/main/a:cat:NO
file01.gz:etl_user:10.XXX.XXX.11:/DIR1/mfs_16way_000/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file02.gz:etl_user:10.XXX.XXX.12:/DIR2/mfs_16way_001/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file03.gz:etl_user:10.XXX.XXX.13:/DIR3/mfs_16way_002/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file04.gz:etl_user:10.XXX.XXX.14:/DIR4/mfs_16way_003/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file05.gz:etl_user:10.XXX.XXX.15:/DIR5/mfs_16way_004/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file06.gz:etl_user:10.XXX.XXX.16:/DIR6/mfs_16way_005/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file07.gz:etl_user:10.XXX.XXX.17:/DIR7/mfs_16way_006/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file08.gz:etl_user:10.XXX.XXX.18:/DIR8/mfs_16way_007/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file09.gz:etl_user:10.XXX.XXX.11:/DIR1/mfs_16way_008/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file10.gz:etl_user:10.XXX.XXX.12:/DIR2/mfs_16way_009/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file11.gz:etl_user:10.XXX.XXX.13:/DIR3/mfs_16way_010/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file12.gz:etl_user:10.XXX.XXX.14:/DIR4/mfs_16way_011/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file13.gz:etl_user:10.XXX.XXX.15:/DIR5/mfs_16way_012/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file14.gz:etl_user:10.XXX.XXX.16:/DIR6/mfs_16way_013/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file15.gz:etl_user:10.XXX.XXX.17:/DIR7/mfs_16way_014/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file16.gz:etl_user:10.XXX.XXX.18:/DIR8/mfs_16way_015/Applications/Datamart/INV_DM/main/input.gz:cat:YES

The modified ora_xtra_pp.sh

#! /bin/sh
PATH=/bin:/usr/bin
export PATH
#set -x
# ora_xtra_pp: ORAcle eXternal Table Remote file Access Pre-Processor
# fs version
# Format for the map file
# Consists of six fields separated using a :
# Syntax
# file:rusr:rhost:rdir_file_name:rcmd1:comp
# comp marks gzipped input files: YES/NO
#
# Examples
# foo.dat:abrumm:abrumm-dev:/home/abrumm/xt_Data/foo.dat:cat:NO
# f1.dat:fsengonul:fsengonul-dev:/home/fsengonul/xt_Data/foo.dat:cat:NO
# f1.gz:fsengonul:fsengonul-dev:/home/fsengonul/xt_Data/foo.gz:cat:YES
# this gives the freedom to use same-named files in different locations
#
#get filename component of LOCATION, the access driver
#provides the LOCATION as the first argument to the preprocessor
proxy_file_name=`basename $1`
data_dir_name=`dirname $1`
#Flag is set if the file name in the Map file matches the proxy file name
#where our data file is stored
flag_dirf=0
#loops through the map file and fetches details for ssh
#username,hostname and remote directory
file_not_found_err='ora_xtra_pp:
Map file missing. Need ora_xtra_map.txt in the data directory'
if [ -e $data_dir_name/ora_xtra_map.txt ]
then
    while read line
    do
    map_file_name=`echo $line | cut -d: -f1`
    if [ $map_file_name = $proxy_file_name ]
    then
        rdir_file_name=`echo $line | cut -d: -f4`
        rusr=`echo $line | cut -d: -f2`
        rhost=`echo $line | cut -d: -f3`
        rcmd1=`echo $line | cut -d: -f5`
        comp=`echo $line | cut -d: -f6`
        flag_dirf=1
        break
    fi
    done  < $data_dir_name/ora_xtra_map.txt
else
    echo $file_not_found_err 1>&2
    echo $data_dir_name 1>&2
    exit 1
fi
if  [ $flag_dirf = 1 ]
then
    if [ $comp = 'NO' ]
    then
        ssh -q $rusr@$rhost $rcmd1 $rdir_file_name
    fi
    if [ $comp = 'YES' ]
    then
        ssh -q $rusr@$rhost $rcmd1 $rdir_file_name | gunzip -c
    fi
fi
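
With the external table, the map file and the preprocessor in place, a load is a single insert-select (a sketch; the target table name INVOICE.ABC is made up):

-- sketch: direct-path load from the 16 remote gzip files in one statement
alter session enable parallel dml;
insert /*+ APPEND */ into INVOICE.ABC
select * from INVOICE.EXT_ABC;
commit;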

Posted in oracle | 2 Comments »

TROUG Day 2011

Posted by fsengonul on April 8, 2011


Turkish Oracle users are meeting on TROUG Day!

The Turkish Oracle Users Group (http://troug.org), founded in 2010 with the recognition of the International Oracle Users Group Community (IOUC), brings Turkish Oracle users together on April 21 for the first of the “TROUG Day” events it aims to hold every year. Attendance is free, and the event features technical presentations by subject-matter experts plus a “TROUG Panel” session steered by the audience’s questions. At the end of the day, Bilginç IT Akademi will raffle off an Oracle 11g OCP and an Oracle/Sun Java certification course, one winner each.

World-famous Oracle guru Jonathan Lewis will also be with you at TROUG Day!

Jonathan Lewis, closely followed by Oracle professionals all over the world, will give the opening talk and a technical presentation titled “Thinking About Joins”; Oracle ACE Kamran Agayev will also join us with a presentation on RMAN.

The event will also be streamed live over the internet!

For those who cannot attend in person for location or capacity reasons, all content will be broadcast live over the internet. The streaming address for following the event on the web:

http://www.theformspider.com/troug/index.php

See you on April 21!


Program

09:00-10:00  Opening & Thinking About Joins (Jonathan Lewis)
10:00-10:50  Exadata in 20+ Questions (Ferhat Şengönül & Hüsnü Şensoy)
Coffee Break
11:00-12:00  Enterprise Manager 11g Grid Control (Gökhan Atıl)
Lunch Break
13:00-14:00  Oracle Data Guard: How Do We Use It More Effectively? (Emre Baransel & Ogan Özdoğan)
14:00-14:50  ASM Best Practices (Orhan Bıyıklıoğlu)
Coffee Break
15:00-16:00  11g Backup & Recovery New Features (Kamran Agayev & Zekeriya Beşiroğlu)
16:00-16:50  Web 2.0 with PL/SQL: How to Make JavaScript and Java Folks Burst with Envy? (Yalım K. Gerger)
Coffee Break
17:00-18:00  TROUG Panel (H. Tonguç Yılmaz, Kamran Agayev, O. Yasin Saygılı, Talip Hakan Öztürk, Gökhan Atıl, Emre Baransel)

> REGISTER FOR FREE

Date: Thursday, April 21
Venue: Bahçeşehir Üniversitesi, Beşiktaş Campus, İstanbul
Posted in TROUG | Leave a Comment »
