Ferhat's Blog

There will be only one database

impdp via dblink on partitions and tbl$or$idx$part$num (It’s not a curse, just a function)

Posted by fsengonul on July 24, 2011

/* ALL THE BELOW IS JUST FOR TESTING, DO NOT USE on PRODUCTION , YET */

I hate restrictions.
The worst one is ORA-14100:

SQL> select * from "TABLE_OWNER"."TABLE_NAME"@XLK partition(P2011JAN01);
select * from "TABLE_OWNER"."TABLE_NAME"@XLK partition(P2011JAN01)
                          *
ERROR at line 1:
ORA-14100: partition extended table name cannot refer to a remote object

You may either define the borders of the partition in a WHERE clause, or create a view on the remote database and select from that view over the dblink. Defining the borders works only for range and list partitions; for hash partitions the only way is a view. But there has to be another way.

I have seen that impdp over a dblink can take a partition or subpartition name as a parameter, such that:

impdp ddsbase/xxxxx DIRECTORY=DPE1 NETWORK_LINK=XLK tables=TABLE_OWNER.TABLE_NAME:P2011JAN01 job_name=DPE1_P2011JAN01 LOGFILE=DPE1:P2011JAN01.log CONTENT=DATA_ONLY QUERY=\" order by contract_sk,content_sk \" 

. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_02" 20303431 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_14" 20303020 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_01" 20222512 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_03" 20215302 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_04" 20261746 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_05" 20225309 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_06" 20301212 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_07" 20252840 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_08" 20254043 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_10" 20220187 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_13" 20237208 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_15" 20218603 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_16" 20225260 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_09" 20267115 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_11" 20239070 rows
. . imported "TABLE_OWNER"."TABLE_NAME":"P2011JAN01"."SP2011JAN01_12" 20203805 rows

So impdp has a way to do this, even at the subpartition level. But even with impdp, if you specify parallel=16, only one partition is loaded at a time, because of locking on the target table.
When I checked the query running on the source side, I saw the strange, undocumented tbl$or$idx$part$num function:


FROM "TABLE_OWNER"."TABLE_NAME"@XLK KU$
WHERE   tbl$or$idx$part$num("TABLE_OWNER"."TABLE_NAME"@XLK,0,3,0,KU$.ROWID)=4791815

So Oracle has a way to select a single subpartition without the subpartition keyword. As you may guess, 4791815 is the subpartition's object_id in dba_objects.

SQL> col owner format a10
SQL> col object_name format a15
SQL> col subobject_name format a15
SQL> select owner,object_name,subobject_name  from dba_objects where object_id=4791815;

OWNER      OBJECT_NAME     SUBOBJECT_NAME
---------- --------------- ------------------------------
TABLE_OWNER  TABLE_NAME SP2011JAN01_02

Let's mimic impdp and try to create a 16-way parallel data movement from one database to another, based on subpartitions.


SQL> select  count(*) from
  2   TABLE_OWNER.TABLE_NAME@XLK 
  3   WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME", 0,3,0,ROWID) = 4791815;
select /*+ OPAQUE_TRANSFORM NESTED_TABLE_GET_REFS NESTED_TABLE_GET_REFS */ count(*) from
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 28728
Session ID: 1430 Serial number: 1865

maxdb04: 160363               ORA 7445 [qesmaGetPam()+4]                                  2011-07-24 18:20:07.398000 +03:00

adrci> show incident -mode detail -p "incident_id=160363"

ADR Home = /u01/app/oracle/diag/rdbms/dbname/dbname_4:
*************************************************************************

**********************************************************
INCIDENT INFO RECORD 1
**********************************************************
   INCIDENT_ID                   160363
   STATUS                        ready
   CREATE_TIME                   2011-07-24 18:20:07.398000 +03:00
   PROBLEM_ID                    8
   CLOSE_TIME                    <NULL>
   FLOOD_CONTROLLED              none
   ERROR_FACILITY                ORA
   ERROR_NUMBER                  7445
   ERROR_ARG1                    qesmaGetPam()+4
   ERROR_ARG2                    SIGSEGV
   ERROR_ARG3                    ADDR:0x8
   ERROR_ARG4                    PC:0x13D9310
   ERROR_ARG5                    Address not mapped to object
   ERROR_ARG6                    <NULL>
   ERROR_ARG7                    <NULL>
   ERROR_ARG8                    <NULL>
   ERROR_ARG9                    <NULL>
   ERROR_ARG10                   <NULL>
   ERROR_ARG11                   <NULL>
   ERROR_ARG12                   <NULL>
   SIGNALLING_COMPONENT          PART
   SIGNALLING_SUBCOMPONENT       <NULL>
   SUSPECT_COMPONENT             <NULL>
   SUSPECT_SUBCOMPONENT          <NULL>
   ECID                          <NULL>
   IMPACTS                       0
   PROBLEM_KEY                   ORA 7445 [qesmaGetPam()+4]
   FIRST_INCIDENT                160386
   FIRSTINC_TIME                 2011-07-24 17:06:21.071000 +03:00
   LAST_INCIDENT                 160363
   LASTINC_TIME                  2011-07-24 18:20:07.398000 +03:00
   IMPACT1                       0
   IMPACT2                       0
   IMPACT3                       0
   IMPACT4                       0
   KEY_NAME                      PQ
   KEY_VALUE                     (0, 1311520807)
   KEY_NAME                      Client ProcId
   KEY_VALUE                     oracle@hostname (TNS V1-V3).6051_47869439584576
   KEY_NAME                      SID
   KEY_VALUE                     2708.15225
   KEY_NAME                      ProcId
   KEY_VALUE                     41.95
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/app/oracle/diag/rdbms/dbname/dbname_4/trace/dbname_4_ora_6051.trc
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/app/oracle/diag/rdbms/dbname/dbname_4/incident/incdir_160363/dbname_4_ora_6051_i160363.trc



Oops, it creates a dump on the source db.
Don't give up yet.
When I check the dump file, it mentions a dblink named "!":

TBL$OR$IDX$PART$NUM("TABLE_OWNER"."TABLE_NAME"@!

So this time, change the SQL to:

SQL> select  count(*) from TABLE_OWNER.TABLE_NAME@XLK
  2  WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0,3,0,ROWID) = 4791815;

  COUNT(*)
----------
  17760539

SQL> 

When I add the dblink name to the table reference inside the TBL$OR$IDX$PART$NUM function as well, it works.

And finally, it's easy to create 16 insert /*+ APPEND */ statements running in parallel on the same partition, which has 16 subpartitions, and combine them with our in-house code to run them as separate jobs.

SQL> set pagesize 100
SQL> r
  1  select subobject_name,object_id from dba_objects@XLK
  2* where  owner='TABLE_OWNER' and object_name='TABLE_NAME'  and subobject_name like 'SP2011JAN01%'

SUBOBJECT_NAME                  OBJECT_ID
------------------------------ ----------
SP2011JAN01_01                    4769085
SP2011JAN01_02                    4769086
SP2011JAN01_03                    4769087
SP2011JAN01_04                    4769088
SP2011JAN01_05                    4769089
SP2011JAN01_06                    4769090
SP2011JAN01_07                    4769091
SP2011JAN01_08                    4769092
SP2011JAN01_09                    4769093
SP2011JAN01_10                    4769094
SP2011JAN01_11                    4769095
SP2011JAN01_12                    4769096
SP2011JAN01_13                    4769097
SP2011JAN01_14                    4769098
SP2011JAN01_15                    4769099
SP2011JAN01_16                    4769100

16 rows selected.

SQL> 

SQL> insert /*+ APPEND  */ into TABLE_OWNER.TABLE_NAME subpartition(SP2011JAN01_01)
  2  select * from "TABLE_OWNER"."TABLE_NAME"@XLK "KU$"
  3  WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0, 3, 0,ROWID) = 4769085
  4  order by contract_sk,content_sk;

and in another 15 sessions:

SQL> insert /*+ APPEND  */ into TABLE_OWNER.TABLE_NAME subpartition(SP2011JAN01_02)
  2  select * from "TABLE_OWNER"."TABLE_NAME"@XLK "KU$"
  3  WHERE TBL$OR$IDX$PART$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0, 3, 0,ROWID) = 4769086
  4  order by contract_sk,content_sk;
.
.
.
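Generating the sixteen statements by hand is tedious. The loop below is a minimal sketch (the file names `subparts.txt` and `load_*.sql` are hypothetical; the table, dblink and order-by columns are the placeholders used throughout this post) that writes one SQL script per subpartition from the subpartition/object_id pairs returned by the dba_objects@XLK query above, ready to be launched as separate jobs:

```shell
#!/bin/sh
# Sample input: "SUBPARTITION_NAME OBJECT_ID" pairs, as returned by the
# dba_objects@XLK query above. In practice this would be spooled from SQL*Plus.
cat > subparts.txt <<'EOF'
SP2011JAN01_01 4769085
SP2011JAN01_02 4769086
EOF

# Write one INSERT script per subpartition; the $ signs of the
# TBL$OR$IDX$PART$NUM function are escaped so the shell leaves them alone.
while read sp oid
do
cat > "load_${sp}.sql" <<SQL
insert /*+ APPEND */ into TABLE_OWNER.TABLE_NAME subpartition($sp)
select * from "TABLE_OWNER"."TABLE_NAME"@XLK "KU\$"
WHERE TBL\$OR\$IDX\$PART\$NUM ("TABLE_OWNER"."TABLE_NAME"@XLK, 0, 3, 0, ROWID) = $oid
order by contract_sk,content_sk;
commit;
exit;
SQL
done < subparts.txt
```

Each generated load_<subpartition>.sql can then be run in its own SQL*Plus session, which is exactly the 16-way parallelism described above.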

It took 3 hours with impdp to move a single day, while it takes only 10 minutes with this method.
I'm not planning to use it yet, but I'm wondering about your comments.

Posted in oracle | 5 Comments »

Presentation in ilOUG

Posted by fsengonul on May 16, 2011

I'll be in Tel Aviv on Wednesday (18/05) to present our latest Exadata project.
Thanks to the Israel Oracle User Group for their kind invitation.
It will also be a great chance for me to learn from their experiences, both with Exadata and with user group activities.
For more information : http://www.iloug.org.il/Event_Page.php?EventID=104

Knowledge increases when it is shared. That's the best part of user groups.

Posted in Exadata, oracle | 4 Comments »

The eagle has landed

Posted by fsengonul on April 19, 2011

After 3 months of planning and logistics, the migration of 600 TB of uncompressed data from Europe to Asia finished last weekend. Now our two X2-2 racks are hosting 4 databases.

Thanks to everybody who was involved in this project, with both their support and their criticism.

Posted in Exadata | 4 Comments »

External table to load compressed data residing on remote locations

Posted by fsengonul on April 8, 2011

The PREPROCESSOR option in external tables has given DWH systems a lot of opportunities. Below you may find Allen Brumm's and Mehant Baid's notes on how to load remote files via external tables, and my minor changes to the script to add gunzip support and flexibility for different proxy and target file names.

In our environment, in order to load a single table, 16 sqlldr jobs start from 8 ETL servers (parallel & direct path). The input files sit on the ETL servers in gzip-compressed format; the ETL tool unzips them on the fly and feeds sqlldr. The target tables always have 16 hash partitions. As a result of this design, for each table I get 256 temp segments created on the database side. The difference in the hashing algorithm adds more spice to the situation. And the network cards on both the db nodes and the ETL servers are 1 GbE. (Do not ask why they're not 10 GbE. They will be.)

The best part of this method is the flexibility. You may choose to run the gunzip command on the target or on the source. In my case, running gunzip on the Oracle machine decreases the data flow on the network and also decreases the CPU usage on the ETL machine. I'm still looking for a quiet period on both sides so I can collect CPU and network usage statistics for both cases.

So, the steps:

First, passwordless ssh connectivity has to be established between the remote locations and the database servers.

Then I changed the script to add support for different proxy and file names. In my case the input file names are all the same, but the locations and directory names are different.

The create script for external table:

CREATE TABLE INVOICE.EXT_ABC
(  INVxxx   NUMBER(16), ... )
ORGANIZATION EXTERNAL
(  TYPE oracle_loader
   DEFAULT DIRECTORY dp_data_dir
   ACCESS PARAMETERS
   (  RECORDS DELIMITED BY NEWLINE
      PREPROCESSOR dp_exec_dir:'ora_xtra_pp.sh'
      FIELDS TERMINATED BY ':'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (  FIELDS ...  )
   )
   LOCATION ('file01.gz','file02.gz','file03.gz','file04.gz','file05.gz','file06.gz','file07.gz','file08.gz',
             'file09.gz','file10.gz','file11.gz','file12.gz','file13.gz','file14.gz','file15.gz','file16.gz')
)
parallel 16;

Mehant Baid's PDF: ExternalTablesRemoteDataAccess

The mapping file: ora_xtra_map.txt

a.gz:etl_user:10.XXX.XXX.11:/mfs_16way_000/Applications/Datamart/INV_DM/main/a.gz:cat:YES
a:etl_user:10.XXX.XXX.11:/mfs_16way_000/Applications/Datamart/INV_DM/main/a:cat:NO
file01.gz:etl_user:10.XXX.XXX.11:/DIR1/mfs_16way_000/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file02.gz:etl_user:10.XXX.XXX.12:/DIR2/mfs_16way_001/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file03.gz:etl_user:10.XXX.XXX.13:/DIR3/mfs_16way_002/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file04.gz:etl_user:10.XXX.XXX.14:/DIR4/mfs_16way_003/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file05.gz:etl_user:10.XXX.XXX.15:/DIR5/mfs_16way_004/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file06.gz:etl_user:10.XXX.XXX.16:/DIR6/mfs_16way_005/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file07.gz:etl_user:10.XXX.XXX.17:/DIR7/mfs_16way_006/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file08.gz:etl_user:10.XXX.XXX.18:/DIR8/mfs_16way_007/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file09.gz:etl_user:10.XXX.XXX.11:/DIR1/mfs_16way_008/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file10.gz:etl_user:10.XXX.XXX.12:/DIR2/mfs_16way_009/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file11.gz:etl_user:10.XXX.XXX.13:/DIR3/mfs_16way_010/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file12.gz:etl_user:10.XXX.XXX.14:/DIR4/mfs_16way_011/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file13.gz:etl_user:10.XXX.XXX.15:/DIR5/mfs_16way_012/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file14.gz:etl_user:10.XXX.XXX.16:/DIR6/mfs_16way_013/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file15.gz:etl_user:10.XXX.XXX.17:/DIR7/mfs_16way_014/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file16.gz:etl_user:10.XXX.XXX.18:/DIR8/mfs_16way_015/Applications/Datamart/INV_DM/main/input.gz:cat:YES
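The preprocessor resolves each LOCATION entry through this map. The sketch below reproduces just that lookup step standalone (the two sample map lines are copied from above, with the IPs masked exactly as in the post), showing how the colon-separated fields are pulled apart with cut:

```shell
#!/bin/sh
# Two sample entries of ora_xtra_map.txt (proxy:user:host:remote_path:cmd:gzip).
cat > ora_xtra_map.txt <<'EOF'
file01.gz:etl_user:10.XXX.XXX.11:/DIR1/mfs_16way_000/Applications/Datamart/INV_DM/main/input.gz:cat:YES
file02.gz:etl_user:10.XXX.XXX.12:/DIR2/mfs_16way_001/Applications/Datamart/INV_DM/main/input.gz:cat:YES
EOF

# Resolve the proxy file name file02.gz the way ora_xtra_pp.sh does.
proxy=file02.gz
line=`grep "^$proxy:" ora_xtra_map.txt`
rusr=`echo "$line" | cut -d: -f2`
rhost=`echo "$line" | cut -d: -f3`
rdir_file_name=`echo "$line" | cut -d: -f4`
rcmd1=`echo "$line" | cut -d: -f5`
comp=`echo "$line" | cut -d: -f6`
echo "$proxy -> $rusr@$rhost:$rdir_file_name ($rcmd1, gzip=$comp)"
```

Note that the proxy name (field 1) is purely local: two different LOCATION entries can point to the same remote file name on different hosts, which is exactly the flexibility mentioned above.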

The modified ora_xtra_pp.sh

#!/bin/sh
PATH=/bin:/usr/bin
export PATH
#set -x
# ora_xtra_pp: ORAcle eXternal Table Remote file Access Pre-Processor
# fs version
# Format for the map file:
# six fields separated by a colon
# Syntax:
# file:rusr:rhost:rdir_file_name:rcmd1:comp
# comp flags gzipped input files: YES/NO
#
# Examples:
# foo.dat:abrumm:abrumm-dev:/home/abrumm/xt_Data/foo.dat:cat:NO
# f1.dat:fsengonul:fsengonul-dev:/home/fsengonul/xt_Data/foo.dat:cat:NO
# f1.gz:fsengonul:fsengonul-dev:/home/fsengonul/xt_Data/foo.gz:cat:YES
# This gives the freedom to use same-named files in different locations.
#
# Get the filename component of LOCATION; the access driver
# provides the LOCATION as the first argument to the preprocessor.
proxy_file_name=`basename "$1"`
data_dir_name=`dirname "$1"`
# Flag is set when a map file entry matches the proxy file name
# where our data file is stored.
flag_dirf=0
# Loop through the map file and fetch the details for ssh:
# username, hostname and remote directory/file name.
file_not_found_err='ora_xtra_pp:
Map file missing. Needed ora_xtra_map.txt in data directory'
if [ -e "$data_dir_name/ora_xtra_map.txt" ]
then
    while read line
    do
        map_file_name=`echo "$line" | cut -d: -f1`
        if [ "$map_file_name" = "$proxy_file_name" ]
        then
            rdir_file_name=`echo "$line" | cut -d: -f4`
            rusr=`echo "$line" | cut -d: -f2`
            rhost=`echo "$line" | cut -d: -f3`
            rcmd1=`echo "$line" | cut -d: -f5`
            comp=`echo "$line" | cut -d: -f6`
            flag_dirf=1
            break
        fi
    done < "$data_dir_name/ora_xtra_map.txt"
else
    echo "$file_not_found_err" 1>&2
    echo "$data_dir_name" 1>&2
    exit 1
fi
if [ $flag_dirf = 1 ]
then
    if [ "$comp" = 'NO' ]
    then
        ssh -q "$rusr@$rhost" "$rcmd1" "$rdir_file_name"
    fi
    if [ "$comp" = 'YES' ]
    then
        ssh -q "$rusr@$rhost" "$rcmd1" "$rdir_file_name" | gunzip -c
    fi
fi

Posted in oracle | 2 Comments »

TROUG Day 2011

Posted by fsengonul on April 8, 2011


 

Turkish Oracle users are meeting on TROUG Day!

Founded in 2010 under the International Oracle Users Group Community (IOUC), the Turkish Oracle Users Group (http://troug.org) is bringing Turkish Oracle users together on April 21 for the first of the "TROUG Day" events it aims to organize every year. Attendance is free; the event will feature technical presentations by experts in their fields and a "TROUG Panel" session driven by the participants' questions. At the end of the day, Bilginç IT Akademi will also raffle off an Oracle 11g OCP and an Oracle/Sun Java certification training, one winner each.

World-famous Oracle guru Jonathan Lewis will be with you on TROUG Day as well!

Jonathan Lewis, closely followed by Oracle professionals all over the world, will give the opening speech followed by his technical presentation "Thinking About Joins", and Oracle ACE Kamran Agayev will join us with a presentation on RMAN.

The event will also be streamed live on the internet!

For those who cannot attend in person due to location or capacity, all of the content will be broadcast live on the internet. The streaming address for those who want to follow the event on the web:

http://www.theformspider.com/troug/index.php

See you on April 21!


Program

09:00-10:00  Opening & Thinking About Joins (Jonathan Lewis)
10:00-10:50  Exadata in 20+ Questions (Ferhat Şengönül & Hüsnü Şensoy)
Coffee Break
11:00-12:00  Enterprise Manager 11g Grid Control (Gökhan Atıl)
Lunch Break
13:00-14:00  Oracle Data Guard: How Can We Use It More Effectively? (Emre Baransel & Ogan Özdoğan)
14:00-14:50  ASM Best Practices (Orhan Bıyıklıoğlu)
Coffee Break
15:00-16:00  11g Backup & Recovery New Features (Kamran Agayev & Zekeriya Beşiroğlu)
16:00-16:50  Web 2.0 with PL/SQL: How to Make JavaScript and Java Developers Green with Envy (Yalım K. Gerger)
Coffee Break
17:00-18:00  TROUG Panel (H. Tonguç Yılmaz, Kamran Agayev, O. Yasin Saygılı, Talip Hakan Öztürk, Gökhan Atıl, Emre Baransel)

> REGISTER FOR FREE

Date: Thursday, April 21
Venue: Bahçeşehir Üniversitesi, Beşiktaş Campus, Istanbul

 



Posted in TROUG | Leave a Comment »

The first and the only Oracle user group application in the world. From TROUG

Posted by fsengonul on March 27, 2011

http://itunes.apple.com/tr/app/turkish-oracle-user-group/id424769558?mt=8

You may download the first and only Oracle user group application on iTunes from this link. Of course, it's from TROUG.
With this unique application you can follow our events and post in our forums. Thank you, Zekeriya Beşiroğlu.

Posted in TROUG | Leave a Comment »

TROUG @ Sakarya Üniversitesi

Posted by fsengonul on March 25, 2011



Today (24 March 2011), as members of the Turkish Oracle User Group (TROUG), Gökhan Atıl, Gürcan Orhan and I were in Sakarya to join the Sakarya University Computer Science Community's Technology and Informatics Festival and share our experiences with the university students. In this beautiful campus with a lake view, we attended a well-organized event and shared our experience and knowledge about Oracle and business life in general. We also learned a lot from the questions they asked.
It was so nice to see young students who are curious, determined and dedicated to research. In Turkey we are moving from merely following technology to setting its direction. I'm quite sure that Turkey will soon become a database technology provider as well.
I'd like to thank Mr. Burak Çakmak and the other members of the Computer Science Community for their kindness and for taking such good care of us.

Posted in TROUG | 3 Comments »

Playing with policy managed databases

Posted by fsengonul on March 17, 2011

It was a great idea to try Quality of Service Management in 11gR2. But unfortunately, at the moment it only supports OLTP workloads (average response time of less than one second, preferably less than 0.5 seconds), and it does not support parallel queries.

Even though I can't use it at the moment, I can still gain something from the server pool architecture:
I can move nodes from one database to another.
Below you may find my first tries.


oracle@node01:/home/oracle>srvctl status srvpool -g POOL1B

Server pool name: POOL1B
Active servers count: 1
oracle@node01:/home/oracle>srvctl status srvpool -g POOL1B -a
Server pool name: POOL1B
Active servers count: 1
Active server names: node14
NAME=node14 STATE=ONLINE
oracle@node01:/home/oracle>srvctl config database -d XDB
Database unique name: XDB
Database name: XDB
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/XDB/spfileXDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: POOL1B
Database instances:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is policy managed
oracle@node01:/home/oracle>srvctl status srvpool -g POOL1B -a
Server pool name: POOL1B
Active servers count: 1
Active server names: node14
NAME=node14 STATE=ONLINE
oracle@node01:/home/oracle>srvctl start database -d XDB
oracle@node01:/home/oracle>srvctl status database -d XDB
Instance XDB_1 is running on node node14
oracle@node01:/home/oracle>srvctl modify srvpool -g POOL1B -n "node14,node15"
oracle@node01:/home/oracle>srvctl status srvpool -g POOL1B -a
Server pool name: POOL1B
Active servers count: 2
Active server names: node14,node15
NAME=node14 STATE=ONLINE
NAME=node15 STATE=ONLINE
oracle@node01:/home/oracle>srvctl status database -d XDB
Instance XDB_1 is running on node node14
Instance XDB_4 is running on node node15
oracle@node01:/home/oracle>srvctl modify srvpool -g POOL1B -n "node14"
PRCS-1011 : Failed to modify server pool POOL1B
CRS-2736: The operation requires stopping resource 'ora.XDB.db' on server 'node15'
CRS-2738: Unable to modify server pool 'ora.POOL1B' as this will affect running resources, but the force option was not specified
oracle@node01:/home/oracle>srvctl stop instance -d XDB -n node15
oracle@node01:/home/oracle>srvctl status database -d XDB
Instance XDB_1 is running on node node14
Instance XDB_4 is not running on node node15
oracle@node01:/home/oracle>srvctl modify srvpool -g POOL1B -n "node14"
oracle@node01:/home/oracle>srvctl status database -d XDB
Instance XDB_1 is running on node node14
oracle@node01:/home/oracle>

Posted in Exadata, oracle | Leave a Comment »

What happens when exadata has lost two disks?

Posted by fsengonul on March 8, 2011

Actually the question is: "What happens when ASM loses two disks in different failgroups under normal redundancy?"
We're sure there will be no problem if we lose all of the disks in a single failgroup, but to answer the question we have to understand how mirroring works in ASM.
In normal redundancy, the mirror of each extent is written to a disk in another failgroup. This creates a partnership between disks. As far as I've found out, this information is kept in the x$kfdpartner fixed table.

From the following query it seems that an extent in DATA_CD_10_CEL27 has a mirror on DATA_CD_07_CEL01. If we lose these two disks, the system has to crash.
On the other hand, if we lose DATA_CD_10_CEL27 and the disks in its partner list (8 disks at the moment) stay alive, there will be no problem if we lose another disk.


select dg.name dg_name,
d.name disk_name,
pd.name partner_name
from x$kfdpartner p,v$asm_diskgroup dg,v$asm_disk d,v$asm_disk pd
where
p.grp=dg.group_number
and d.disk_number=p.disk and d.group_number=p.grp
and pd.disk_number=p.number_kfdpartner and pd.group_number=p.grp
and dg.name='DATA' and d.name='DATA_CD_10_CEL27';

DG_NAME  DISK_NAME         PARTNER_NAME
-------  ----------------  ----------------
DATA     DATA_CD_10_CEL27  DATA_CD_07_CEL01
DATA     DATA_CD_10_CEL27  DATA_CD_06_CEL01
DATA     DATA_CD_10_CEL27  DATA_CD_07_CEL25
DATA     DATA_CD_10_CEL27  DATA_CD_06_CEL25
DATA     DATA_CD_10_CEL27  DATA_CD_07_CEL26
DATA     DATA_CD_10_CEL27  DATA_CD_06_CEL26
DATA     DATA_CD_10_CEL27  DATA_CD_07_CEL28
DATA     DATA_CD_10_CEL27  DATA_CD_06_CEL28
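Spooling the PARTNER_NAME column to a file makes the "would a second failure be fatal?" check scriptable. A minimal sketch (partners.txt is a hypothetical spool of the query above) that tells whether a given second disk is a partner of the one already lost:

```shell
#!/bin/sh
# partners.txt: the PARTNER_NAME column for the already-failed disk,
# as returned by the x$kfdpartner query above.
cat > partners.txt <<'EOF'
DATA_CD_07_CEL01
DATA_CD_06_CEL01
DATA_CD_07_CEL25
DATA_CD_06_CEL25
DATA_CD_07_CEL26
DATA_CD_06_CEL26
DATA_CD_07_CEL28
DATA_CD_06_CEL28
EOF

second_disk=DATA_CD_07_CEL01
# -x matches the whole line, so a disk name cannot match a longer one by prefix.
if grep -qx "$second_disk" partners.txt
then
    echo "CRITICAL: $second_disk is a partner of the failed disk"
else
    echo "OK: $second_disk holds no mirror of the failed disk's extents"
fi
```

With the 8-disk partner list above, any second failure outside those 8 disks (and outside the first disk's own failgroup scenario discussed above) leaves at least one mirror copy of every extent intact.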

Posted in Exadata, oracle | Leave a Comment »

What happens when exadata has lost a disk?

Posted by fsengonul on February 5, 2011

We experienced a disk failure today and replaced the disk without any problems or manual commands.
This morning we lost a disk in Exadata. We got an alert and an email mentioning that "Hard disk status changed to predictive failure: critical". The email also included a drawing of the location of the failing disk.
From the cell and ASM logs it can easily be seen that the cell dropped the grid disks and started a rebalance operation to make sure all the data still has 2 copies.
We did not wait for the Oracle/Sun engineer to come and replace the disk. Our system admins replaced it, and Exadata automatically recognized the new disk and started a new rebalance operation without any manual commands.


/* cell  triggers the drop operation */
Sat Feb 05 11:50:31 2011
Received subopcode 6 in publish ASM Query on 3 guids.
NOTE: Initiating ASM Instance operation: ASM DROP critical disk on 3 disks
DATA_CD_08_cel11 [00000xxxx-yyyy-zzzz-0000-000000000000]
RECO_CD_08_cel11 [00000xxxx-yyyy-zzzz-0000-000000000000]
SYSTEMDG_CD_08_cel11 [00000xxxx-yyyy-zzzz-0000-000000000000]


/* the corrupt disk has been replaced with the spare one */
Sat Feb 05 16:40:44 2011
Drop celldisk CD_08_cel11 (options: force, from memory only) - begin
Drop celldisk CD_08_cel11 - end
Sat Feb 05 16:40:44 2011
Open received invalid device name SYSTEMDG_CD_08_cel11
Sat Feb 05 16:40:44 2011
Open received invalid device name SYSTEMDG_CD_08_cel11
Sat Feb 05 16:42:44 2011
create CELLDISK CD_08_cel11 on device /dev/sdi
Sat Feb 05 16:42:44 2011
create GRIDDISK DATA_CD_08_cel11 on CELLDISK CD_08_cel11
Griddisk DATA_CD_08_cel11  - number is (248)
NOTE: Initiating ASM instance operation:
Operation: DROP and ADD of ASM disk for Grid disk guid=00000xxxx-yyyy-zzzz-0000-000000000000
Received subopcode 4 in publish ASM Query on 1 guids.
NOTE: Initiating ASM Instance operation: ASM DROP ADD disk on 1 disks
DATA_CD_08_cel11 [00000xxxx-yyyy-zzzz-0000-000000000000]

Storage Index Allocation for GridDisk DATA_CD_08_cel11 successful

Sat Feb 05 16:42:44 2011
create GRIDDISK RECO_CD_08_cel11 on CELLDISK CD_08_cel11
Griddisk RECO_CD_08_cel11  - number is (252)
NOTE: Initiating ASM instance operation:
Operation: DROP and ADD of ASM disk for Grid disk guid=00000xxxx-yyyy-zzzz-0000-000000000000
Received subopcode 4 in publish ASM Query on 1 guids.
NOTE: Initiating ASM Instance operation: ASM DROP ADD disk on 1 disks
RECO_CD_08_cel11 [00000xxxx-yyyy-zzzz-0000-000000000000]

Storage Index Allocation for GridDisk RECO_CD_08_cel11 successful

 



Sat Feb 05 16:42:44 2011
create GRIDDISK SYSTEMDG_CD_08_cel11 on CELLDISK CD_08_cel11
Griddisk SYSTEMDG_CD_08_cel11  - number is (256)
NOTE: Initiating ASM instance operation:
Operation: DROP and ADD of ASM disk for Grid disk guid=00000xxxx-yyyy-zzzz-0000-000000000000
Received subopcode 4 in publish ASM Query on 1 guids.
NOTE: Initiating ASM Instance operation: ASM DROP ADD disk on 1 disks
SYSTEMDG_CD_08_cel11 [00000xxxx-yyyy-zzzz-0000-000000000000]

Posted in Exadata, oracle | 7 Comments »