Ferhat's Blog

There will be only one database

2 x2-2 have been installed.

Posted by fsengonul on January 16, 2011

Our new x2-2 machines have been installed. We now have 16 compute nodes and 28 storage cells. The main issue in the installation was the cabling for the two racks. Out of the factory they had been cabled as 2 separate 8-node machines. The Sun engineer first removed the 7 inter-switch links between the leaf switches, and he also removed the 2 links between the spine and leaf switches. It seems that in a single-rack installation the leaf-to-leaf links provide the shortest path, while in a 2-rack installation we do not have enough empty ports to connect all the spines directly. They used 32 cables to connect the spine and leaf switches between the racks.
Below you may find the diagrams for cabling.
Other than that there is no difference between a single node and multi node installation.
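As a sanity check on the cable count, here is a minimal sketch. The per-rack switch counts and the even 8-uplinks-per-leaf split are my assumptions from reading the diagrams, not figures stated explicitly in the post:

```python
# Hypothetical link-count check for the 2-rack spine/leaf fabric.
# Assumptions (not from the post): 2 leaf switches and 1 spine switch
# per rack, and all 8 uplink ports used on each leaf switch.

racks = 2
leaves_per_rack = 2
uplinks_per_leaf = 8

total_leaves = racks * leaves_per_rack           # 4 leaf switches in total
spine_cables = total_leaves * uplinks_per_leaf   # every leaf uses all 8 uplinks

print(spine_cables)  # 32, matching the cable count reported above
```

Under these assumptions the arithmetic lands exactly on the 32 cables the Sun engineer used.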
PS: I should also mention a bug in cluvfy: it does not work for more than 10 nodes.

5 Responses to “2 x2-2 have been installed.”

  1. laotsao said

    Oracle keeps the cabling information for Exadata and Exalogic behind a firewall,
    accessible only to paying customers.
    In a CLOS network, in this case a 3-stage network, one always uses the spine switches to provide connectivity between the leaf switches, and the leaf switches to connect to the server nodes.

    As your diagram shows, there are 8+14 nodes within a rack and 8 links to the spine, so this is a (1 - 8/22) blocking CLOS network. If one load-balances across the dual ports of the HBA, then the blocking is only (1 - 8/11).

    It will be interesting to see what the blocking % is in reality.

    • fsengonul said

      You may check the following link for multiple-rack scenarios for both Exalogic and Exadata. They are publicly accessible. If you have an algorithm or method to check the blocking %, that would be great.

      http://download.oracle.com/docs/cd/E18476_01/doc.220/e18478/app_network.htm

    • kocakahin said

      Hi, blocking is not really an issue for Exadata. As you may know, each switch hop adds around 100 ns of processing time, and the fat-tree topology guarantees that we will be doing at most 2 hops in any case. So for low-latency access it is not a big problem (needless to say, the same goes for throughput-oriented systems). You may use the qperf tool to compare one-switch and two-switch throughput and latency.
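The blocking figures from the thread above can be checked directly. A quick sketch using only the numbers given in the comments (8+14 = 22 nodes behind the leaves, 8 uplinks to the spine; the halving for dual-port load balancing is the commenter's own assumption):

```python
# Blocking-ratio sketch using the numbers quoted in the comments:
# 22 nodes share 8 uplinks toward the spine.
from fractions import Fraction

nodes, uplinks = 22, 8

blocking_single = 1 - Fraction(uplinks, nodes)     # 1 - 8/22
# If traffic is balanced across both ports of the dual-port adapter,
# the effective oversubscription halves (per the comment):
blocking_dual = 1 - Fraction(uplinks, nodes // 2)  # 1 - 8/11

print(float(blocking_single))  # ~0.64, i.e. roughly 64% blocking
print(float(blocking_dual))    # ~0.27, i.e. roughly 27% blocking
```

These are theoretical oversubscription ratios; the measured blocking would depend on the actual traffic pattern.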
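Similarly, the latency argument can be put in rough numbers. The ~100 ns per-hop figure and the 2-hop bound are the ones quoted in the comment, not measurements:

```python
# Rough latency sketch using the figures from the comment above:
# ~100 ns of processing time per switch hop, at most 2 hops in the fat tree.
hop_latency_ns = 100

one_switch_path = 1 * hop_latency_ns   # both endpoints on the same leaf switch
two_switch_path = 2 * hop_latency_ns   # via the spine, the 2-hop worst case

extra_ns = two_switch_path - one_switch_path
print(extra_ns)  # 100 ns of extra switching latency on the longer path
```

To measure rather than estimate, qperf can be used as the comment suggests: start qperf on one node and point a second node at it with latency and bandwidth tests, comparing node pairs whose paths cross one switch versus two.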

  2. laotsao said

    hi
    are you allowed to post pictures of the back of the rack (single and two rack) so one can appreciate the cabling of the IB and ethernet?
    TIA

  3. […] single and two rack IB Fabric Posted on January 17, 2011 by laotsao this blog provides IB fabric diagrams for single and two rack IB […]
