
MDS Blogs

Please visit
http://mds9000.blogspot.com for
MDS Config and Troubleshooting Info.

-jerome.jsph@gmail.com

UCS Config -Disclaimer

Please note that this is just a lab recreation and documentation; it in no way replaces the official manuals and best-practice documentation.

Tuesday, November 3, 2009

UCS_SAN_Setup1.doc

  1. UCS Setup
    a. Create vHBAs in the service profile

 

Note the vHBA name fc0 and Fabric ID A.

Create fc1 in Fabric ID B.
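These vHBAs are created in the service profile through the UCS Manager GUI. For reference, a rough UCS Manager CLI equivalent is sketched below; the scope and keyword names are from memory, so treat them as assumptions and check the UCSM CLI configuration guide for your release (server3 is the service-profile name used later in this post).

UCS1-FI-A# scope org /
UCS1-FI-A /org # scope service-profile server3
UCS1-FI-A /org/service-profile # create vhba fc0
UCS1-FI-A /org/service-profile/vhba # set fabric a
UCS1-FI-A /org/service-profile/vhba # exit
UCS1-FI-A /org/service-profile # create vhba fc1
UCS1-FI-A /org/service-profile/vhba # set fabric b
UCS1-FI-A /org/service-profile/vhba # commit-buffer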

 

It should look like the screenshot above. Include the rack ID, chassis ID, and server ID in the vHBA WWPN so that it is easily identifiable: 01 = rack 1, 01 = chassis 1, 3 = server ID (server 3 here), 0 = fc0.
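For example, this naming scheme shows up in the WWPN 20:00:00:00:00:01:01:30 that appears in the flogi output below for server 1/3, fc0:

20:00:00:00:00:01:01:30
               |  |  ||
               |  |  |+- 0  = fc0 (the fc1 WWPN ends in 1)
               |  |  +-- 3  = server 3
               |  +----- 01 = chassis 1
               +-------- 01 = rack 1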

 

    b. WWNN pool or policy

 

 

    c. FCoE VLAN info:

 

Depending on the uplink switch, you can use different VLANs/VSANs for each fabric.
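For reference, on a Nexus-style NX-OS switch each VSAN is carried inside a dedicated FCoE VLAN, and UCS Manager builds this mapping on the FIs when you define the VSANs. A minimal sketch of the underlying config, using made-up VSAN/VLAN numbers 100 (fabric A) and 200 (fabric B) rather than this lab's values:

vsan database
  vsan 100
  vsan 200
vlan 100
  fcoe vsan 100
vlan 200
  fcoe vsan 200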

More on NPV troubleshooting:

http://www-tac-sj/~jejoseph/SAN/ChalkTalk/SANOS-3.2-OSM-NPV.ppt

 

Once these are done, verify using the CLI:

 

UCS1-FI-A /org # show service-profile connectivity name server3

 

VIF:

 

ID         Transport Tag   Status      Endpoint   Peer

---------- --------- ----- ----------- ---------- ----

       687 Fc            0 Allocated

      8879 Ether         0 Allocated

       688 Fc            0 Allocated

      8880 Ether         0 Allocated

 

UCS1-FI-A(nxos)# show run int vfc688

version 4.0(1a)N2(1.1e)

 

interface vfc688

  description server 1/3, VHBA

  no shutdown

  bind interface vethernet8880

 

4. Enable uplinks to the MDS or Brocade switches. Make sure those uplink ports are in the correct VSAN on the MDS switch.
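Because the 6120s run NPV (end-host mode) toward the SAN, the upstream MDS ports must have NPIV enabled and sit in the correct VSAN. A minimal MDS-side sketch for the fc1/4 uplink used here, assuming an NX-OS release (on older SAN-OS 3.x the NPIV command is "npiv enable"); this lab uses the default VSAN 1, so the vsan database step only matters for non-default VSANs:

MDS-A# configure terminal
MDS-A(config)# feature npiv
MDS-A(config)# vsan database
MDS-A(config-vsan-db)# vsan 1 interface fc1/4
MDS-A(config-vsan-db)# exit
MDS-A(config)# interface fc1/4
MDS-A(config-if)# switchport mode F
MDS-A(config-if)# no shutdown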

 

 

Once the uplink is enabled:

 

UCS1-FI-A(nxos)# show npv flogi-table vsan 1

--------------------------------------------------------------------------------

SERVER                                                                  EXTERNAL

INTERFACE VSAN FCID             PORT NAME               NODE NAME       INTERFACE

--------------------------------------------------------------------------------

vfc688    1    0xef0013 20:00:00:00:00:01:01:30 20:00:00:00:01:01:01:30 fc2/1

 

Total number of flogi = 1.

 

On MDS-A, the flogi database on fc1/4 shows both logins:

fc1/4      1     0xef000f  20:41:00:0d:ec:d5:3a:00  20:01:00:0d:ec:d5:3a:01  << 61xx (FI uplink) login
fc1/4      1     0xef0013  20:00:00:00:00:01:01:30  20:00:00:00:01:01:01:30  << actual server

 

On Fabric B:

 

UCS1-FI-B(nxos)# show npv flogi-table v 1

--------------------------------------------------------------------------------

SERVER                                                                  EXTERNAL

INTERFACE VSAN FCID             PORT NAME               NODE NAME       INTERFACE

--------------------------------------------------------------------------------

vfc687    1    0xed0014 20:00:00:00:00:01:01:31 20:00:00:00:01:01:01:30 fc2/1

 

Total number of flogi = 1.

 

On MDS-B, the flogi database on fc1/4 shows the corresponding logins:

fc1/4      1     0xed000f  20:41:00:0d:ec:d6:8b:40  20:01:00:0d:ec:d6:8b:41  << 61xx (FI uplink) login
fc1/4      1     0xed0014  20:00:00:00:00:01:01:31  20:00:00:00:01:01:01:30  << actual server

 

UCS1-FI-B(nxos)# show run int vfc687

version 4.0(1a)N2(1.1e)

 

interface vfc687

  description server 1/3, VHBA

  no shutdown

  bind interface vethernet8879

 

 

Topology

 

 

Server3 fc0 --- 6120-A fc2/1 --- fc1/4 MDS-A --- Promise SPA port1
                                                 Promise SPB port1

Server3 fc1 --- 6120-B fc2/1 --- fc1/4 MDS-B --- Promise SPA port2
                                                 Promise SPB port2

 

Zoning

 

MDS-A: Server3 fc0

fc1/1      1     0xef0001  26:00:00:01:55:35:0b:4a  25:00:00:01:55:35:0b:4a

                           [promise-sp1a]

fc1/2      1     0xef0002  26:02:00:01:55:35:0b:4a  25:01:00:01:55:35:0b:4a

                           [promise-sp2a]

fc1/4      1     0xef0013  20:00:00:00:00:01:01:30  20:00:00:00:01:01:01:30
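The [promise-sp1a] and [promise-sp2a] names above are MDS device-aliases. For reference, they would be configured roughly like this; the "device-alias commit" step only applies if device-alias is running in enhanced mode:

MDS-A# configure terminal
MDS-A(config)# device-alias database
MDS-A(config-device-alias-db)# device-alias name promise-sp1a pwwn 26:00:00:01:55:35:0b:4a
MDS-A(config-device-alias-db)# device-alias name promise-sp2a pwwn 26:02:00:01:55:35:0b:4a
MDS-A(config-device-alias-db)# exit
MDS-A(config)# device-alias commit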

 

 

 

 

MDS-B: Server3 fc1

fc1/1      1     0xed0001  26:03:00:01:55:35:0b:4a  25:01:00:01:55:35:0b:4a

                           [promise-sp2b]

fc1/2      1     0xed0011  26:01:00:01:55:35:0b:4a  25:00:00:01:55:35:0b:4a

                           [promise-sp1b]

fc1/4      1     0xed0014  20:00:00:00:00:01:01:31  20:00:00:00:01:01:01:30

 

 

In the lab, I have set the zoning to default-zone permit all.
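On the MDS that is "zone default-zone permit vsan 1". In a production setup you would use explicit zoning instead; a minimal sketch for the fabric A path, with made-up zone and zoneset names:

MDS-A# configure terminal
MDS-A(config)# zone name server3-fc0_promise vsan 1
MDS-A(config-zone)# member pwwn 20:00:00:00:00:01:01:30
MDS-A(config-zone)# member device-alias promise-sp1a
MDS-A(config-zone)# member device-alias promise-sp2a
MDS-A(config-zone)# exit
MDS-A(config)# zoneset name ucs-lab vsan 1
MDS-A(config-zoneset)# member server3-fc0_promise
MDS-A(config-zoneset)# exit
MDS-A(config)# zoneset activate name ucs-lab vsan 1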

 

 

Promise Storage (Active/Passive Array):

 

LUN masking: we allowed 4 LUNs to be seen by all servers so that, at a later stage, we can do vMotion between the servers.

 

 

 

Certain LUNs are owned by controller 1 and others by controller 2.

 

 

On the boot policy in the service profile:

 

 

 

fc0 is configured with targets sp1a and sp2a for LUN 9.

If there is no bootable image on the LUNs, it will fall back to the CD-ROM, which you can map through the KVM's virtual media.
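For reference, a SAN boot policy like this can also be built from the UCS Manager CLI. The sketch below is from memory, so treat the scope and keyword names as assumptions and check the UCSM CLI configuration guide for your release; the boot-policy name is made up, and the target WWPNs are the promise-sp1a/sp2a ports from the zoning section above:

UCS1-FI-A# scope org /
UCS1-FI-A /org # create boot-policy san-boot-lab
UCS1-FI-A /org/boot-policy # create san
UCS1-FI-A /org/boot-policy/san # create san-image primary
UCS1-FI-A /org/boot-policy/san/san-image # set vhba fc0
UCS1-FI-A /org/boot-policy/san/san-image # create path primary
UCS1-FI-A /org/boot-policy/san/san-image/path # set wwn 26:00:00:01:55:35:0b:4a
UCS1-FI-A /org/boot-policy/san/san-image/path # set lun 9
UCS1-FI-A /org/boot-policy/san/san-image/path # exit
UCS1-FI-A /org/boot-policy/san/san-image # create path secondary
UCS1-FI-A /org/boot-policy/san/san-image/path # set wwn 26:02:00:01:55:35:0b:4a
UCS1-FI-A /org/boot-policy/san/san-image/path # set lun 9
UCS1-FI-A /org/boot-policy/san/san-image/path # commit-buffer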

 

 

Then boot and install the OS on the Promise disk instead of the local disk.

 

 

