SAN Migration in Linux / HP-UX


There are a few ways to migrate SAN storage in Linux / UNIX:

1. Storage level
2. Host level

At the host level, we have a few choices:

1. Using pvmove (not always reliable and generally not recommended)
2. Using backup and restore
3. Using LVM mirroring


2. Using the Backup and Restore Method:

Note: This method requires downtime for the application that uses the filesystem.

1. Create a new VG and a new temporary FS
2. Bring down the application
3. Copy the data using a backup tool (fbackup, TSM, etc.)
4. Restore the data to the new FS (new mount point)
5. Unmount the existing FS
6. Mount the new LVOLs on the old mount directory
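On Linux, the steps above can be sketched with generic tools; the VG, LV, and mount-point names below are placeholders, and tar is shown in place of fbackup/TSM:

```shell
# 1. Create the new VG and a temporary FS on the new SAN LUN
pvcreate /dev/mapper/mpath_new
vgcreate VG_NEW /dev/mapper/mpath_new
lvcreate -L 93G -n LV_NEW VG_NEW
mkfs -t ext3 /dev/VG_NEW/LV_NEW
mkdir -p /mnt/NEWDATA && mount /dev/VG_NEW/LV_NEW /mnt/NEWDATA

# 2. Stop the application here, then
# 3./4. copy and restore the data, preserving permissions
(cd /mnt/DATA && tar cf - .) | (cd /mnt/NEWDATA && tar xpf -)

# 5./6. Swap the mount points so the old path now serves the new FS
umount /mnt/DATA
umount /mnt/NEWDATA
mount /dev/VG_NEW/LV_NEW /mnt/DATA
```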

Alternatively, for the copy we can use the "rsync" command.

Run the rsync command in the background, like below:

#nohup rsync -apvz --progress --exclude "lost+found/" /SOURCE_DIR/*  /DESTINATION_DIR/ &
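To keep application downtime short, rsync can be run a second time after stopping the application, so only the files changed since the first pass need to be copied. The paths and device name below are placeholders:

```shell
# First pass: bulk copy while the application is still running
nohup rsync -apvz --progress --exclude "lost+found/" /SOURCE_DIR/ /DESTINATION_DIR/ &

# After the bulk copy finishes, stop the application, then run a
# final incremental pass; --delete removes files that were deleted
# on the source after the first pass
rsync -apz --delete --exclude "lost+found/" /SOURCE_DIR/ /DESTINATION_DIR/

# Swap the mounts so the application path now points at the new FS
umount /SOURCE_DIR
mount /dev/VG_NEW/LV_NEW /SOURCE_DIR   # hypothetical new device name
```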

3. SAN Migration Using LVM Mirror

Note: 1. No downtime is needed here, but take a full backup for safety.
      2. On HP-UX, the VG should have been created with the vgcreate -e option (a large enough max_PE_per_PV). Otherwise the VG must be adjusted with vgmodify, which does require downtime for the FS.

1. Add the new storage disk to the VG
2. Mirror the existing LVOLs
3. Once the mirror is in sync, break the mirror
4. Remove the old storage disk from the VG
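On HP-UX (with MirrorDisk/UX installed), the steps above might look like the following sketch; the disk and LV names are placeholders:

```shell
# 1. Add the new SAN disk to the VG
pvcreate /dev/rdsk/c10t0d1
vgextend /dev/vg_data /dev/dsk/c10t0d1

# 2. Mirror the existing LVOL onto the new disk
lvextend -m 1 /dev/vg_data/lvol1 /dev/dsk/c10t0d1

# 3. Once the mirror is in sync (check with lvdisplay -v),
#    break it by dropping the copy that lives on the old disk
lvreduce -m 0 /dev/vg_data/lvol1 /dev/dsk/c5t0d1

# 4. Remove the old disk from the VG
vgreduce /dev/vg_data /dev/dsk/c5t0d1
```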


================================

Below are the migration steps I followed in Linux.

1. One mounted FS on a plain LVM volume with a single LUN:
/dev/mapper/DATA-VG_DATA 92G 74G 14G 85% /mnt/DATA

2. Get the new LUN visible at the Linux level, for example using dm-multipath:
mpath13 (3600c0ff000d8230d16de5b4c01000000) dm-24 HP,MSA2312sa
[size=93G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:17 sdu  65:64  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:0:17 sdab 65:176 [active][undef]
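If the new LUN is not visible yet, a rescan of the SCSI hosts followed by a multipath refresh usually brings it in (host numbers vary per system):

```shell
# Rescan every SCSI host for newly mapped LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# Rebuild the multipath maps and list the result
multipath -v2
multipath -ll
```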

3. Create new physical volume:
[root@nodeA]# pvcreate /dev/mapper/mpath13
Physical volume "/dev/mapper/mpath13" successfully created

4. Extend original volume group:
[root@nodeA]# vgextend VG_DATA /dev/mapper/mpath13
Volume group "VG_DATA" successfully extended

5. Convert logical volume to a mirror with 2 legs:
[root@nodeA]# lvconvert -m 1 /dev/VG_DATA/LV_DATA --corelog
VG_DATA/LV_DATA: Converted: 12.2%
VG_DATA/LV_DATA: Converted: 24.4%
VG_DATA/LV_DATA: Converted: 36.2%
VG_DATA/LV_DATA: Converted: 48.3%
VG_DATA/LV_DATA: Converted: 60.3%
VG_DATA/LV_DATA: Converted: 72.4%
VG_DATA/LV_DATA: Converted: 84.6%
VG_DATA/LV_DATA: Converted: 96.7%
VG_DATA/LV_DATA: Converted: 100.0%
Logical volume LV_DATA converted.
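While the conversion runs, sync progress can also be watched from another terminal (the exact column name may vary by LVM version):

```shell
# Show mirror sync progress; Cpy%Sync reaches 100.00 when in sync
lvs -a -o name,copy_percent,devices VG_DATA
```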



[root@nodeA]# vgdisplay VG_DATA -v
Using volume group(s) on command line
Finding volume group "VG_DATA"
--- Volume group ---
VG Name VG_DATA
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 21
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 186.25 GB
PE Size 4.00 MB
Total PE 47681
Alloc PE / Size 47616 / 186.00 GB
Free PE / Size 65 / 260.00 MB
VG UUID VLzsFf-vlqq-TmSR-2P53-izm4-dVH1-pcNeii

--- Logical volume ---
LV Name /dev/VG_DATA/LV_DATA
VG Name VG_DATA
LV UUID 9Q9PrO-TBmP-1prT-8PSV-tFxT-3GR0-FE94Q9
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:16

--- Logical volume ---
LV Name /dev/VG_DATA/LV_DATA_mimage_0
VG Name VG_DATA
LV UUID w80F3L-JbHv-A5Dt-50dK-8N0k-h3IE-P0GzJL
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:25

--- Logical volume ---
LV Name /dev/VG_DATA/LV_DATA_mimage_1
VG Name VG_DATA
LV UUID 5gOfn4-bCNB-tpG3-0gue-hhr9-k63p-E0Jb9U
LV Write Access read/write
LV Status available
# open 1
LV Size 93.00 GB
Current LE 23808
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:26

--- Physical volumes ---
PV Name /dev/dm-13
PV UUID R5henD-M3pW-dNaH-P4Xs-WBjW-fTWo-52nvTz
PV Status allocatable
Total PE / Free PE 23840 / 32

PV Name /dev/dm-24
PV UUID 3pPxpD-DX4U-ay6t-Gf2B-RoKP-Wfoz-rN2Usj
PV Status allocatable
Total PE / Free PE 23841 / 33

Voila.

6. Convert the LV back to unmirrored, dropping the leg on the old PV:
[root@nodeA]# lvconvert -m 0 /dev/VG_DATA/LV_DATA /dev/dm-13
Logical volume LV_DATA converted.

7. Remove the old PV from the VG:
[root@nodeA]# vgreduce VG_DATA /dev/dm-13
Removed "/dev/dm-13" from volume group "VG_DATA"

Done!
The VG is now running on the new PV from the new SAN.
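After the migration, the old PV can be wiped and the old LUN removed from multipath before it is unmapped on the storage side. The old map name below is a placeholder; use the name shown by multipath -ll:

```shell
# Wipe the LVM label from the old disk so it is no longer seen as a PV
pvremove /dev/dm-13

# Flush the old multipath map
multipath -f mpath_old

# Finally, delete the underlying SCSI paths, e.g. for device sdX:
# echo 1 > /sys/block/sdX/device/delete
```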
