Wednesday, 21 December 2016

Solaris 11: How to add swap in zfs root env

-> How to add swap space online in a ZFS root environment


Procedure:

# zfs create -V 16g rpool/swap2
# swap -l
swapfile             dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap 304,1        16 33554416 33554416
# swap -a /dev/zvol/dsk/rpool/swap2
# swap -l
swapfile             dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap 304,1        16 33554416 33554416
/dev/zvol/dsk/rpool/swap2 304,3        16 33554416 33554416

Update /etc/vfstab
# vi /etc/vfstab
# grep -i swap /etc/vfstab
swap    -       /tmp    tmpfs   -       yes     nosuid
/dev/zvol/dsk/rpool/swap        -               -               swap    -       no      -
/dev/zvol/dsk/rpool/swap2        -               -               swap    -       no      -
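
If more swap is needed later, an alternative (a sketch, not part of the original procedure; the 32g value is only an example) is to grow the existing swap zvol instead of adding another one. The volume has to be removed from swap while it is resized:

# swap -d /dev/zvol/dsk/rpool/swap2
# zfs set volsize=32g rpool/swap2
# swap -a /dev/zvol/dsk/rpool/swap2
# swap -l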


Friday, 16 December 2016

Solaris 10: How to add ZPOOL-ZFS file system on Solaris local zones

How to add a ZFS file system from a zpool to a Solaris local zone, i.e. how to create a file system in a container.



1> If there is not enough space in the zpool, add the new LUN to the pool:

# zpool add zpaudxxxs_T2CL_zones c10t6000097000049570003653303438323423d0
# zpool list
NAME                    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
zpaudxxxs_T2CL_zones  2.20T   972G  1.25T  43%  ONLINE  -


2> zlogin to the container, create the mount point, then create the dataset and set a quota
# mkdir -p /proj/build/bamboo
# zfs create -o mountpoint=/proj/build/bamboo zpaudxxxs_T2CL_zones/proj/dcmwxxxz/pbamboo
# zfs set quota=40G zpaudxxxs_T2CL_zones/proj/dcmwxxxz/pbamboo
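
To verify the new dataset and quota from inside the zone (a quick check, using the names above):

# zfs list -o name,used,quota,mountpoint zpaudxxxs_T2CL_zones/proj/dcmwxxxz/pbamboo
# df -h /proj/build/bamboo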


Thursday, 8 December 2016

AIX: File System Creation and Extension

Pre-Check:
How to decide the maximum LUN size that can be added to a VG.
Suppose the VG sapdbvg12 needs to be extended.

ax10p03::/home/pg53ot=> lsvg sapdbvg12
VOLUME GROUP:       sapdbvg12                VG IDENTIFIER:  0037031a00004c00000000f8d788c643
VG STATE:           active                   PP SIZE:        32 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      4732 (151424 megabytes)
MAX LVs:            256                      FREE PPs:       1234 (39488 megabytes)
LVs:                5                        USED PPs:       3498 (111936 megabytes)
OPEN LVs:           5                        QUORUM:         3 (Enabled)
TOTAL PVs:          4                        VG DESCRIPTORS: 4
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         4                        AUTO ON:        yes
MAX PPs per VG:     32512                                    
MAX PPs per PV:     2032                     MAX PVs:        16
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
ax10p03::/home/pg53ot=>

Now multiply MAX PPs per PV by the PP size; this gives the largest LUN size that can be added to the VG.

2032 * 32 = 65024 MB, i.e. a LUN of at most about 65 GB can be added.
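
The same calculation can be scripted from the lsvg output; the awk field positions below assume the exact layout shown above (a sketch):

# maxpp=$(lsvg sapdbvg12 | awk '/MAX PPs per PV/ {print $5}')
# ppsize=$(lsvg sapdbvg12 | awk '/PP SIZE/ {print $6}')
# echo "Max LUN size: $((maxpp * ppsize)) MB"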

How to make sure the new PowerPath device does not already hold data: sometimes someone else has already run cfgmgr, which makes it difficult to identify the new power device.

New LUN ID 55F8 -> 131 GB

1> # powermt display dev=all | grep 55F8

2> # powermt display dev=hdiskpower51
Pseudo name=hdiskpower51
Symmetrix ID=000295700123
Logical device ID=55F8
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   0 fscsi0                   hdisk323  FA  8gA   active  alive       0      0
   0 fscsi0                   hdisk324  FA 10gA   active  alive       0      0
   0 fscsi0                   hdisk325  FA  6gA   active  alive       0      0

3> To confirm the size (bootinfo reports the size in MB)
 # bootinfo -s hdiskpower51
131077

4> To check that it is not part of any VG


# lspv hdiskpower51
0516-304 : Unable to find device id hdiskpower51 in the Device
        Configuration Database.

# lspv hdiskpower50
PHYSICAL VOLUME:    hdiskpower50             VOLUME GROUP:     appsvg
PV IDENTIFIER:      00cd1fc66693f57a VG IDENTIFIER     00cd1fc600004c00000001478a24da23
PV STATE:           active                                    
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            32 megabyte(s)           LOGICAL VOLUMES:  2
TOTAL PPs:          1021 (32672 megabytes)   VG DESCRIPTORS:   2
FREE PPs:           29 (928 megabytes)       HOT SPARE:        no
USED PPs:           992 (31744 megabytes)    MAX REQUEST:      1 megabyte
FREE DISTRIBUTION:  00..00..00..00..29                        
USED DISTRIBUTION:  205..204..204..204..175                   

5> Make sure this is not part of rootvg (extra precaution)
# lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk150          active            543         250         61..00..00..80..109
# powermt display dev=hdisk150
Pseudo name=hdiskpower12
Symmetrix ID=000295700123
Logical device ID=0806

So rootvg contains only hdisk150, which maps to hdiskpower12.
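
The same check can be run for every rootvg disk in one pass (a convenience sketch; it assumes PowerPath is installed and that powermt accepts the native hdisk name, as shown above):

# for d in $(lsvg -p rootvg | awk '/hdisk/ {print $1}'); do echo $d; powermt display dev=$d | grep Pseudo; done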


*******************************
1> Check the power devices currently configured, so the new one can be identified after cfgmgr

aur6051l:Bansalr1[2284]# lsvg
rootvg
nim6051T2C
aur6051l:Bansalr1[2286]# lspv | awk '{print $1}' | grep hdiskpower
hdiskpower0
hdiskpower1
hdiskpower2

2> Detect the luns

aur6051l:Bansalr1[2292]# cfgmgr -v
cfgmgr is running in phase 2
----------------
Output truncated.

aur6051l:Bansalr1[2295]# lspv | awk '{print $1}' | grep hdiskpower
hdiskpower0
hdiskpower1
hdiskpower2
hdiskpower3

This shows that hdiskpower3 has been added.
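
Capturing the device list before and after cfgmgr also makes the new device easy to spot (a convenience sketch; the /tmp file names are arbitrary):

# lspv | awk '{print $1}' | grep hdiskpower > /tmp/power.before
# cfgmgr -v
# lspv | awk '{print $1}' | grep hdiskpower > /tmp/power.after
# diff /tmp/power.before /tmp/power.after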

3> Check the size of the luns

aur6051l:Bansalr1[2296]# bootinfo -s hdiskpower3
512001

aur6051l:Bansalr1[2297]# lsattr -El hdiskpower3
PR_key_value   none               Reserve Key.                                   True
clr_q          yes                Clear Queue (RS/6000)                          True
location                          Location                                       True
lun_id         0x3000000000000    LUN ID                                         False
lun_reset_spt  yes                FC Forced Open LUN                             True
max_coalesce   0x100000           Maximum coalesce size                          True
max_transfer   0x100000           Maximum transfer size                          True
pvid           none               Physical volume identifier                     False
pvid_takeover  yes                Takeover PVIDs from hdisks                     True
q_err          no                 Use QERR bit                                   True
q_type         simple             Queue TYPE                                     False
queue_depth    32                 Queue DEPTH                                    True
reassign_to    120                REASSIGN time out value                        True
reserve_policy single_path        Reserve Policy used to reserve device on open. True
rw_timeout     40                 READ/WRITE time out                            True
scsi_id        0x8a0180           SCSI ID                                        False
start_timeout  180                START unit time out                            True
ww_name        0x50000975000265ac World Wide Name                                False
aur6051l:Bansalr1[2298]# lsattr -El hdiskpower2
PR_key_value   none                             Reserve Key.                                   True
clr_q          yes                              Clear Queue (RS/6000)                          True
location                                        Location                                       True
lun_id         0x2000000000000                  LUN ID                                         False
lun_reset_spt  yes                              FC Forced Open LUN                             True
max_coalesce   0x100000                         Maximum coalesce size                          True
max_transfer   0x100000                         Maximum transfer size                          True
pvid           00c32b77653d6be70000000000000000 Physical volume identifier                     False
pvid_takeover  yes                              Takeover PVIDs from hdisks                     True
q_err          no                               Use QERR bit                                   True
q_type         simple                           Queue TYPE                                     False
queue_depth    32                               Queue DEPTH                                    True
reassign_to    120                              REASSIGN time out value                        True
reserve_policy no_reserve                       Reserve Policy used to reserve device on open. True
rw_timeout     40                               READ/WRITE time out                            True
scsi_id        0x8a0180                         SCSI ID                                        False
start_timeout  180                              START unit time out                            True
ww_name        0x50000975000265ac               World Wide Name                                False

4> Set the device attributes to match the existing LUNs

aur6051l:Bansalr1[2300]# chdev -l hdiskpower3 -a reserve_policy=no_reserve -a queue_depth=32
hdiskpower3 changed
aur6051l:Bansalr1[2301]# lsvg -o
nim6051T2C
rootvg
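
To confirm the new settings took effect (a quick check):

# lsattr -El hdiskpower3 -a reserve_policy -a queue_depth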

5> The new LUN is detected; now add it to the volume group

aur6051l:Bansalr1[2307]# extendvg nim6051T2C hdiskpower3
0516-1254 extendvg: Changing the PVID in the ODM.

aur6051l:Bansalr1[2308]# lsvg nim6051T2C
VOLUME GROUP:       nim6051T2C               VG IDENTIFIER:  00c32b7700004c000000013b97085b50
VG STATE:           active                   PP SIZE:        256 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      3598 (921088 megabytes)
MAX LVs:            256                      FREE PPs:       2009 (514304 megabytes)
LVs:                7                        USED PPs:       1589 (406784 megabytes)
OPEN LVs:           7                        QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        yes
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 1024 kilobyte(s)         AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off                                      
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                                      

6> Increase the file system
Run the command below if you want the file system to use the newly added hdiskpower device.
The number of PPs to add is the desired increase divided by the VG's PP size. Here the PP size is 256 MB (see the lsvg output above), so adding 200 PPs grows ftxd1_lv by about 50 GB:

aur6051l:mksysb[2313]# extendlv ftxd1_lv 200 hdiskpower3
0516-787 extendlv: Maximum allocation for logical volume ftxd1_lv
        is 1300.
aur6051l:mksysb[2313]# chlv -x 1600 ftxd1_lv
aur6051l:mksysb[2313]# extendlv ftxd1_lv 200 hdiskpower3

ftxd1_lv is LV for file system /export/nim/mksysb

Note: The steps above can be skipped if there is no specific need to place the file system on the newly assigned LUN.

aur6051l:mksysb[2313]# chfs -a size=+150G /export/nim/mksysb
0516-787 extendlv: Maximum allocation for logical volume mksysb6051
        is 1024.
aur6051l:mksysb[2314]# chlv -x 4096 mksysb6051
aur6051l:mksysb[2315]# chfs -a size=+150G /export/nim/mksysb
Filesystem size changed to 633339904
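
The value passed to chlv -x has to be at least the LV's current LPs plus the number of PPs being added (the size increase divided by the VG's PP size). A quick way to see the current numbers before raising the limit (a sketch):

# lslv mksysb6051 | grep LPs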

Note: If there are no PowerPath devices, follow the steps below, using hdiskX instead of hdiskpowerXX.

************************************************
chdev -l hdisk7 -a reserve_policy=no_reserve -a queue_depth=8
extendvg nimvg hdisk7
lsvg nimvg
df -g
chfs -a size=+75G /export/nim/mksysb_vionim



VG/LV/File System Creation:

To create the file system below:

/dev/psasworkgsba606   15728640  15664308    1%       20     1% /proj/sasdev/saswork/GuestBatch

Create the VG with a PP size of 256 MB
#mkvg -y saswork6064T2P -s 256 hdiskpower90

#mkdir /proj/sasdev/saswork/GuestBatch
#mklv -t jfs2 -y psasworkgsba606 saswork6064T2P 60

#/usr/sbin/crfs -v jfs2 -d /dev/psasworkgsba606 -m /proj/sasdev/saswork/GuestBatch -A yes -p rw -a options='nosuid,nodev' -a agblksize=4096 -a logname=INLINE -a isnapshot=yes -u saswork6064T2P

#mount /proj/sasdev/saswork/GuestBatch
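
To verify the new file system and its inline log after mounting (a quick check):

#df -g /proj/sasdev/saswork/GuestBatch
#lsfs -q /proj/sasdev/saswork/GuestBatch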


Extending a file system that has a striped LV


1>    Discover the luns
#cfgmgr -vl fcs0 ; cfgmgr -vl fcs3 ; cfgmgr -vl fcs1 ; cfgmgr -vl fcs2
#powermt config
#lspv | grep power

2>    Check the size of the LUNs
#for i in hdiskpower27 hdiskpower28; do bootinfo -s $i; done

3>    Check the attributes of an existing LUN and set the same values on the new LUNs
#lsattr -El hdiskpower26
#chdev -l hdiskpower27 -a pv=yes
#chdev -l hdiskpower27 -a reserve_policy=no_reserve -a queue_depth=32
#lslv oidatas1p4095

LOGICAL VOLUME:     oidatas1p4095          VOLUME GROUP:   bhip4095T2CR_1
LV IDENTIFIER:      00c32b8700004c0000000149eb547562.1 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            5114                   PP SIZE:        256 megabyte(s)
COPIES:             1                      SCHED POLICY:   striped
LPs:                5114                   PPs:            5114
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       maximum                RELOCATABLE:    no
INTRA-POLICY:       middle                 UPPER BOUND:    6
MOUNT POINT:        /opt/ibm/datasets1     LABEL:          /opt/ibm/datasets1
DEVICE UID:         0                      DEVICE GID:     0
DEVICE PERMISSIONS: 432                                   
MIRROR WRITE CONSISTENCY: on/ACTIVE                             
EACH LP COPY ON A SEPARATE PV ?: yes (superstrict)                     
Serialize IO ?:     NO                                    
INFINITE RETRY:     no                     PREFERRED READ: 0
STRIPE WIDTH:       2                                     
STRIPE SIZE:        64k                                   
DEVICESUBTYPE:      DS_LVZ                                        
COPY 1 MIRROR POOL: None                                  
COPY 2 MIRROR POOL: None                                  
COPY 3 MIRROR POOL: None  

Here two LUNs are required since the stripe width of the LV is 2.
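
Each new LUN contributes its size (from bootinfo, in MB) divided by the VG's PP size in PPs, and because the stripe width is 2 the extension must name both new disks. To check the numbers (a sketch, names from above):

#bootinfo -s hdiskpower27
#lsvg bhip4095T2CR_1 | grep 'PP SIZE'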


4>    Check the mirror pool layout and add the new LUNs to the VG
#lsvg -P bhip4095T2CR_1
#extendvg bhip4095T2CR_1 hdiskpower27

Add hdiskpower28 in the same way.

#chpv -p ibm_datasets1 hdiskpower27 hdiskpower28    (ibm_datasets1 is the mirror pool; see lsvg -P bhip4095T2CR_1)

#lsvg -P bhip4095T2CR_1
Physical Volume   Mirror Pool      
hdiskpower6       ibm_datasets1    
hdiskpower7       ibm_datasets1    
hdiskpower8       ibm_datasets2    
hdiskpower9       ibm_datasets2    
hdiskpower13      ibm_datasets3    
hdiskpower14      ibm_datasets3    

output truncated


5>    Increase the Upper Bound of LV

#chlv -u 6 oidatas1p4095

6>    Extend the LV by 2046 PPs, which is the total from the two LUNs added to VG bhip4095T2CR_1
#extendlv oidatas1p4095 2046 hdiskpower27 hdiskpower28

7>    If the extend is not allowed, increase the maximum number of LPs for the LV
#chlv -x 5114 oidatas1p4095
#extendlv oidatas1p4095 2046 hdiskpower27 hdiskpower28

8>    Increase the file system

#chfs -a size=+523776M /opt/ibm/datasets1
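
To confirm the result (a quick check, names from above):

#df -g /opt/ibm/datasets1
#lslv oidatas1p4095 | grep LPs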