Monday, 5 December 2016

AIX: How To Extend File System


1> Suppose we want to increase the file system shown below:

# df -k /export/nim/mksysb
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/mksysb6051   138412032  22990208   84%       47     1% /export/nim/mksysb

2> Check which volume group (VG) this logical volume (LV) belongs to:

# lslv mksysb6051
LOGICAL VOLUME:     mksysb6051             VOLUME GROUP:   nim6051T2C
LV IDENTIFIER:      00c32b7700004c000000013b97085b50.3 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            1024                   PP SIZE:        256 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                528                    PPs:            528
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    1024
MOUNT POINT:        /export/nim/mksysb     LABEL:          /export/nim/mksysb
DEVICE UID:         0                      DEVICE GID:     0
DEVICE PERMISSIONS: 432                                   
MIRROR WRITE CONSISTENCY: on/ACTIVE                             
EACH LP COPY ON A SEPARATE PV ?: yes                                    
Serialize IO ?:     NO                                    
INFINITE RETRY:     no                                    
DEVICESUBTYPE:      DS_LVZ                                       
COPY 1 MIRROR POOL: None                                  
COPY 2 MIRROR POOL: None                                  
COPY 3 MIRROR POOL: None                                  

3> Check whether the volume group has enough free space. The FREE PPs field below shows only 10 PPs (2560 MB) free.

# lsvg nim6051T2C
VOLUME GROUP:       nim6051T2C               VG IDENTIFIER:  00c32b7700004c000000013b97085b50
VG STATE:           active                   PP SIZE:        256 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1599 (409344 megabytes)
MAX LVs:            256                      FREE PPs:       10 (2560 megabytes)
LVs:                7                        USED PPs:       1589 (406784 megabytes)
OPEN LVs:           7                        QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 1024 kilobyte(s)         AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off                                      
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                               

4> Increase the file system by 2 GB (8 PPs of 256 MB, which fits within the 10 free PPs). Either of the following works; the first adds 2 GB, the second adds 2000 MB:

   # chfs -a size=+2G /export/nim/mksysb
   # chfs -a size=+2000M /export/nim/mksysb


   Without the “+” sign, chfs sets the file system to exactly the size you specify. With the “+” sign, it grows the file system by that amount.
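
A quick way to confirm the change afterwards (a minimal check, reusing the file system and volume group from this example):

# df -k /export/nim/mksysb
# lsvg nim6051T2C | grep "FREE PPs"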

AIX: Difference Between HDisk and Hdiskpower:

All the physical partitions in a volume group are the same size, although different volume groups can have different PP sizes.

The hdiskpower devices are logical pointers to hdisks. They are pseudo devices created by EMC PowerPath software to provide load balancing and fault tolerance, so one hdiskpower device maps to one, two, or four hdisks, depending on how many paths (FC cards) your server has to the EMC storage.
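
For example, to look at a single pseudo device and the hdisk paths behind it (the per-device form of the "powermt display dev=all" output shown further below), you could run:

# powermt display dev=hdiskpower0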

# lspv
hdisk0          00c32b87a77983ca                    rootvg          active     
hdisk1          00cd1fc6f1839e04                    rootvg          active     
hdisk2          none                                None                       
hdisk3          00c32b87a77983ca                    rootvg          active     
hdisk4          00cd1fc6f1839e04                    rootvg          active     
hdisk5          none                                None                       
hdisk6          00c32b87a77983ca                    rootvg          active     
hdisk7          00cd1fc6f1839e04                    rootvg          active     
hdisk8          none                                None                       
hdisk9          00c32b87a77983ca                    rootvg          active     
hdisk10         00cd1fc6f1839e04                    rootvg          active     
hdisk11         none                                None                       
hdiskpower0     none                                None                        
hdiskpower1     none                                None                       
hdiskpower2     00c32b87ad9198ad                    auq4073lT2P     active    

# powermt display
Symmetrix logical device count=3
CLARiiON logical device count=0
Hitachi logical device count=0
HP xp logical device count=0
Ess logical device count=0
Invista logical device count=0
==============================================================================
----- Host Bus Adapters ---------  ------ I/O Paths -----  ------ Stats ------
###  HW Path                       Summary   Total   Dead  IO/Sec Q-IOs Errors
==============================================================================
   0 fscsi0                        optimal       3      0       -     0      0
   1 fscsi1                        optimal       3      0       -     0      0
   2 fscsi2                        optimal       3      0       -     0      0
   3 fscsi3                        optimal       3      0       -     0      0

It reports 3 logical devices, which correspond to hdiskpower0/1/2 in the lspv output above.
The output below shows which hdisks belong to which hdiskpower device, along with the current path status.

# powermt display dev=all
Pseudo name=hdiskpower0
Symmetrix ID=000492600083
Logical device ID=4EE0
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   0 fscsi0                   hdisk0    FA  7eB   active  alive       0      0
   1 fscsi1                   hdisk3    FA  8eB   active  alive       0      0
   2 fscsi2                   hdisk6    FA  9eB   active  alive       0      0
   3 fscsi3                   hdisk9    FA 10eB   active  alive       0      0

Pseudo name=hdiskpower1
Symmetrix ID=000492600083
Logical device ID=4EE4
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   3 fscsi3                   hdisk10   FA 10eB   active  alive       0      0
   0 fscsi0                   hdisk1    FA  7eB   active  alive       0      0
   1 fscsi1                   hdisk4    FA  8eB   active  alive       0      0
   2 fscsi2                   hdisk7    FA  9eB   active  alive       0      0

Pseudo name=hdiskpower2
Symmetrix ID=000492600083
Logical device ID=4EE8
state=alive; policy=SymmOpt; priority=0; queued-IOs=0;
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   3 fscsi3                   hdisk11   FA 10eB   active  alive       0      0
   0 fscsi0                   hdisk2    FA  7eB   active  alive       0      0
   1 fscsi1                   hdisk5    FA  8eB   active  alive       0      0
   2 fscsi2                   hdisk8    FA  9eB   active  alive       0      0

# lsdev -Cc disk
hdisk0      Available 41-T1-01 EMC Symmetrix FCP VRAID
hdisk1      Available 41-T1-01 EMC Symmetrix FCP VRAID
hdisk2      Available 41-T1-01 EMC Symmetrix FCP VRAID
hdisk3      Available 42-T1-01 EMC Symmetrix FCP VRAID
hdisk4      Available 42-T1-01 EMC Symmetrix FCP VRAID
hdisk5      Available 42-T1-01 EMC Symmetrix FCP VRAID
hdisk6      Available 61-T1-01 EMC Symmetrix FCP VRAID
hdisk7      Available 61-T1-01 EMC Symmetrix FCP VRAID
hdisk8      Available 61-T1-01 EMC Symmetrix FCP VRAID
hdisk9      Available 62-T1-01 EMC Symmetrix FCP VRAID
hdisk10     Available 62-T1-01 EMC Symmetrix FCP VRAID
hdisk11     Available 62-T1-01 EMC Symmetrix FCP VRAID
hdiskpower0 Available 62-T1-01 PowerPath Device
hdiskpower1 Available 62-T1-01 PowerPath Device
hdiskpower2 Available 62-T1-01 PowerPath Device


$ lspv -M hdisk2
0516-1396 : The physical volume hdisk2, was not found in the
system database.

lspv Command

Purpose: Displays information about a physical volume within a volume group.

The -M flag lists the following fields for each logical volume on the physical volume:

            PVname:PPnum [LVname: LPnum [:Copynum] [PPstate]]


$ lspv -M hdisk0
hdisk0:1        hd5:1  
hdisk0:2        hd5:2  
hdisk0:3        hd5:3  
hdisk0:4        hd5:4  
hdisk0:5        bos_hd5:1      
hdisk0:6        bos_hd5:2      
hdisk0:7        bos_hd5:3      
hdisk0:8        bos_hd5:4      
hdisk0:9-23
hdisk0:24       dump00:15      
hdisk0:25       dump00:16      
hdisk0:26       dump00:17      
hdisk0:27       aconnect:1     
hdisk0:28       aconnect:2     
hdisk0:29       aconnect:3     
hdisk0:30       aconnect:4     
hdisk0:31       aconnect:5     
hdisk0:32       aconnect:6     
hdisk0:33       aconnect:7     
hdisk0:34       aconnect:8      
hdisk0:35       dump00:1       

$ lsvg
rootvg
auq4073lT2P

$ lsvg -m rootvg
Logical Volume    Copy 1            Copy 2            Copy 3           
hd5               None              None              None              
hd6               None              None              None             
hd8               None              None              None             
hd4               None              None              None             
hd2               None              None              None             
hd9var            None              None              None             
hd3               None              None              None             
hd1               None              None              None              
hd10opt           None              None              None             
hd11admin         None              None              None             
livedump          None              None              None             
hd12audit         None              None              None             
bos_hd5           None              None              None             
osysadm           None              None              None             
voperf            None              None              None              
actmagent         None              None              None             
dump00            None              None              None             
aconnect          None              None              None             
bos_hd4           None              None              None             
bos_hd2           None              None              None             
bos_hd9var        None              None              None             
bos_hd10opt       None              None              None              

$ lsvg -m auq4073lT2P
Logical Volume    Copy 1            Copy 2            Copy 3           
aihs              None              None              None             
aMDM              None              None              None             
apatrol           None              None              None             
aoraclient        None              None              None             
vMDM              None              None              None             
aWebSphere        None              None              None             
vWebSphere        None              None              None             
nmonperf          None              None              None     

$ lslv -m hd5
hd5:N/A
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0001 hdisk9           
0002  0002 hdisk9           
0003  0003 hdisk9           
0004  0004 hdisk9           
$ lslv -m voperf
voperf:/var/opt/perf
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0052 hdisk9           
0002  0049 hdisk9           
0003  0050 hdisk9           
0004  0051 hdisk9           

AIX: Creating new file systems in AIX? Don't edit /etc/filesystems by hand!

Don’t edit the /etc/filesystems file manually when creating file systems.

When you import a volume group, the importvg command populates /etc/filesystems based on the logical volume minor number order (which is stored in the VGDA on the physical volume/hdisk). If someone edits /etc/filesystems by hand, its contents will no longer match the order recorded in the VGDA. That becomes a problem the next time someone exports and re-imports the volume group: they may end up with file systems over-mounted and what appears to be the loss of data!

Here’s a quick example of the problem.

Let’s create a couple of new file systems: /fs1 and /fs1/fs2. I’ll deliberately create them in the “wrong” order.

# mklv -tjfs2 -y lv2 cgvg 1
lv2

# crfs -vjfs2 -dlv2 -Ayes -u fs -m /fs1/fs2
File system created successfully.
65328 kilobytes total disk space.
New File System size is 131072

# mklv -tjfs2 -y lv1 cgvg 1
lv1

# crfs -vjfs2 -dlv1 -Ayes -u fs -m /fs1
File system created successfully.
65328 kilobytes total disk space.
New File System size is 131072

Hmmm, lv2 appears before lv1 in the output from lsvg. The first indication of a potential problem!

# lsvg -l cgvg
cgvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
lv2                 jfs2       1       1       1    closed/syncd  /fs1/fs2
loglv00             jfs2log    1       1       1    closed/syncd  N/A
lv1                 jfs2       1       1       1    closed/syncd  /fs1

Whoops! /fs1 should be mounted before /fs1/fs2!!! Doh!
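
You can also see the minor-number order directly from the LV device special files; the minor numbers are assigned in creation order, so lv2 should show a lower minor number than lv1 here:

# ls -l /dev/lv1 /dev/lv2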

# mount -t fs

# mount | tail -2
         /dev/lv2         /fs1/fs2         jfs2   Jul 19 23:07 rw,log=/dev/loglv00
         /dev/lv1         /fs1             jfs2   Jul 19 23:07 rw,log=/dev/loglv00

Data in /fs1/fs2 is now hidden and inaccessible. The /fs1 file system has over-mounted the /fs1/fs2 file system. This could look like data loss, i.e. as though someone had removed all the files from the file system.

# df -g | grep fs
/dev/lv2              -         -    -         -     -  /fs1/fs2
/dev/lv1           0.06      0.06    1%        4     1% /fs1

The file systems are listed in the wrong order in /etc/filesystems as well. Double Doh!

# tail -15 /etc/filesystems
/fs1/fs2:
        dev             = /dev/lv2
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        type            = fs
        account         = false

/fs1:
        dev             = /dev/lv1
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        type            = fs
        account         = false

No problem. I’ll just edit the /etc/filesystems file and rearrange the order. Simple, right?

# vi /etc/filesystems

/fs1:
        dev             = /dev/lv1
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        type            = fs
        account         = false

/fs1/fs2:
        dev             = /dev/lv2
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        type            = fs
        account         = false

Let’s remount the file systems in the correct order.

# umount -t fs
# mount -t fs
# df -g | grep fs
/dev/lv1           0.06      0.06    1%        5     1% /fs1
/dev/lv2           0.06      0.06    1%        4     1% /fs1/fs2

That looks better now, doesn’t it!? I’m happy now... although lsvg still indicates there’s a potential problem here:

# lsvg -l cgvg
cgvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
lv2                 jfs2       1       1       1    open/syncd  /fs1/fs2
loglv00             jfs2log    1       1       1    open/syncd  N/A
lv1                 jfs2       1       1       1    open/syncd  /fs1

All is well, until one day someone exports the VG and re-imports it, like so:

# varyoffvg cgvg
# exportvg cgvg

# importvg -y cgvg hdisk2
cgvg

# mount -t fs
# mount | tail -2
         /dev/lv2         /fs1/fs2         jfs2   Jul 19 23:07 rw,log=/dev/loglv00
         /dev/lv1         /fs1             jfs2   Jul 19 23:07 rw,log=/dev/loglv00

Huh? What’s happened here!?  I thought I fixed this before!?

Try to avoid this situation before it becomes a problem (for you or someone else!) in the future. If you discover this issue whilst creating your new file systems, remove the file systems and recreate them in the correct order. Obviously, try to do this before you place any data in the file systems. Otherwise you may need to back up and restore the data!
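
A minimal sketch of removing them first (assuming nothing has been written to them yet; rmfs -r removes the file system, its mount point and the underlying logical volume):

# umount -t fs
# rmfs -r /fs1/fs2
# rmfs -r /fs1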

# mklv -tjfs2 -y lv1 cgvg 1
lv1

# crfs -vjfs2 -dlv1 -Ayes -u fs -m /fs1
File system created successfully.
65328 kilobytes total disk space.
New File System size is 131072

# mklv -tjfs2 -y lv2 cgvg 1
lv2

# crfs -vjfs2 -dlv2 -Ayes -u fs -m /fs1/fs2
File system created successfully.
65328 kilobytes total disk space.
New File System size is 131072

You may be able to detect this problem, prior to importing a volume group, by using the lqueryvg command. Looking at the output in the “Logical” section, you might be able to ascertain a potential LV and FS mount order issue.

# lqueryvg -Atp hdisk2 | grep lv
0516-320 lqueryvg: Physical volume hdisk2 is not assigned to
        a volume group.
Logical:        00f603cd00004c000000013ff2fc1388.1   lv2 1
                00f603cd00004c000000013ff2fc1388.2   loglv00 1
                00f603cd00004c000000013ff2fc1388.3   lv1 1

Once you’ve identified the problem you can fix the issue retrospectively (once the VG is imported) by editing /etc/filesystems. Of course, this is just a temporary fix until someone exports and imports the VG again, in which case the mount order issue will occur again.

The essential message here is do NOT edit the /etc/filesystems file by hand when creating file systems.


Red Hat Enterprise Linux 7.2: How to increase the size of a LUN dynamically

Case: How do you increase the size of a LUN dynamically, i.e. without a reboot?


Procedure:


# fdisk -l | grep Disk
Disk /dev/sdb: 85.9 GB, 85899345920 bytes
Disk identifier: 0x00000000


# echo "1" > /sys/block/sdb/device/rescan

# tail /var/log/messages
Dec  6 13:52:56 kernel: sd 2:0:1:0: [sdb] 209715200 512-byte logical blocks: (107 GB/100 GiB)
Dec  6 13:52:56 kernel: sd 2:0:1:0: [sdb] Cache data unavailable
Dec  6 13:52:56 kernel: sd 2:0:1:0: [sdb] Assuming drive cache: write through
Dec  6 13:52:56 kernel: sdb: detected capacity change from 85899345920 to 107374182400


# fdisk -l | grep -i sdb
Disk /dev/sdb: 107.4 GB, 107374182400 bytes

# vgdisplay -v
   
  --- Volume group ---
  VG Name               hostname_app_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               80.00 GiB
  PE Size               4.00 MiB
  Total PE              20479
  Alloc PE / Size       20224 / 79.00 GiB
  Free  PE / Size       255 / 1020.00 MiB
  VG UUID               AT2vyL-aDzL-tWWQ-sqqb-BAlD-x1oE-Soj7eG

# pvresize -v /dev/sdb
    Using physical volume(s) on command line.
    Wiping cache of LVM-capable devices
    Finding all volume groups.
    Archiving volume group "hostname_app_vg" metadata (seqno 2).
    Resizing volume "/dev/sdb" to 209715200 sectors.
    Resizing physical volume /dev/sdb from 0 to 25599 extents.
    Updating physical volume "/dev/sdb"
    Creating volume group backup "/etc/lvm/backup/hostname_app_vg" (seqno 3).
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

# vgdisplay -v
    Using volume group(s) on command line.
    Finding all volume groups.
  --- Volume group ---
  VG Name               hostname_app_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       20224 / 79.00 GiB
  Free  PE / Size       5375 / 21.00 GiB
  VG UUID               AT2vyL-aDzL-tWWQ-sqqb-BAlD-x1oE-Soj7eG

 
# vgs
  VG              #PV #LV #SN Attr   VSize   VFree
  hostname_app_vg   1   1   0 wz--n- 100.00g 21.00g
  hostname_vg       1   9   0 wz--n-  48.97g  6.09g

Finally, extend the logical volume and grow the file system (substitute your own VG and LV names):

# lvextend -L +2G /dev/<vgname>/<lvname>
# vgdisplay -v
# resize2fs -p /dev/<vgname>/<lvname>

# df -h
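
Note: resize2fs only grows ext2/ext3/ext4 file systems. If the logical volume holds an XFS file system (the default on RHEL 7), grow it while it is mounted with xfs_growfs instead; the mount point below is just a placeholder:

# lvextend -L +2G /dev/<vgname>/<lvname>
# xfs_growfs /<mountpoint>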