Bug#511171: lvm2: Unable to use full 3.2 TB of LVM LG on HP SmartArray E200

teh test3s tehtest3s at gmail.com
Thu Jan 8 05:09:35 UTC 2009


I think the person who originally set up the server used Windows to
create a partition table so Linux could boot off the array.  Possibly
to get around GPT partitioning?
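
(Worth noting: a classic MSDOS/MBR partition table stores partition
offsets and sizes as 32-bit sector counts, so it cannot describe
anything past 2^32 x 512 B = 2 TiB.  A disk bigger than that needs a
GPT label, or has to be used whole with no partition table at all.)
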
In the RAID setup, I created two logical drives: a 200 GB drive for
/boot, /, and the MBR, and a second logical drive for the remainder of
the 3.5 TB of storage.
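
To see which kind of label ended up on each logical drive (assuming
parted is available; fdisk -l also works for MSDOS labels):

~# parted -s /dev/cciss/c0d1 print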

I reinstalled Debian, and after the install I set up the second
logical drive as a large LVM physical volume, created a large volume
group on top of that, and created a single large logical volume on top
of that.  So far, so good.
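
For the record, the setup amounted to something like the following
(reconstructed from the names in the output below, not a literal
transcript):

~# pvcreate /dev/cciss/c0d1    # whole device, no partition table on it
~# vgcreate bigstorevg /dev/cciss/c0d1
~# lvcreate -l 100%FREE -n bigstorelv bigstorevg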


~# pvs -v
Scanning for physical volume names
PV              VG         Fmt  Attr PSize PFree DevSize PV UUID
/dev/cciss/c0d1 bigstorevg lvm2 a-   3.22T    0    3.22T lMjnFR-uAQ4-iBYT-rh17-egkN-q3Wi-BF0sSV
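
Cross-checking the kernel's idea of the device size, as Alasdair
suggested below (--getsize64 reports the size in bytes):

~# blockdev --getsize64 /dev/cciss/c0d1

That should agree with the 6915815280-sector figure in the
blockdev --report further down: 6915815280 x 512 B ~= 3.22 TiB.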


~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/cciss/c0d1
  VG Name               bigstorevg
  PV Size               3.22 TB / not usable 2.93 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              844215
  Free PE               0
  Allocated PE          844215


~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "bigstorevg" using metadata type lvm2


~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/bigstorevg/bigstorelv
  VG Name                bigstorevg
  LV UUID                M0A2b9-N6el-Qxsu-r8Lq-qBn2-efnM-2MsGkZ
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                3.22 TB
  Current LE             844215
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0
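
The thread never says which filesystem went on top; assuming ext3 (the
Debian default at the time), the remaining steps would have been
roughly:

~# mkfs.ext3 /dev/bigstorevg/bigstorelv    # filesystem type is a guess
~# mkdir /storage
~# mount /dev/bigstorevg/bigstorelv /storage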


~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p3     189G  612M  179G   1% /
tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
udev                   10M   88K   10M   1% /dev
tmpfs                 1.8G     0  1.8G   0% /dev/shm
/dev/cciss/c0d0p1     236M   15M  209M   7% /boot
/dev/mapper/bigstorevg-bigstorelv
                      3.2T  199M  3.1T   1% /storage
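
That lines up with the raw sector counts in the blockdev --report
quoted below: /dev/cciss/c0d1 is 6915815280 x 512 B ~= 3.22 TiB,
while the old /dev/cciss/c0d1p5 partition was only
2612532821 x 512 B ~= 1.22 TiB.  The original PV metadata claimed
3.22T on a 1.22T partition, which is exactly the mismatch Alasdair
flagged; using the whole, unpartitioned device as the PV makes the
full capacity usable.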



On Wed, Jan 7, 2009 at 8:43 PM, teh test3s <tehtest3s at gmail.com> wrote:

> That's a GREAT question.  The really weird thing is that the RAID array in
> question *does* actually have 3.2 TB of free space in it!  (5 x 750 GB
> SATA drives in a RAID 5 array.)  I am completely baffled.
> ~# blockdev --report
> RO    RA   SSZ   BSZ   StartSec     Size    Device
> rw   256   512  4096          0  409599360  /dev/cciss/c0d0
> rw   256   512   512         63  409593177  /dev/cciss/c0d0p1
> rw   256   512  4096          0 6915815280  /dev/cciss/c0d1
> rw   256   512  1024         63     497952  /dev/cciss/c0d1p1
> rw   256   512  4096     498015    7807590  /dev/cciss/c0d1p2
> rw   256   512  1024    8305605          2  /dev/cciss/c0d1p3
> rw   256   512   512    8305668 2612532821  /dev/cciss/c0d1p5
> rw   256   512  4096          0  409591808  /dev/dm-0
>
>
> On Wed, Jan 7, 2009 at 8:35 PM, Alasdair G Kergon <agk at redhat.com> wrote:
>
>> On Wed, Jan 07, 2009 at 08:23:36PM -0500, teh test3s wrote:
>> >   PV                VG         Fmt  Attr PSize   PFree DevSize PV UUID
>> >   /dev/cciss/c0d1p5 volgroup00 lvm2 a-     3.22T 2.22T   1.22T wF9oyR-R0tf-bZeW-0es1-FhYK-f2g3-xS1Rd9
>>
>> So that's what you need to explain:
>>
>>  How come LVM thinks that device is 3.22T while it's really only 1.22T?
>>
>> Presumably blockdev --getsize will tell you the same.
>>
>> IOW, it's not an LVM issue.  Check your cciss partitioning.
>>
>> Alasdair
>> --
>> agk at redhat.com
>>
>
>