[TriLUG] Problem with LVM
tbryan at python.net
Mon Jun 5 12:42:06 EDT 2006
On Sunday 04 June 2006 23:58, Brian McCullough wrote:
> On Sun, Jun 04, 2006 at 03:55:49PM -0400, T. Bryan wrote:
> > I tried rebuilding everything. My only remaining problem is that things
> > don't quite work on boot. Details are below, but after I boot and login,
> > the quickest way to get everything working is to
> > /etc/init.d/lvm stop
> > mdadm -S /dev/md0
> > mdadm -As /dev/md0
> > /etc/init.d/lvm start
> I'm sure that David is quite capable of contributing some words of
> wisdom, here. I'm sure that I'm missing something obvious.
> This is starting to smell suspiciously like a faulty initrd file. As
> you say, you have MD active during the dmesg time, but no sign of LVM
> activity. Although you aren't using LVM as root, it might be a good
> idea to make sure that you have LVM available "in the kernel" at
> boot time.
Good idea. My old LVM setup died when I upgraded my kernel to 2.6, and I've
just been using the stock Debian kernel since. Sounds like it's time to build
a kernel. I'll probably try that first to rule it out.
> Incidentally, side note time -- do you have both MD component partitions
> ( hde1 and hdg1, if I remember correctly ) marked as type "fd",
> Linux raid autodetect? ( not really an issue, since you show that the MD
> part of things is working )
> Also, we haven't discussed this -- we talked about filtering out the
> components of the md array, but did we talk about filtering IN the md0
> device itself? Same part of the lvm.conf file, but using the "a" flag
> instead of the "r".
No. I can try that, too. (Not able to at the moment...broken dist-upgrade.)
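For when I do try it, I think the change Brian is describing would go in the devices section of /etc/lvm/lvm.conf, roughly like this (a sketch using the device names from earlier in the thread; the trailing reject-everything pattern is my assumption about the intended policy):

```
devices {
    # Accept the md array itself, reject its raw components so LVM
    # never sees duplicate PV signatures, then reject everything else.
    filter = [ "a|/dev/md0|", "r|/dev/hde1|", "r|/dev/hdg1|", "r|.*|" ]
}
```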
> OK, I think that we have established the following, but bear with me.
> Use fdisk to create two, identically-sized, type "fd" partitions, hde1
> and hdg1.
> Use mdadm to create md0 from those two partitions, in RAID 1
> Use pvcreate on md0.
> Use vgcreate, using md0.
> Use lvcreate, using new VG.
> Format the new LV.
> Modify fstab appropriately.
> At this point you should be able to mount and use the LV as a normal
> part of your filesystem.
And I can...after the system has booted. I just shouldn't have to do it by
hand every time.
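For the archive, the quoted steps as concrete commands, picking up after the two type-"fd" partitions exist (an outline only: it assumes the device names from this thread and the "localvg" name from my output below, the LV name, size, and mount point are placeholders, it needs root, and it would destroy data on hde1/hdg1, so don't paste it blindly):

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
pvcreate /dev/md0                      # initialize the array as an LVM PV
vgcreate localvg /dev/md0              # VG name taken from my output below
lvcreate -L 100G -n locallv localvg    # size and LV name are placeholders
mke2fs -j /dev/localvg/locallv         # ext3; filesystem choice is mine
echo '/dev/localvg/locallv /local ext3 defaults 0 2' >> /etc/fstab
```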
> > I have the following symlinks in /etc/rc2.d/
> > S25mdadm
> > S25mdadm-raid
> > S26lvm
> Silly question, but S26lvm _does_ point at a reasonable location,
> doesn't it?
Yes. I was actually using /etc/rc2.d/S26lvm stop and start to mess with LVM
after boot, just to verify that the symlink and the linked script were working
as expected.
> > Now, if I run lvmdiskscan, it sees the LVM physical volume
> > /dev/md0 [ 111.79 GB] LVM physical volume
> But you haven't run vgscan yet. That is essential for any of the
> functionality that you are looking for. vgscan followed by vgchange,
> which is what your restart of /etc/init.d/lvm is doing.
> For some reason, this looks as if MD is not actually starting properly
> at boot time -- weird.
At some point it is. When the system finishes booting, /dev/md0 has been
started (assembled and status shown in /proc/mdstat).
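For illustration, a healthy two-disk RAID 1 shows up in /proc/mdstat roughly like this, and the [UU] marker is an easy thing to grep for after boot (sample text only, not my actual output, and the block count is made up):

```shell
# Sample /proc/mdstat for an assembled, non-degraded RAID 1 (illustration).
mdstat='Personalities : [raid1]
md0 : active raid1 hdg1[1] hde1[0]
      117218176 blocks [2/2] [UU]

unused devices: <none>'

# [UU] = both mirror halves up; [_U] or [U_] would mean a degraded array.
state=$(printf '%s\n' "$mdstat" | grep -o '\[UU\]')
echo "$state"    # prints [UU] when the array is healthy
```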
> > # /etc/rc2.d/S26lvm start
> > Setting up LVM Volume Groups...
> > Reading all physical volumes. This may take a while...
> > Found volume group "localvg" using metadata type lvm2
> > /dev/localvg: opendir failed: No such file or directory
> > 1 logical volume(s) in volume group "localvg" now active
> I don't like that error message. It seems to be out of order.
Okay. I'm planning to ignore it for now until I figure out whether any of the
other things fix my problem. Maybe then I'll return to this one to see
whether the error message changes.
> I agree that something is broken. I just think that I'm missing
> something vital in what you are saying, or not saying.
Thanks for all of your help. It's quite frustrating since the LVM on RAID 1
is *almost* working. Not well enough that I feel safe trusting it or using
it. Not broken enough that I can just search Google for the error and find a
known fix.
I haven't done much with my systems in a while. I forgot how much "fun" Linux
could be. :-/