DMEVENTD(8) System Manager's Manual DMEVENTD(8)

NAME
dmeventd - Device-mapper event daemon
SYNOPSIS
dmeventd [-d [-d [-d]]] [-f] [-h] [-R] [-V] [-?]
DESCRIPTION
dmeventd is the event monitoring daemon for device-mapper devices. Library plugins can register and carry out actions triggered when particular events occur.
LVM PLUGINS
Mirror Attempts to handle device failure automatically. See lvm.conf(5).
Raid Attempts to handle device failure automatically. See lvm.conf(5).
Snapshot
Monitors how full a snapshot is becoming and emits a warning to syslog when it exceeds 80% full. The warning is repeated when 85%,
90% and 95% of the snapshot is filled. See lvm.conf(5).
Thin Monitors how full a thin pool is becoming and emits a warning to syslog when it exceeds 80% full. The warning is repeated when 85%,
90% and 95% of the thin pool is filled. See lvm.conf(5).
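The syslog warnings above can be paired with automatic extension configured in lvm.conf(5). A minimal sketch of the relevant activation settings (setting names as shipped in stock lvm.conf; the threshold and percent values here are illustrative, not defaults):

```
activation {
    # Let dmeventd monitor mirrors, raid, snapshots and thin pools.
    monitoring = 1

    # Auto-extend a snapshot once it passes 70% full, growing it by 20%.
    snapshot_autoextend_threshold = 70
    snapshot_autoextend_percent = 20

    # Likewise for thin pools.
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
```

With a threshold of 100 (the conventional "disabled" value), no auto-extension occurs and only the warnings are emitted.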
OPTIONS
-d Repeat from 1 to 3 times (-d, -dd, -ddd) to increase the detail of debug messages sent to syslog. Each extra d adds more debugging information.
-f Don't fork, run in the foreground.
-h, -? Show help information.
-R Replace a running dmeventd instance. The running dmeventd must be version 2.02.77 or newer. The new dmeventd instance will obtain a
list of devices and events to monitor from the currently running daemon.
-V Show version of dmeventd.
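For illustration, a hedged sketch of typical invocations. The first two command lines come straight from the options above; debug_flag is a hypothetical shell helper for building the repeated -d flag, not part of dmeventd itself:

```shell
# Run dmeventd in the foreground with maximum debug detail:
#   dmeventd -fddd
# Replace a running instance (the running daemon must be >= 2.02.77):
#   dmeventd -R

# Hypothetical helper: build -d/-dd/-ddd from a verbosity level of 1-3.
debug_flag() { printf -- '-%s' "$(printf 'd%.0s' $(seq 1 "$1"))"; }
debug_flag 2   # prints -dd
```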
SEE ALSO
lvm(8), lvm.conf(5)

Red Hat Inc DM TOOLS 2.02.105(2)-RHEL7 (2014-03-26) DMEVENTD(8)
LVMETAD(8) System Manager's Manual LVMETAD(8)

NAME
lvmetad - LVM metadata cache daemon
SYNOPSIS
lvmetad [-l {all|wire|debug}] [-p pidfile_path] [-s socket_path] [-f] [-h] [-V] [-?]
DESCRIPTION
lvmetad is a metadata caching daemon for LVM. The daemon receives notifications from udev rules (which must be installed for LVM to work correctly when lvmetad is in use). Through these notifications, lvmetad has an up-to-date and consistent image of the volume groups available in the system.
By default, lvmetad, even if running, is not used by LVM. See lvm.conf(5).
OPTIONS
To run the daemon in a test environment, both the pidfile_path and the socket_path should be changed from the defaults.
-f Don't fork, but run in the foreground.
-h, -? Show help information.
-l {all|wire|debug}
Select the type of log messages to generate. Messages are logged by syslog. Additionally, when -f is given they are also sent to
standard error. Since release 2.02.98, there are two classes of messages: wire and debug. Selecting 'all' supplies both and is
equivalent to a comma-separated list -l wire,debug. Prior to release 2.02.98, repeating -d from 1 to 3 times, viz. -d, -dd, -ddd,
increased the detail of messages.
-p pidfile_path
Path to the pidfile. This overrides both the built-in default (#DEFAULT_PID_DIR#/lvmetad.pid) and the environment variable
LVM_LVMETAD_PIDFILE. This file is used to prevent more than one instance of the daemon running simultaneously.
-s socket_path
Path to the socket file. This overrides both the built-in default (/run/lvm/lvmetad.socket) and the environment variable
LVM_LVMETAD_SOCKET. To communicate successfully with lvmetad, all LVM2 processes should use the same socket path.
-V Display the version of the lvmetad daemon.
ENVIRONMENT VARIABLES
LVM_LVMETAD_PIDFILE
Path for the pid file.
LVM_LVMETAD_SOCKET
Path for the socket file.
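The -p/-s options and the two environment variables can be combined to run lvmetad entirely inside a sandbox, as the OPTIONS section suggests for test environments. A minimal sketch; the /tmp paths are hypothetical and should be adjusted to your setup:

```shell
# Hypothetical sandbox paths for a test environment.
run_dir=/tmp/lvmetad-test
mkdir -p "$run_dir"

# Point all LVM2 processes at the same pidfile and socket.
export LVM_LVMETAD_PIDFILE="$run_dir/lvmetad.pid"
export LVM_LVMETAD_SOCKET="$run_dir/lvmetad.socket"

# Equivalent explicit invocation: foreground, all log classes to stderr:
#   lvmetad -f -l all -p "$run_dir/lvmetad.pid" -s "$run_dir/lvmetad.socket"
```

Because the command-line options override the environment variables, exporting the variables is the more robust choice when several LVM2 tools must agree on the socket path.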
SEE ALSO
lvm(8), lvm.conf(5)

Red Hat Inc LVM TOOLS 2.02.105(2)-RHEL7 (2014-03-26) LVMETAD(8)