In Red Hat 4 :
In Red Hat 5 :
In Red Hat 6 :
A cluster configured with qdiskd supports a maximum of 16 nodes. The limit exists for scalability reasons: increasing the node count increases the amount of synchronous I/O contention on the shared quorum disk device.
NOTE: It is better to have 10 1TB file systems than one 10TB file system.
When you run mkfs.gfs2 without specifying a journal size, a 128MB journal is created by default, which is sufficient for most applications.
If you plan on reducing the size of the journal, be aware that doing so can severely affect performance. Suppose you reduce the journal size to 32MB: it does not take much file system activity to fill a 32MB journal, and when the journal is full, performance slows because GFS2 has to wait for writes to the storage.
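As a sketch, the journal count and journal size can be set at mkfs time with the -j and -J options; the lock table name and device path below are hypothetical examples:

```shell
# Create a GFS2 file system with 3 journals of 128MB each.
# "mycluster:gfs2vol" (cluster:fsname lock table) and the LV path are examples.
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 3 -J 128 /dev/clustervg/gfs2lv
```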
Tie-breakers are additional heuristics that allow a cluster partition to decide whether or not it is quorate in the event of an even-split - prior to fencing.
With such a tie-breaker, nodes not only monitor each other but also an upstream router that is on the same path as cluster communications. If the two nodes lose contact with each other, the one that wins is the one that can still ping the upstream router. That is why, even when using tie-breakers, it is important to ensure that fencing is configured correctly.
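A ping tie-breaker of this kind can be expressed as a qdiskd heuristic in cluster.conf. The router IP, label, and timing values below are illustrative, not recommendations:

```xml
<quorumd interval="1" tko="10" votes="1" label="myqdisk">
    <!-- Heuristic passes while the upstream router (example IP) answers ping -->
    <heuristic program="ping -c1 -w1 192.168.1.1" interval="2" score="1" tko="3"/>
</quorumd>
```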
CMAN has no internal tie-breakers for various reasons. However, tie-breakers can be implemented using the API.
The minimum size of the block device is 10 Megabytes.
Quorum Disk is a disk-based quorum daemon, qdiskd, that provides supplemental heuristics to determine node fitness.
With heuristics you can define factors that are important to the operation of the node in the event of a network partition.
For a 3-node cluster, a quorum exists as long as 2 of the 3 nodes are active, i.e. more than half. But what if, for some reason, the 2nd node also stops communicating with the 3rd node? In that case, under a normal architecture, the cluster would dissolve and stop working. For mission-critical environments and such scenarios we use a quorum disk: an additional disk, mounted on all the nodes running the qdiskd service, with a vote value assigned to it.
So suppose in the above case I have assigned 1 vote to the qdisk. Even after 2 nodes stop communicating with the 3rd node, the cluster still has 2 votes (1 from the qdisk + 1 from the 3rd node), which is still more than half of the vote count for a 3-node cluster. Both inactive nodes would then be fenced, and the 3rd node would remain up and running as a member of the cluster.
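As a sketch, the quorum disk itself is initialized with mkqdisk and then referenced by its label from cluster.conf; the device path and label here are hypothetical:

```shell
# Initialize a shared block device as a quorum disk (destroys existing data on it)
mkqdisk -c /dev/sdb1 -l myqdisk
# List quorum disks visible from this node, to verify the label
mkqdisk -L
```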
clusvcadm -r service_name -m node_name
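For example, relocating a hypothetical service named webby to node2 and then verifying with clustat (service and node names are examples):

```shell
# Relocate the service "webby" to node2
clusvcadm -r webby -m node2.example.com
# Check cluster membership and service status
clustat
# Other common operations: enable a service (-e), disable it (-d)
clusvcadm -e webby
clusvcadm -d webby
```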
A lock state indicates the current status of a lock request.
A lock is always in one of three states:
- GRANTED - the lock request succeeded and the lock was attained in the requested mode.
- CONVERTING - a client attempted to change the lock mode and the new mode is incompatible with an existing lock.
- BLOCKED - the request for a new lock could not be granted because conflicting locks exist.
A lock's state is determined by its requested mode and the modes of the other locks on the same resource.
rgmanager is a service termed the Resource Group Manager. It manages and provides failover capabilities for collections of cluster resources called services, resource groups, or resource trees, and allows administrators to define, configure, and monitor cluster services. In the event of a node failure, rgmanager relocates the clustered service to another node with minimal service disruption.
The use of NetworkManager is not supported on cluster nodes. If you have installed NetworkManager on your cluster nodes, you should either remove it or disable it.
# service NetworkManager stop
# chkconfig NetworkManager off
The cman service will not start if NetworkManager is either running or has been configured to start with the chkconfig command.
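A quick verification sketch before bringing up the cluster stack (output shape varies by release):

```shell
# Confirm NetworkManager is stopped and disabled in every runlevel
service NetworkManager status
chkconfig --list NetworkManager
# Then start the cluster manager
service cman start
```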
A cluster is two or more computers (called nodes or members) that work together to perform a task.
There are four major types of clusters:
- Storage
- High availability
- Load balancing
- High performance
In Red Hat 4 :
In Red Hat 5 :
In Red Hat 6 :
A journaling filesystem is a filesystem that maintains a special file called a journal that is used to repair any inconsistencies that occur as the result of an improper shutdown of a computer.
In journaling file systems, every time GFS2 writes metadata, the metadata is committed to the journal before it is put into place.
This ensures that if the system crashes or loses power, you will recover all of the metadata when the journal is automatically replayed at mount time.
GFS2 requires one journal for each node in the cluster that needs to mount the file system. For example, if you have a 16-node cluster but need to mount only the file system from two nodes, you need only two journals. If you need to mount from a third node, you can always add a journal with the gfs2_jadd command.
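Adding a journal to a mounted GFS2 file system can be sketched as follows (the mount point is an example):

```shell
# Add one more journal to the GFS2 file system mounted at /mnt/gfs2,
# allowing one additional node to mount it
gfs2_jadd -j 1 /mnt/gfs2
```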