In a RAC environment, it is the combining of data blocks shipped across the interconnect from remote database caches (SGA) to the local node, in order to fulfill the requirements of a transaction (DML, a query of the data dictionary).
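The volume of blocks received this way can be observed from the global cache statistics (a minimal check, assuming SYSDBA access; run from any instance):

    SELECT inst_id, name, value
      FROM gv$sysstat
     WHERE name IN ('gc cr blocks received', 'gc current blocks received')
     ORDER BY inst_id, name;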
There are two types of connection load balancing: client-side load balancing, which is configured in the client's connect descriptor, and server-side load balancing, in which the listener routes connection requests to the least-loaded instance based on load information posted by the database.
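A client-side example in tnsnames.ora (a sketch; the host names and the service name orcl are placeholders):

    ORCL =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = ON)
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
        )
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )

Server-side load balancing is enabled by setting the REMOTE_LISTENER initialization parameter so that every instance registers with all listeners (in 11gR2, typically the SCAN address).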
The Clusterware is installed on each node (in an Oracle home) and on the shared disks (the voting disks and the OCR file).
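Both shared components can be verified from any node once the Grid Infrastructure environment is set:

    crsctl query css votedisk
    ocrcheck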
The VIP is an alternate virtual IP address assigned to each node in a cluster. During a node failure, the VIP of the failed node moves to a surviving node and signals to applications that the node has gone down. Without the VIP, applications would wait for a TCP timeout before finding out that the session is no longer live due to the failure.
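The VIP configuration and state for a node can be checked with srvctl (the node name racnode1 is a placeholder):

    srvctl config vip -n racnode1
    srvctl status vip -n racnode1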
Oracle Clusterware is used to manage high-availability operations in a cluster. Anything that Oracle Clusterware manages is known as a CRS resource. Some examples of CRS resources are a database, an instance, a service, a listener, a VIP address, an application process, etc.
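All registered CRS resources, their targets, and their current states can be listed from any node (11gR2 syntax):

    crsctl stat res -t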
The Cluster Health Monitor (CHM) stores operating system metrics in the CHM repository for all nodes in a RAC cluster. It stores information on CPU, memory, process, network, and other OS data. This information can later be retrieved and used to troubleshoot and identify cluster-related issues. CHM is installed by default with an 11gR2 Grid Infrastructure installation. The data is stored in a master repository and replicated to a standby repository on a different node.
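CHM data is retrieved with the oclumon utility; for example, to dump the node views of all nodes for the last five minutes (the duration is illustrative):

    oclumon dumpnodeview -allnodes -last "00:05:00"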
Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches.
It is a key component of Oracle's private cloud architecture.
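All instances opening the one shared database can be listed with a quick query (assuming SYSDBA access):

    SELECT inst_id, instance_name, host_name, status
      FROM gv$instance;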
ACMS (Atomic Controlfile to Memory Service) ensures that a distributed update to the System Global Area (SGA) in a RAC setup is either globally committed on success or globally aborted if a failure occurs.
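The process and its documented role can be confirmed on a running instance:

    SELECT name, description
      FROM v$bgprocess
     WHERE name = 'ACMS';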
When an Oracle Flex ASM instance fails on a particular node, the clients it was serving fail over to another Flex ASM instance on a different node in the cluster.
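Whether Flex ASM is enabled, and where the ASM instances are currently running, can be checked with (12c commands):

    asmcmd showclustermode
    srvctl status asm -detail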
The background processes specific to RAC are given as follows:
- LMON: Global Enqueue Service Monitor
- LMD: Global Enqueue Service Daemon
- LMS: Global Cache Service process
- LCK0: Instance Enqueue process
- ACMS: Atomic Controlfile to Memory Service
- GTX0-j: Global Transaction process
- RMSn: Oracle RAC Management processes
- RSMN: Remote Slave Monitor
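Their presence can be verified at the operating-system level, assuming the default ora_<process>_<SID> process naming:

    ps -ef | egrep 'ora_(lmon|lmd|lms|lck|acms|rms|rsmn|gtx)'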
The base software is installed on each node of the cluster, and the database storage resides on the shared disks.
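When the shared storage is managed by ASM, the disk groups holding it can be listed with:

    asmcmd lsdg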