Top 26 Oracle 11g Rac Interview Questions You Must Prepare 19.Mar.2024

$ crsctl check crs

CRS-4638: Oracle High Availability Services is online.

CRS-4537: Cluster Ready Services is online.

CRS-4529: Cluster Synchronization Services is online.

CRS-4533: Event Manager is online.

  • Stripes files rather than logical volumes.
  • Provides redundancy on a file basis.
  • Enables online disk reconfiguration and dynamic rebalancing.
  • Significantly reduces the time needed to resynchronize after a transient failure by tracking changes while the disk is offline.
  • Provides adjustable rebalancing speed.
  • Is cluster-aware.
  • Supports reading from mirrored copy instead of primary copy for extended clusters.
  • Is automatically installed as part of the Grid Infrastructure.

In 10g this is not possible, whereas in 11g it is:

[root@pic1]# crsctl start cluster -all

[root@pic2]# crsctl stop cluster -all

  • cat /etc/oracle/ocr.loc

ocrconfig_loc=+DATA

local_only=FALSE

  • # ocrcheck (also reports on OCR integrity)
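The registry location file can be inspected programmatically. A minimal sketch that parses a sample copy of the file; the key names (`ocrconfig_loc`, `local_only`) are taken from a typical 11gR2 installation, and we read a sample file rather than the real `/etc/oracle/ocr.loc`:

```shell
# Create a sample copy of /etc/oracle/ocr.loc (assumed typical contents):
cat > /tmp/ocr.loc.sample <<'EOF'
ocrconfig_loc=+DATA
local_only=FALSE
EOF

# Extract the OCR location the way a health-check script might:
awk -F= '$1 == "ocrconfig_loc" {print $2}' /tmp/ocr.loc.sample
```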

  • Cluster Interconnect (HAIP)
  • Shared Storage (OCR/Voting Disk)
  • Clusterware software

To start or stop Oracle Clusterware on a specific node:

  • # crsctl stop crs
  • # crsctl start crs

To enable or disable Oracle Clusterware on a specific node:

  • # crsctl enable crs
  • # crsctl disable crs

crsctl enable crs (as root)

to disable

crsctl disable crs (as root)

Yes. As per the documentation, if you have multiple voting disks you can add another one online. But if you have only one voting disk and it is lost, the cluster will be down; in that case you need to start CRS in exclusive mode and add the voting disk using:

crsctl add css votedisk <path>

Well, there is not much difference between 10g and 11gR1 RAC.

But there is a significant difference in 11gR2.

Prior to 11gR2 (that is, in 10g and 11gR1) RAC, the following were managed by Oracle CRS:

  • Databases
  • Instances
  • Applications
  • Node Monitoring
  • Event Services
  • High Availability

From 11gR2 onwards it is a complete HA stack, managing and providing the following resources much like other cluster software such as VCS:

  • Databases
  • Instances
  • Applications
  • Cluster Management
  • Node Management
  • Event Services
  • High Availability
  • Network Management (provides DNS/GNS/mDNS services on behalf of other traditional services), SCAN (Single Client Access Name), and HAIP.
  • Storage Management (with the help of ASM and the new ACFS filesystem).
  • Time synchronization (rather than depending on traditional NTP).
  • Removes the dependency on the OS-level hang checker, managing instead with its own additional monitor processes.

To check the viability of Cluster Synchronization Services (CSS) across nodes:

$ crsctl check cluster

CRS-4537: Cluster Ready Services is online.

CRS-4529: Cluster Synchronization Services is online.

CRS-4533: Event Manager is online.
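The health-check output above is script-friendly. A sketch that counts the "online" messages from an assumed sample of `crsctl check cluster` output, matching the three CRS codes shown above:

```shell
# Assumed sample output of `crsctl check cluster` on a healthy node:
check_output='CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

# Count how many services report online (expect all three):
online_count=$(printf '%s\n' "$check_output" | grep -c 'is online$')
echo "$online_count"
```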

To add a SCAN VIP resource:

  • $ srvctl add scan -n cluster01-scan

To remove Clusterware resources from SCAN VIPs:

  • $ srvctl remove scan [-f]

To add a SCAN listener resource:

  • $ srvctl add scan_listener
  • $ srvctl add scan_listener -p 1521

To remove Clusterware resources from all SCAN listeners:

  • $ srvctl remove scan_listener [-f]

Software that provides various interfaces and services for a cluster. Typically, this includes capabilities that:

  • Allow the cluster to be managed as a whole.
  • Protect the integrity of the cluster.
  • Maintain a registry of resources across the cluster.
  • Deal with changes to the cluster.
  • Provide a common view of resources.

Basically, the Oracle kernel needs to be relinked with the RAC option on when you convert to RAC; that is the difference, as it enables the RAC background processes such as LMON, LCK, LMD, and LMS.

To turn on RAC:

  • # link the oracle libraries
  • $ cd $ORACLE_HOME/rdbms/lib
  • $ make -f ins_rdbms.mk rac_on
  • # rebuild oracle
  • $ cd $ORACLE_HOME/bin
  • $ relink oracle

Oracle RAC is composed of two or more database instances. Each is composed of memory structures and background processes, the same as a single-instance database. Oracle RAC instances additionally use two services, GES (Global Enqueue Service) and GCS (Global Cache Service), which enable Cache Fusion.

Oracle RAC instances include the following additional background processes:

  • ACMS—Atomic Controlfile to Memory Service
  • GTX0-j—Global Transaction Process
  • LMON—Global Enqueue Service Monitor
  • LMD—Global Enqueue Service Daemon
  • LMS—Global Cache Service Process
  • LCK0—Instance Enqueue Process
  • RMSn—Oracle RAC Management Processes
  • RSMN—Remote Slave Monitor

On a single node in the cluster, add the new global interface specification:

  • $ oifcfg setif -global eth2/192.0.2.0:cluster_interconnect

Verify the changes with oifcfg getif, and then stop Clusterware on all nodes by running the following commands as root on each node:

  • # oifcfg getif
  • # crsctl stop crs

Assign the network address to the new network adapters on all nodes using ifconfig:

  • # ifconfig eth2 192.0.2.15 netmask 255.255.255.0 broadcast 192.0.2.255

Remove the former adapter/subnet specification and restart Clusterware:

  • $ oifcfg delif -global eth1/192.168.1.0
  • # crsctl start crs
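The verification step above can also be scripted. A sketch that parses an assumed sample of `oifcfg getif` output (columns: interface, subnet, scope, role) to find which interface carries the cluster interconnect:

```shell
# Assumed sample output of `oifcfg getif` after the change above:
getif_output='eth0  192.0.2.0  global  public
eth2  192.0.2.0  global  cluster_interconnect'

# The fourth column is the role; print the interconnect interface:
printf '%s\n' "$getif_output" | awk '$4 == "cluster_interconnect" {print $1}'
```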

  1. Each cluster node has a local registry for node-specific resources.
  2. The OLR is created automatically when Grid Infrastructure is installed on each node in the cluster.
  3. One of its functions is to facilitate Clusterware startup in situations where ASM stores the OCR and voting disks.
  4. You can check the status of the OLR using ocrcheck -local.

crsctl manages Clusterware-related operations:

  • Starting and stopping Oracle Clusterware.
  • Enabling and disabling Oracle Clusterware daemons.
  • Registering cluster resources.

srvctl manages Oracle resource-related operations:

  • Starting and stopping database instances and services.
  • From 11gR2 onwards it also manages cluster resources such as networks, VIPs, and disks.

ASM can use variable size data extents to support larger files, reduce memory requirements, and improve performance.

  1. Each data extent resides on an individual disk.
  2. Data extents consist of one or more allocation units.

The data extent size is:

  • Equal to AU for the first 20,000 extents (0–19999)
  • Equal to 4 × AU for the next 20,000 extents (20000–39999)
  • Equal to 16 × AU for extents 40,000 and beyond
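The three-tier sizing rule above can be sketched as a small helper; the function name is illustrative, and it returns the extent size as a multiple of the allocation unit (AU) for a given extent number:

```shell
# Hypothetical helper mirroring the variable extent-size thresholds above:
extent_size_in_au() {
  n=$1
  if [ "$n" -le 19999 ]; then
    echo 1        # extents 0-19999: 1 x AU
  elif [ "$n" -le 39999 ]; then
    echo 4        # extents 20000-39999: 4 x AU
  else
    echo 16       # extents 40000 and beyond: 16 x AU
  fi
}

extent_size_in_au 100
extent_size_in_au 25000
extent_size_in_au 40000
```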

ASM stripes files using extents with a coarse method for load balancing or a fine method to reduce latency.

  • Coarse-grained striping is always equal to the effective AU size.
  • Fine-grained striping is always equal to 128 KB.

A SCAN listener is additional to the node listeners. It listens for incoming database connection requests from clients that arrive via the SCAN IP, and it has endpoints configured to the node listeners, routing each database connection request to the appropriate node listener.

Grid Naming Service (GNS) is an alternative to DNS: it acts as a subdomain of your DNS but is managed by Oracle. With GNS, the connection is routed to the cluster IP and managed internally.

ASM imposes the following limits:

  • 63 disk groups in a storage system
  • 10,000 ASM disks in a storage system
  • Two-terabyte maximum storage for each ASM disk (non-Exadata)
  • Four-petabyte maximum storage for each ASM disk (Exadata)
  • 40-exabyte maximum storage for each storage system
  • 1 million files for each disk group

ASM file size limits (database limit is 128 TB):

  1. External redundancy maximum file size is 140 PB.
  2. Normal redundancy maximum file size is 42 PB.
  3. High redundancy maximum file size is 15 PB.

In 11gR2 the listeners run from the Grid Infrastructure software home.

  • The node listener is a process that helps establish network connections from ASM clients to the ASM instance.
  • Runs by default from the Grid $ORACLE_HOME/bin directory.
  • Listens on port 1521 by default.
  • Is the same as a database instance listener.
  • Is capable of listening for all database instances on the same machine in addition to the ASM instance.
  • Can run concurrently with separate database listeners or be replaced by a separate database listener.
  • Is named tnslsnr on the Linux platform.

With Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met, and creates shell scripts, called fixup scripts, to finish incomplete system configuration steps. If OUI detects an incomplete task, it generates a fixup script (runfixup.sh). You can run the fixup script after you click the Fix and Check Again button.

The Fixup script does the following:

If necessary, it sets kernel parameters to values required for successful installation, including:

  • Shared memory parameters.
  • Open file descriptor and UDP send/receive parameters.

Sets permissions on the Oracle Inventory (central inventory) directory. Reconfigures primary and secondary group memberships for the installation owner, if necessary, for the Oracle Inventory directory and the operating system privileges groups.

  • Sets shell limits if necessary to required values.

crsctl stop cluster (possible only from 11gR2 onwards). Note that crsctl commands can now operate cluster-wide: if you do not specify a node explicitly, the command executes globally.

for example:

  • crsctl stop cluster -all (stops the Clusterware stack on all nodes).
  • crsctl stop cluster -n <nodename> (stops it only on the specified node).
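The global-versus-node-scoped behaviour can be captured in a small wrapper. A sketch whose function name is illustrative; it only assembles and prints the command line rather than executing crsctl:

```shell
# Illustrative helper: builds the crsctl stop command, defaulting to the
# global form (-all) when no node name is supplied.
stop_cluster_cmd() {
  if [ -z "$1" ]; then
    echo "crsctl stop cluster -all"
  else
    echo "crsctl stop cluster -n $1"
  fi
}

stop_cluster_cmd           # global: all nodes
stop_cluster_cmd rac1      # only the specified node
```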

The cluster stack will be down because CSSD is unable to maintain cluster integrity; this is true in 10g. From 11gR2 onwards, only the CRSD stack goes down while OHASD remains up and running. You can add the OCR back by restoring an automatic backup or importing a manual backup.