Top 50 Oracle Exadata Database Interview Questions You Must Prepare 27.Apr.2024

Before and after any configuration change in the Database Machine.

We can execute Exachk to verify the best-practice setup on the Exadata machine.
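For example, a typical run from the exachk installation directory might look like the following (the -a option runs all checks; exact options and output locations vary by exachk version):

$ ./exachk -a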

Approximately 3 hours per cell and DB server; InfiniBand switch and PDU patching require about 1 hour each.

There are 53 Exadata-specific wait events.
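The exact count depends on the database version; the Exadata-specific wait events generally start with "cell" and can be listed with a query along these lines:

SQL> SELECT name FROM v$event_name WHERE name LIKE 'cell%';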

The flash cache is a hardware component configured in the Exadata storage cell server which delivers high performance in read and write operations.

The primary task of Smart Flash Cache is to hold frequently accessed data in flash cache, so that the next time the same data is required, a physical disk read can be avoided by reading it from the flash cache instead.
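As a quick check, the configured flash cache on a cell can be inspected from CellCLI, for example:

CellCLI> LIST FLASHCACHE DETAIL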

  • Database Instance
  • ASM Instance
  • Database Resource Manager

  • Exachk
  • sundiag
  • OSWatcher
  • OEM 12c

  • writethrough --> Flash cache will be used only for reads.
  • writeback --> Flash cache will be used for both reads and writes (see the sketch below for checking and changing the mode).
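A minimal sketch from CellCLI is shown below; note that switching the mode on a live cell normally involves additional steps (such as flushing or dropping the flash cache and restarting CELLSRV), so treat this as an outline rather than a full procedure:

CellCLI> LIST CELL ATTRIBUTES flashCacheMode
CellCLI> ALTER CELL flashCacheMode=WriteBack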

The protocol used for communication between the database server and the storage server is the iDB (Intelligent Database) protocol.

All the HDDs are hot-swappable, so if proper redundancy is in place we can directly remove the failed HDD and replace it with a new one.

The storage server software takes care of everything in the background after the HDD is replaced.
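A rough outline of the checks involved, assuming CellCLI on the affected cell (the disk name 20:5 below is only a placeholder):

CellCLI> LIST PHYSICALDISK WHERE status != normal DETAIL
CellCLI> ALTER PHYSICALDISK 20:5 DROP FOR REPLACEMENT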

 


Exadata is a pre-configured combination of hardware and software which provides a platform to run Oracle Database.

DBRM is a feature of the database, while IORM is a feature of the storage server software.

 

The cellip.ora file contains the list of storage servers that are accessed by the DB server.
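A sample cellip.ora on a DB server might look like this (the IP addresses are placeholders; newer racks may list multiple IPs per cell on one line):

cell="192.168.10.3"
cell="192.168.10.4"
cell="192.168.10.5"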

  • GoldenGate
  • Transportable Tablespace
  • Incremental Transportable Tablespace
  • Data Pump

IORM stands for I/O Resource Manager, which manages the I/O of multiple databases on the storage cells.

SNMP : Simple Network Management Protocol

It can be done through the ILOM of the DB or cell server.
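For example, a server can be power-cycled from the ILOM command line roughly as follows (the ILOM hostname is a placeholder):

$ ssh root@exa01dbadm01-ilom
-> stop /SYS
-> start /SYS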

  • It is not required to take a backup manually, as it happens automatically.
  • Exadata uses an internal USB drive, called the CELLBOOT USB flash drive, to take the backup of the software.

IORM stands for I/O Resource Manager. It manages the I/O demand based on the configuration, with the amount of resources available. It ensures that none of the cells become oversubscribed with I/O requests. This is achieved by managing the incoming requests at a consumer-group level. Using IORM, you can divide the I/O bandwidth between multiple databases. To implement IORM, resource groups, consumers, and plans need to be created.
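A minimal sketch of an inter-database IORM plan from CellCLI, assuming two databases named PRODDB and TESTDB (the names and allocations are placeholders):

CellCLI> ALTER IORMPLAN dbplan=((name=PRODDB, level=1, allocation=70), (name=TESTDB, level=2, allocation=30))
CellCLI> LIST IORMPLAN DETAIL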

Some of the key hardware and software features are:
Hardware level
• Storage Server Cells.
• High Speed InfiniBand Switch.
Software level
• Smart Scan.
• Flash Cache.
• Hybrid Columnar Compression.
• IORM (I/O Resource Manager).

There are 14 storage cells in a full rack Exadata machine.

  • High Capacity disks come with more storage space and lower RPM (7.2K).
  • High Performance disks come with less storage and higher RPM (15K).

The parameter PARALLEL_FORCE_LOCAL can be specified at the session level for a particular job.
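For example, to restrict parallel execution servers to the local instance for the current session:

SQL> ALTER SESSION SET parallel_force_local = TRUE;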

Hybrid Columnar Compression, also called HCC, is a feature of Exadata used for compressing data at the column level for a table. It creates compression units which consist of logical groupings of column values, typically spanning several data blocks. Each data block has data from columns for multiple rows. This algorithm has the potential to reduce the storage used by the data and reduce disk I/O, enhancing performance for queries.

Four 96GB PCIe flash memory cards are present on each Exadata Storage Server cell, which provide very fast access to the data stored on them. This reduces data access latency by retrieving data from flash memory rather than having to access it from disk. A total flash storage of 384GB per cell is available on the Exadata appliance.
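The flash modules presented to a cell can be listed from CellCLI, for example:

CellCLI> LIST PHYSICALDISK WHERE diskType=FlashDisk DETAIL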

STEPS:

  • Create a directory
  • Create a tablespace in the database that you are going to use for DBFS
  • Create a user for DBFS
  • Grant the required privileges to the created user
  • Connect to the database as the created user
  • Create the DBFS filesystem by invoking dbfs_create_filesystem_advanced
  • Mount the file system by starting dbfs_client (a sketch of these commands follows)
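A minimal sketch of these steps, assuming a tablespace dbfs_ts, a user dbfs_user, a filesystem name FS1 and a mount point /mnt/dbfs (all placeholders):

SQL> CREATE TABLESPACE dbfs_ts DATAFILE SIZE 10G AUTOEXTEND ON;
SQL> CREATE USER dbfs_user IDENTIFIED BY password DEFAULT TABLESPACE dbfs_ts QUOTA UNLIMITED ON dbfs_ts;
SQL> GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, DBFS_ROLE TO dbfs_user;
SQL> CONNECT dbfs_user
SQL> @?/rdbms/admin/dbfs_create_filesystem_advanced.sql dbfs_ts FS1 nocompress nodeduplicate noencrypt non-partition
$ nohup dbfs_client dbfs_user@ORCL -o allow_other,direct_io /mnt/dbfs &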

Depending on the downtime allowed there are several options:
• Traditional Export/Import
• Oracle DataGuard
• Tablespace transportation.
• GoldenGate Replication after a data restore onto Exadata.

  • The Exadata Appliance configuration comes as a Full Rack, Half Rack or Quarter Rack.
  • The Full Rack X2-2 has 6 CPUs per node with Intel Xeon 5670 processors and a total of 8 Database Server nodes, also known as compute nodes.
  • These servers have 96GB of memory on each node. A total of 14 Storage server cells communicate with the storage and push the requested data from the storage to the compute nodes.
  • The Half Rack has exactly half the capacity. It has 6 CPUs per node with Intel Xeon 5670 processors and a total of 4 Database Server nodes. It has 96GB of memory per database server node, with a total of 7 Storage server cells.

  • ASR is a tool to manage Oracle hardware. ASR stands for Auto Service Request.
  • Whenever any hardware fault occurs, ASR automatically raises an SR with Oracle Support and sends a notification to the respective customer.

The X3-8 consists of 2 large SMP compute servers, while the X3-2 can scale to as many as 8 compute servers as processing requirements increase.

 

EHCC is Exadata Hybrid Columnar Compression which is used to compress data in the Database.

It refers to the fact that part of the traditional SQL processing done by the database can be “offloaded” from the database layer to the storage layer.

The primary benefit of Offloading is the reduction in the volume of data that must be returned to the database server. This is one of the major bottlenecks of most  large databases.
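One rough way to gauge how much work is being offloaded is to compare the bytes eligible for predicate offload with the bytes actually returned over the interconnect by Smart Scan, for example:

SQL> SELECT name, value FROM v$sysstat
     WHERE name IN ('cell physical IO bytes eligible for predicate offload',
                    'cell physical IO interconnect bytes returned by smart scan');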

  • Public/Client Network - For Application Connectivity.
  • Management Network -  For Exadata H/W management.
  • Private Network - For cluster inter connectivity and Storage connectivity.

SQL>alter table table_name move compress for query high;
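Whether and how a table is compressed can then be checked from the data dictionary, for example (TABLE_NAME is a placeholder):

SQL> SELECT table_name, compression, compress_for FROM user_tables WHERE table_name = 'TABLE_NAME';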

Cell and Grid Disks are logical components of the physical Exadata storage. A cell, or Exadata Storage Server cell, is a combination of disk drives put together to store user data. Each Cell Disk corresponds to a LUN (Logical Unit) which has been formatted by the Exadata Storage Server Software. Typically, each cell has 12 disk drives mapped to it.

Grid Disks are created on top of Cell Disks and are presented to Oracle ASM as ASM disks. Space is allocated in chunks from the outer tracks of the Cell disk and moving inwards. One can have multiple Grid Disks per Cell disk.
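A minimal sketch of creating grid disks from CellCLI (the DATA prefix and 400G size are placeholders):

CellCLI> LIST CELLDISK ATTRIBUTES name, freeSpace
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATA, size=400G
CellCLI> LIST GRIDDISK ATTRIBUTES name, size, status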

  • CellCLI can be used on the respective storage cell only.
  • DCLI (Distributed Command Line Utility) can be used to run a command across multiple storage cells as well as DB servers (see the example below).
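For example, assuming a file cell_group that lists the cell hostnames one per line:

$ dcli -g cell_group -l celladmin "cellcli -e list cell attributes name, status"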

 

  • Database servers have two options for the OS, either Linux or Solaris, which can be finalized at the time of configuration.
  • Cell storage comes with Linux only.

  • Install OEM agent on DB server
  • Launch auto discovery with the use of One Command XML file
  • Specify required credentials for all the components
  • Review Configuration
  • Complete the setup

A spine switch is used to connect or add more Exadata machines to the cluster.

CellCLI> ALTER CELL flashCacheCompress=true

512MB per module.
Each storage cell has 4 modules, so it is 4 x 512MB per cell.

 

  • Storage Indexes consist of a minimum and a maximum value for up to eight columns. This structure is maintained for 1MB chunks of storage (storage regions). 
  • Storage Indexes are stored in memory only and are never written to disk.
  • Storage Indexes filter out data from consideration, avoiding unnecessary disk I/O (the savings can be checked with the query below).
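The savings from storage indexes show up as a cumulative statistic, which can be checked with a query like:

SQL> SELECT name, value FROM v$sysstat WHERE name = 'cell physical IO bytes saved by storage index';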