Top 22 SAP HANA SQL Script Interview Questions You Must Prepare

It is a set of SQL extensions for the SAP HANA database that allow developers to push data-intensive logic into the database.

TRACE operator. It traces the tabular data passed as its argument into a local temporary table and returns its input unmodified. The names of the temporary tables can be retrieved from the SYS.SQLSCRIPT_TRACE monitoring view.

Example: out = TRACE (:input);
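As a hedged sketch, TRACE might be used inside a procedure as follows; the procedure name trace_demo, the source table sales and its columns are assumptions for illustration only:

    CREATE PROCEDURE trace_demo (OUT result TABLE (id INT, amount DECIMAL(10,2)))
    LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
      -- "sales" is a hypothetical source table
      input = SELECT id, amount FROM sales;
      -- TRACE copies :input into a local temporary table and returns it unchanged;
      -- the generated table name can be looked up in SYS.SQLSCRIPT_TRACE
      result = TRACE (:input);
    END;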

BREAK means the loop should stop processing entirely; CONTINUE means the loop should stop processing the current iteration and immediately start processing the next iteration.
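A minimal sketch of both statements inside a FOR loop; this fragment assumes it sits inside the body of a SQLScript procedure, and the loop bounds are arbitrary:

    BEGIN
      DECLARE i INT;
      FOR i IN 1..10 DO
        IF :i = 3 THEN
          CONTINUE;  -- skip the rest of this iteration and go straight to i = 4
        END IF;
        IF :i = 7 THEN
          BREAK;     -- leave the loop entirely
        END IF;
        -- ... work done for the remaining values of i ...
      END FOR;
    END;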

Yes, it is possible to specify the type of data load and replication in SAP HANA: it can be in real time, or scheduled by time or by interval.

Private attributes are attributes used inside a modeling view that cannot be used outside that view. They are used in a modeling view to customize the behavior of an attribute for that view only.

SAP HANA 1.0 is an analytics appliance consisting of certified hardware, an In-Memory Database (IMDB), an analytics engine and some tooling for getting data in and out of HANA. The logic and structures are built by the user, and a tool is used to visualize or analyze the data.

Example: SAP BusinessObjects.

The storage is shared, network-attached, and varies from vendor to vendor. Both regular magnetic disks and SSD storage can be used for the backup of the database (HANA runs in memory, remember, so disk storage is just for backup and, later, for data ageing). As a rule of thumb you require 2x the storage that you have RAM, and RAM is 2x the database size - i.e. storage size = 4x database size. In most cases there is also additional ultra-high-speed SSD storage for log files.
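A hedged worked example of this sizing rule (the appliance size is purely illustrative): a server with 512 GB of RAM would be sized for a database of roughly 256 GB and would need roughly 1 TB of persistent storage (2x RAM, i.e. 4x the database size), plus the separate high-speed SSD volume for log files.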

Latency is the length of time it takes to replicate data (a table entry) from the source system to the target system.

When a user creates a new procedure, the HANA database query compiler:

  • Parses the statements
  • Checks the statements for semantic correctness
  • Optimizes the code for declarative and imperative logic
  • Generates code: calculation models for declarative logic and L nodes for imperative logic (see the sketch after this list)
  • Compiles the procedure, creating content in the database catalog and in the repository
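To make the declarative/imperative split concrete, here is a minimal, hedged sketch of such a procedure; the orders table, its columns and the procedure name are assumptions for illustration only:

    CREATE PROCEDURE get_large_orders (
        IN  threshold DECIMAL(10,2),
        OUT result    TABLE (order_id INT, total DECIMAL(10,2)) )
    LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
      DECLARE cnt INT;

      -- Declarative logic: compiled into a calculation model
      big = SELECT order_id, total FROM orders WHERE total > :threshold;

      -- Imperative logic: compiled into L nodes
      SELECT COUNT(*) INTO cnt FROM :big;
      IF :cnt = 0 THEN
        result = SELECT order_id, total FROM orders;  -- hypothetical fallback: return all orders
      ELSE
        result = SELECT * FROM :big;
      END IF;
    END;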

The DMIS add-on is installed on the SLT replication server for SAP source systems. For the RFC connection, the role IUUC_REPL_CONTENT is assigned to the user (not DDIC). For a non-SAP source system, the DMIS add-on is not required; a database user with sufficient authorization for data replication is enough.

The information required to create the connection between the source system, the SLT system, and the SAP HANA system is specified within the SLT system as a Configuration. A new configuration can therefore be defined in the Configuration & Monitoring Dashboard (transaction LTR).

From the Administration perspective, navigate to the “Trace Configuration” tab. In order to change settings, you need the system privileges TRACE ADMIN and INIFILE ADMIN.

A transformation rule is a rule specified in the Advanced Replication Settings transaction for source tables so that the data is transformed during the replication process.

For example, one can specify rules to convert fields, fill empty fields, and skip records.

SLT stands for SAP Landscape Transformation, which is a trigger-based replication technology used to pass data from a source system to a target system. The source can be either an SAP or a non-SAP system, whereas the target system is an SAP HANA system containing the HANA database.

SRS has additional restrictions which are worth understanding. It only replicates Unicode data and does not support IBM DB2 compressed tables.

During compilation, the call to the procedure is rewritten for processing by the calculation engine.
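As a hedged usage sketch, calling the hypothetical procedure from the example above could look like this in the SQL console; the parameter value is arbitrary and the ? is the placeholder for the output table:

    CALL get_large_orders(1000.00, ?);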

Each job occupies one background (BGD) work process in the SLT replication server. For each configuration, the parameter Data Transfer Jobs restricts the maximum number of data load jobs for each mass transfer ID (MT_ID).

A mass transfer ID requires at least 4 background jobs to be available:

  • One master job
  • One master controller job
  • At least one data load job
  • One additional job for migration/access plan calculation/changing configuration settings in the "Configuration and Monitoring Dashboard"
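A hedged worked example of this sizing: if the Data Transfer Jobs parameter for a configuration were set to 04, that mass transfer ID would need about 1 (master) + 1 (master controller) + 4 (data load) + 1 (additional) = 7 background work processes available on the SLT replication server.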

If Sybase Replication Server (SRS) is used for near real-time data, the licensing still needs to be checked (SAP has license deals pending). If the source runs on DB2 it is fine, but with Oracle and Microsoft SQL Server there are some license challenges, so check whether the license was bought through SAP, because you may have a limited license that does not allow extraction.

In theory at least, very well. The biggest single-server HANA hardware can run most mid-size workloads: 2TB of in-memory storage is roughly equivalent to 5-20TB of Oracle storage. HANA also works in a way that makes it possible to chain multiple systems together, which means that scalability has thus far been determined by the size of customers' wallets. Whilst SAP talks up "Big Data" quite a lot - the kind of huge datasets that Facebook or Google have to store, measured in petabytes rather than terabytes - HANA currently only scales to the small end of that range.

Advanced replication settings allow you to modify target table structures, specify performance optimization settings, and define transformation rules.

SQLScript should be used in cases where other modeling constructs of HANA, such as attribute views or analytic views, are not sufficient.