Top 50 SAP BI Interview Questions You Must Prepare 19.Mar.2024

  • Uses generated numeric keys and aggregates in its own tables for faster access.
  • Uses an external hierarchy.
  • Supports multiple languages.
  • Contains master data common to all cubes.
  • Supports slowly changing dimensions.

SAP has defined a business blueprint phase to help extract pertinent information about your company that is necessary for implementation. These blueprints are in the form of questionnaires that are designed to probe for information that uncovers how your company does business. As such, they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements.

Functional specs capture the requirements of the business user. Technical specs translate these requirements into a technical design. Let's say the functional spec says:

  1. The user should be able to enter the key date, fiscal year, and fiscal version.
  2. The Company variable should default to USA, but the user can change it by choosing another country from the drop-down list.
  3. Calculations and formulas in the report should be displayed with a precision of one decimal place.
  4. The report should return 12 months of data for the fiscal year that the user enters, or alternatively display quarterly values. Functional specs are also called software requirements.

The technical spec then follows from this, resolving each of the line items listed above.

  1. To offer entry of the key date, fiscal year, and fiscal version, certain InfoObjects must be available in the system. If they are available, should variables be created for them so that they can serve as user entry variables? The technical spec describes the approach for creating such variables, where this is done, the technical names of the objects used, and the technical names of the objects created as a result of this report.
  2. The same applies to the remaining items: how each variable is set up.
  3. Which property changes are needed to achieve the required precision.
  4. How the 12 months of data will be retrieved, the technical and display names of the report, who is authorized to run it, and so on are all clearly specified in the technical spec.

Through Data Marts, data can be loaded from one InfoCube to another InfoCube.

BAPI & ALE are programs to extract data from DataSources. BW connects SAP systems (R/3 or BW) and flat files via ALE. BW connects with non SAP systems via BAPI.

In BW, saving actually saves the defined structure, which can then be retrieved whenever required.

Activating saves the structure and also generates the required tables and structures.

Yes, when we want to report on characteristics or master data. We have to right-click the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER an InfoProvider and report on it.

When an InfoCube is partitioned on a time characteristic, two additional partitions are created: one for dates before the begin date and one for dates after the end date of the partitioning range. For example, partitioning on 0CALMONTH from 01.2023 to 12.2023 yields 12 monthly partitions plus these two boundary partitions, 14 in total.

Process chains that are grouped together are called a meta chain. Each sub-chain is triggered only when the previous process chain completes successfully.

Amount and quantity key figures are always combined with units. For example, sales will be linked to a currency and inventory will be linked to a unit of measure. If your design does not need units, use the number or integer type instead to improve performance.

Replication of a DataSource enables the extract structure from the source system to be replicated in BW.

The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome. In LO, only one delta queue is used for delta management.

No, we cannot, there are no hierarchies in CO/PA.

Yes. By deleting the setup tables, we delete the data left in them from the previous update. This avoids loading the same records into BW twice.

It allows us to select a particular value of a particular field and delete its contents.

Data Transfer Process:

Data transfer process (DTP) loads data within BI from one object to another object with respect to transformations and filters. In short, DTP determines how data is transferred between two persistent objects.

It is used to load data from the PSA to a data target (an InfoCube, ODS object, or InfoObject); thus, it replaces the data mart interface and the InfoPackage.

The read pointer connects aggregates and the InfoCube. It can be viewed in table RSDDAGGRDIR, in the field RN_SID. Whenever data is rolled up, it holds the request number of that roll-up and is checked against the next request for the second roll-up. Follow the table entries for a particular InfoCube while rolling up the data.

Data Marts are used to exchange data between different BW systems or to update data within the same BW system (Myself Data Mart). Here, the InfoProviders that are used to provide data are called Data Marts.

Start routines: The start routine is run for each data package after the data has been written to the PSA and before the transfer rules are executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global data structures; these structures or tables can then be accessed in the other routines. The entire data package in the transfer structure format is used as a parameter for the routine.
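
A minimal sketch of what such a start routine might look like, assuming the 3.x-style interface in which the whole data package is passed as an internal table. The structure ZBW_TRANSFER_STRUC, the lookup table ZEXCHANGE_RATE, and the field DOC_TYPE are placeholders, and the exact routine frame that BW generates varies by release.

```abap
* Global declarations: visible to the transfer/update routines as well.
* ZEXCHANGE_RATE is a hypothetical lookup table used only for illustration.
DATA gt_rates TYPE SORTED TABLE OF zexchange_rate WITH UNIQUE KEY currency.

*---------------------------------------------------------------------*
* Sketch of a start routine: buffer lookup data once per data package *
* and drop irrelevant records before the transfer rules run.          *
* DATA_PACKAGE and ABORT mirror the generated parameters; the real    *
* routine frame is generated by BW and differs between releases.      *
*---------------------------------------------------------------------*
FORM startroutine
  TABLES   data_package STRUCTURE zbw_transfer_struc  " transfer structure (placeholder)
  CHANGING abort        TYPE sy-subrc.                " <> 0 aborts the load

  " Preliminary calculation: read the lookup data once and keep it in a
  " global table so the other routines can reuse it without further DB access.
  SELECT * FROM zexchange_rate INTO TABLE gt_rates.
  IF sy-subrc <> 0.
    abort = 4.
    RETURN.
  ENDIF.

  " Package-wide filtering: remove test documents from the data package.
  DELETE data_package WHERE doc_type = 'TEST'.

  abort = 0.
ENDFORM.
```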

Transfer/update routines: These are defined at the InfoObject level. They are similar to the start routine but are independent of the DataSource. We can use them to define global data and global checks.

An InfoCube uses an additive delta, but you will still be able to see all individual records in the InfoCube contents. This is because, if you choose to delete the current request, the records have to be rolled back to the prior status. If you build a query on the InfoCube, you will find that the data is actually summed up at query time. An ODS object, by contrast, will not hold duplicate records: for each key you will have only one record. The sketch below illustrates the difference.
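
The additive versus overwrite behaviour can be pictured with plain ABAP internal-table statements. This is a conceptual sketch only (the report name, structure, and values are invented, and this is not BW-generated code): COLLECT mimics the additive update of an InfoCube key figure, while MODIFY TABLE mimics the overwrite update of an ODS object.

```abap
REPORT zdemo_additive_vs_overwrite.

TYPES: BEGIN OF ty_sales,
         customer TYPE c LENGTH 10,              " characteristic (key)
         amount   TYPE p LENGTH 8 DECIMALS 2,    " key figure
       END OF ty_sales.

DATA: lt_cube TYPE STANDARD TABLE OF ty_sales,   " behaves like an InfoCube at query time
      lt_ods  TYPE SORTED TABLE OF ty_sales WITH UNIQUE KEY customer,  " behaves like an ODS
      ls_rec  TYPE ty_sales.

" First delta record for customer C100
ls_rec-customer = 'C100'.
ls_rec-amount   = '100.00'.
COLLECT ls_rec INTO lt_cube.        " additive: numeric fields are summed per key
INSERT  ls_rec INTO TABLE lt_ods.   " first record for this key

" Second delta record for the same customer
ls_rec-amount = '40.00'.
COLLECT ls_rec INTO lt_cube.        " lt_cube now holds C100 / 140.00 (summed)
MODIFY TABLE lt_ods FROM ls_rec.    " lt_ods now holds C100 / 40.00 (overwritten)
```

In a real InfoCube both delta rows remain stored and the summation happens only when a query aggregates them; COLLECT simply condenses that end result into one line for the sake of the illustration.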

Star schema: Only characteristics of the dimension tables can be used to access facts. No structured drill downs can be created. Support for many languages is difficult.

Extended star schema: Master data tables and their associated fields (attributes), external hierarchy tables for structured access to data, and text tables with extensive multilingual descriptions are all supported; they are linked to the InfoCube via SIDs.
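
A rough way to picture the SID indirection (purely illustrative types and field names; the real SID, dimension, attribute, and text tables are generated by BW): the fact row carries a dimension key, the dimension row carries the SID, and the SID points to the master-data record that holds the attribute or text.

```abap
* Illustrative sketch of extended-star-schema navigation. All names are
* made up; BW generates the real dimension, SID, attribute and text tables.
TYPES: BEGIN OF ty_fact,
         dim_customer TYPE i,                    " dimension key stored in the fact table
         revenue      TYPE p LENGTH 8 DECIMALS 2,
       END OF ty_fact,
       BEGIN OF ty_dim,
         dim_customer TYPE i,
         sid_customer TYPE i,                    " SID: surrogate key of the characteristic
       END OF ty_dim,
       BEGIN OF ty_master,
         sid_customer TYPE i,
         customer     TYPE c LENGTH 10,
         country      TYPE c LENGTH 3,           " attribute kept outside the cube
       END OF ty_master.

DATA: lt_fact   TYPE STANDARD TABLE OF ty_fact,
      lt_dim    TYPE STANDARD TABLE OF ty_dim,
      lt_master TYPE STANDARD TABLE OF ty_master,
      ls_fact   TYPE ty_fact,
      ls_dim    TYPE ty_dim,
      ls_master TYPE ty_master.

" Query navigation: fact row -> dimension row -> SID -> master-data attribute
LOOP AT lt_fact INTO ls_fact.
  READ TABLE lt_dim    INTO ls_dim    WITH KEY dim_customer = ls_fact-dim_customer.
  READ TABLE lt_master INTO ls_master WITH KEY sid_customer = ls_dim-sid_customer.
  " ls_master-country is now available for drill-down or display without
  " being stored redundantly in the fact table.
ENDLOOP.
```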

An ODS (Operational Data Store) object is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.

An attribute change run is generally used when there is a change in the master data. It is used for realignment of the master data: after master data has been loaded, the change run activates the changes and adjusts the SIDs, so that there are no problems when transaction data is loaded into the data targets. In detail, the hierarchy/attribute change run, which activates hierarchy and attribute changes and adjusts the corresponding aggregates, is divided into four phases:

  1. Find all affected aggregates.
  2. Set up all affected aggregates again and write the result to a new aggregate table.
  3. Activate the attributes and hierarchies.
  4. Rename the new aggregate table. While renaming, it is not possible to execute queries.

In some databases that cannot rename indexes, the indexes are also created in this phase.

It is a table of IDoc control parameters for the source system. This table contains the details of the data transfer, such as the source system of the data, the data packet size, and the maximum number of lines in a data packet. The data packet size can be changed through the control parameters option in SBIW, i.e., the contents of this table can be changed.

  • Assign an InfoObject - direct transfer, no transformation.
  • Assign a constant - e.g., if you are loading data for a specific country from a flat file, you can set the country as a constant and assign the value explicitly.
  • ABAP routine - e.g., if you need to do some complex string manipulation. Suppose you receive a flat file of legacy data in which the cost center sits in a field that has to be "massaged" before it can be loaded; in this case an ABAP routine is most appropriate (see the sketch after this list).
  • Formula - use a formula for simple calculations, e.g., to convert all lowercase characters to uppercase, use the TOUPPER formula. The formula builder helps you put your formulas together.
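
Roughly, the body of such an ABAP transfer routine could look like the following sketch. The routine name, the COSTCENTER component, and the parameter list are placeholders: BW generates the real routine frame, and its exact interface depends on the release.

```abap
*---------------------------------------------------------------------*
* Hedged sketch of a transfer routine body for a cost-center field.   *
* The routine name, the TRAN_STRUCTURE component COSTCENTER and the   *
* parameter list are placeholders; the real interface is generated by *
* BW and varies by release.                                           *
*---------------------------------------------------------------------*
FORM compute_costcenter
  USING    tran_structure TYPE any            " transfer-structure record
  CHANGING result         TYPE c              " value passed on to 0COSTCENTER
           returncode     TYPE sy-subrc.      " <> 0 skips the record

  DATA lv_costcenter TYPE c LENGTH 10.
  FIELD-SYMBOLS <lv_source> TYPE any.

  " Read the legacy cost-center field out of the transfer structure
  ASSIGN COMPONENT 'COSTCENTER' OF STRUCTURE tran_structure TO <lv_source>.
  IF sy-subrc <> 0.
    returncode = 4.                            " field missing: skip the record
    RETURN.
  ENDIF.

  lv_costcenter = <lv_source>.
  CONDENSE lv_costcenter NO-GAPS.              " strip embedded blanks
  TRANSLATE lv_costcenter TO UPPER CASE.       " normalise case

  " Add leading zeros so the value matches the internal cost-center format
  CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
    EXPORTING
      input  = lv_costcenter
    IMPORTING
      output = lv_costcenter.

  result     = lv_costcenter.
  returncode = 0.
ENDFORM.
```

CONVERSION_EXIT_ALPHA_INPUT is the standard function module for adding leading zeros, which is the typical "massaging" needed when a legacy cost-center value arrives without them.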

 

  • Direct Delta
  • Queued Delta
  • Unserialized V3 Update

In R/3, the record mode determines this, as can be seen in the RODELTAM table, i.e., whether the respective DataSource delivers a new status for changed records or an additive delta. Based on this, you need to select the appropriate update type for the data target in BW. For example, an ODS object supports additive as well as overwrite updates. Depending on which DataSource is updating the ODS and the record mode supported by that DataSource, you need to make the right selection in BW.

The transaction data gets loaded and the master data fields remain blank.

An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form. Aggregates make it possible to access InfoCube data quickly in reporting. Aggregates can be used in the following cases:

  • To speed up the execution and navigation of a specific query.
  • When attributes are used often in queries.
  • To speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.

  • Transfer data and Metadata from SAP Systems
  • Transfer data from XML files
  • Transfer data between BW data targets or from one BW system to another (Data Marts)

When requests loaded into ODS object are neither required for delta update nor for initialization, they can be deleted. If delta initialization for update exists in connected data targets, the requests have to be updated first before the data can be deleted.

Activation of objects enables them to be executed, in other words, used elsewhere for different purposes. Unless an object is activated, it cannot be used.

Depending on the type of report, the data is stored in an InfoCube or an ODS object. BW is used to store high volumes of data and to provide fast reporting. An InfoCube stores data multidimensionally: master data and transaction data are held as per the extended star schema using SIDs, and reporting on it is fast.

An ODS object stores data in more detail, using a structure of transparent tables. Reporting on it will be slower. An ODS is better suited for RRI (report-to-report interface).

One would have to check whether the InfoCubes are used individually. If they are often used individually, it is better to go for a MultiProvider over several InfoCubes, since a query on an individual InfoCube will be faster than a query on one big InfoCube holding all the data.

BW has a performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.

In serialized V3 update, data is transferred from the LIS communication structures, using extract structures (e.g., MC02M_0HDR for purchasing document headers), into a central delta management area.

With unserialized V3 update mode, the extraction data continues to be written to the update tables using a V3 update module and is then read and processed by a collective update run (scheduled through LBWE).

No, it is not always necessary to create the partner profiles in a file-to-IDoc scenario if you are doing it only for testing purposes; otherwise, you have to configure the partner profile for the receiver client in XI.

DB Connect is a database connection interface. It is used to connect third-party (external) database systems to BW so that their data can be used for reporting.

If the extract structure is activated, then for any online transaction, or when the setup tables are filled, the data is posted to the extract structures depending on the update method selected. Activation marks the DataSource green; otherwise it is yellow. Activation/deactivation makes entries in the TMCEXACT table.

Initially we study the client's business processes: what kind of data flows through the system, its volume, the changes taking place in it, the analysis users perform on the data, what they expect in the future, and how the BW functionality can be used. Later we hold meetings with the business analysts and propose a data model based on the client's requirements. We then give a proof-of-concept demo in which we show how we would build a BW data warehouse for their system. Once we get approval, we start requirements gathering, build the model, and follow up with testing in QA.

Loading time varies depending on the workload on the BW side and on the source system side. Typically it takes about half an hour to load a million records.

Transfer mode as requested in the Scheduler of the BW. Not normally required.

  • Errors in loading data (ODS loading, InfoCube loading, delta loading, etc.)
  • Errors in activating BW objects or other objects.

Newly generated records are stored in the extraction queue, and from there a scheduled job pushes them to the delta queue.

Efficient reporting is one of the goals of using hierarchies. Easy drill-down paths can be built using hierarchies.