SAP BW .....all info @ one place
SAP BW relevant Information

Scenario : Distribute key figure values into formula elements and represent them in the report as separate columns.
Suppose the key figure has values from 0 to 500 and you want to make a distribution as follows:
0-50 formula 1
50-100 formula 2
100-200 formula 3
and so on--

Solution: Using boolean operators in BEx we can achieve this. In BEx a comparison returns 1 when true and 0 when false, so multiplying two conditions acts as a logical AND.
Create a new formula for each range using one of the conditions below:

Formula1 : ((keyfigure >= 0) * (keyfigure <= 50)) * keyfigure
Formula2 : ((keyfigure >= 51) * (keyfigure <= 100)) * keyfigure
Formula3 : ((keyfigure >= 101) * (keyfigure <= 200)) * keyfigure
Formula4 : ((keyfigure >= 201) * (keyfigure <= 300)) * keyfigure
and so on ....

More info: https://forums.sdn.sap.com/thread.jspa?threadID=635618&tstart=0

 

Sub : Do we need to delete data for enhancing a datasource?

There is no need to delete any data on the BW or R/3 side when enhancing a datasource, provided you are loading data into an ODS in overwrite mode.

Simply follow the below steps:

1. Add the new fields to the ODS and cubes and adjust the update rules.
2. Clear LBWQ (update queue) by running the V3 job.
3. Clear RSA7 (delta queue) by running an InfoPackage to pull the data into BW.
4. Move the datasource changes to Production, replicate, and activate the transfer rules.
5. Delete the data in the setup tables (LBWG).
6. Fill the setup tables for the historic data.
7. Initialize the datasource if required, without data transfer (zero initialization).
8. Pull the data from R/3 into the BW ODS in overwrite mode with the Repair Full option. Since we load in overwrite mode there is no problem in loading the historic data again.
9. Push the delta from the ODS onwards to the further data targets (ODS/cube).

More info @ https://www.sdn.sap.com/irj/sdn/thread?threadID=624282&messageID=4393416#4393416

 

Scenario : Create and populate a user exit variable with the current date as the default.

Step 1: Create a variable (e.g. CUR_DATE) on the required characteristic with processing type "Customer Exit" and check the "Ready for input" checkbox (if the variable is needed on the selection screen).

Step 2: Go to transaction CMOD, provide the appropriate project, choose Components and click Display.

Step 3: Double-click on exit "EXIT_SAPLRRS0_001"; you will see the include "ZXRSRU01". Double-click on the include.

Step 4: Sample code to populate the variable:

DATA: l_s_range TYPE rrrangesid.

CASE i_vnam.
  WHEN 'CUR_DATE'.
    IF i_step = 1.
      l_s_range-low  = sy-datum.
      l_s_range-sign = 'I'.
      l_s_range-opt  = 'EQ'.
      APPEND l_s_range TO e_t_range.
    ENDIF.
ENDCASE.


The following values are valid for I_STEP:
· I_STEP = 1
Call up takes place directly before variable entry
· I_STEP = 2
Call up takes place directly after variable entry. This step is only started up when the same variable could not be filled at I_STEP=1.
· I_STEP = 3
In this call up, you can check the values of the variables. Triggering an exception (RAISE) causes the variable screen to appear once more. Afterwards, I_STEP=2 is also called up again.
· I_STEP = 0
The enhancement is not called from the variable screen. The call up can come from the authorization check or from the Monitor.


More info @ http://help.sap.com/saphelp_bw320/helpdata/en/1d/ca10d858c2e949ba4a152c44f8128a/content.htm
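As an illustration of I_STEP = 3, the sketch below validates a pair of variables after variable entry. This is a minimal sketch: the variable names ZDATE_FROM/ZDATE_TO are made-up assumptions, and the message call and RAISE follow the common pattern seen in SDN examples, so verify them in your system.

IF i_step = 3.
  DATA: l_s_var TYPE rrs0_s_var_range,
        l_from  TYPE d,
        l_to    TYPE d.
  LOOP AT i_t_var_range INTO l_s_var.
    CASE l_s_var-vnam.
      WHEN 'ZDATE_FROM'. l_from = l_s_var-low. " hypothetical variable
      WHEN 'ZDATE_TO'.   l_to   = l_s_var-low. " hypothetical variable
    ENDCASE.
  ENDLOOP.
  IF l_from > l_to.
    " Message plus RAISE sends the user back to the variable screen
    CALL FUNCTION 'RRMS_MESSAGE_HANDLING'
      EXPORTING
        i_class  = 'RSBBS'
        i_type   = 'E'
        i_number = '000'
        i_msgv1  = 'From date must not be after to date'.
    RAISE again.
  ENDIF.
ENDIF.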

 

Sometimes a requirement comes up where we need to create multiple records from a single record...

Scenario : We are getting data at month level and want to distribute it to week or day level.

If we are loading into a cube, we can use the time distribution option to split the records;
but if we are loading into an ODS, the time distribution option is not available. Also, for distribution on anything other than a time characteristic, we need to write code to split the records.

Sample code: this code generates multiple records based on the number in the key figure NONCNFRMTY.

*** Data declaration
*** lt_data_package is an internal table with the same structure as DATA_PACKAGE
DATA: lt_data_package LIKE data_package OCCURS 0 WITH HEADER LINE,
      l_noncnfrmty    LIKE data_package-noncnfrmty.

LOOP AT data_package.
  l_noncnfrmty = data_package-noncnfrmty.
  DO l_noncnfrmty TIMES.
    data_package-noncnfrmty = 1.
    APPEND data_package TO lt_data_package.
  ENDDO.
ENDLOOP.

*** Clear DATA_PACKAGE contents
REFRESH data_package.
*** Push the contents of lt_data_package back into DATA_PACKAGE
APPEND LINES OF lt_data_package TO data_package.

More info @ https://www.sdn.sap.com/irj/sdn/thread?threadID=610540&tstart=0

 

-->> For some questions no answers are maintained; you can also answer questions as a "Comment" (end of post) by referring to the question no.... don't forget to drop your comments .. :-)


1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating
B) How many queries have you developed? :
C) How many reports have you written?
D) How many workbooks have you developed?
E) Experience with jump targets (OLTP, use jump target)
F) Describe experience with BW-compatible ETL tools (e.g. Ascential)


2) Describe your experience with 3rd party report tools (Crystal Decisions, Business Objects a plus)

3) Describe your experience with the design and implementation of standard & custom InfoCubes.
1. How many InfoCubes have you implemented from start to end by yourself (not with a team)?
2. Of these cubes, how many characteristics (including attributes) did the largest one have?
3. How much customization was done on the InfoCubes you implemented?

4) Describe your experience with requirements definition/gathering.

5) What experience have you had creating Functional and Technical specifications?

6) Describe any testing experience you have:

7) Describe your experience with BW extractors
1. How many standard BW extractors have you implemented?
2. How many custom BW extractors have you implemented?

8) Describe how you have used Excel as a complement to BEx
A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting)

9) Describe experience with ABAP

10) Describe any hands on experience with ASAP Methodology.

11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe that experience.

12) What is partitioning and what are the benefits of partitioning in an InfoCube?
A) Partitioning is the method of dividing a table (either column wise or row wise) based on the fields available which would enable a quick reference for the intended values of the fields in the table. By partitioning an infocube, the reporting performance is enhanced because it is easier to search in smaller tables. Also table maintenance becomes easier.

13) What does Rollup do?
A) Rollup creates aggregates in an infocube whenever new data is loaded.

14) What are the inputs for an infoset?
A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).

15) What internally happens when BW objects like Info Object, Info Cube or ODS are created and activated?
A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.

16) What is the maximum number of key fields that you can have in an ODS object?
A) 16.

17) What is the specific advantage of LO extraction over LIS extraction?
A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome; in LO, only one delta queue is used for delta management.

18) What is the importance of 0REQUID?
A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between the records of different load requests.

19) Can you add programs in the scheduler?
A) Yes. Through event handling.

20) What is the importance of the table ROIDOCPRMS?
A) It is the IDoc parameter table of the source system. This table contains the details of the data transfer, like the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in SBIW, i.e., the contents of this table can be changed.

21) What is the importance of 'start routine' in update rules?
A) A Start routine is a user exit that can be executed before the update rule starts to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
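A minimal sketch of that pattern in a BW 3.x update rule start routine: the start routine buffers master data once per data package into a global table that the individual routines can then read. The 0MATERIAL attribute table and the field names are illustrative assumptions.

* Global area of the update rules (visible to all routines)
DATA: BEGIN OF g_s_mat,
        material   TYPE /bi0/oimaterial,
        matl_group TYPE /bi0/oimatl_group,
      END OF g_s_mat,
      g_t_mat LIKE STANDARD TABLE OF g_s_mat.

* Start routine: one database access per data package
IF NOT data_package[] IS INITIAL.
  SELECT material matl_group FROM /bi0/pmaterial
    INTO CORRESPONDING FIELDS OF TABLE g_t_mat
    FOR ALL ENTRIES IN data_package
    WHERE material = data_package-material
      AND objvers  = 'A'.
ENDIF.
* The update routines can now READ TABLE g_t_mat ... WITH KEY
* material = ... instead of issuing a SELECT per record.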

22) When is IDOC data transfer used?
A) IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems, based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDocs are not used when loading data into the PSA, since the data there is more detailed. IDoc transfer is used when the record size is less than 1000 bytes.

23) What is partitioning characteristic in CO-PA used for?
A) For easier parallel search and load of data.

24) What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?
A) BW has a better performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.

25) What is the function of BW statistics cube?
A) BW statistics cube contains the data related to the reporting performance and the data loads of all the InfoCubes in the BW system.

26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
A) No.

27) What is the function of 'selective deletion' tab in the manage->contents of an infocube?
A) It allows us to select a particular value of a particular field and delete its contents.

28) When we collapse an infocube, is the consolidated data stored in the same infocube or is it stored in the new infocube?
A) Data is stored in the same cube.

29) What is the effect of aggregation on the performance? Are there any negative effects on the performance?
A) Aggregation improves performance in reporting. On the negative side, aggregates have to be rolled up after each data load and adjusted after master data changes, which adds overhead to loading and maintenance.

30) What happens when you load transaction data without loading master data?
A) The transaction data gets loaded and the master data fields remain blank.

31) When given a choice between a single infocube and multiple InfoCubes with a multiprovider, what factors does one need to consider before making a decision?
A) One would have to see if the InfoCubes are used individually. If these cubes are often used individually, then it is better to go for a multiprovider with many cubes since the reporting would be faster for an individual cube query rather than for a big cube with lot of data.

32) How many hierarchy levels can be created for a characteristic info object?
A) Maximum of 98 levels.

33) What is open hub service?
A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, you can ensure controlled distribution across several systems. The central object for the export of data is the InfoSpoke. Using this, you define the object from which the data comes and the target into which it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. The distribution of data becomes transparent through central monitoring of the distribution status in the BW system.

34) What is the function of 'reconstruction' tab in an infocube?
A) It reconstructs the deleted requests from the infocube. If a request has been deleted and later someone wants the data records of that request to be added to the infocube, one can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the infocube.

35) What are secondary indexes with respect to InfoCubes?
A) Index created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.

36) What is DB connect and where is it used?
A) DB Connect is a database connection from BW to an external database management system. It is used to extract data from the tables or views of external databases directly into BW.

37) Can we extract hierarchies from R/3 for CO-PA?
A) No, we cannot. "No hierarchies in CO-PA".

38) Explain ‘field name for partitioning’ in CO-PA
A) CO-PA partitioning is used to decrease the package size (e.g., by company code).

39) What is V3 update method ?
A) It is a program in the R/3 source system that schedules batch jobs to update the extract structure data to the datasource collectively.

40) Differences between serialized and non-serialized V3 updates

41) What is the common method of finding the tables used in any R/3 extraction
A) By using the transaction LISTSCHEMA we can navigate the tables.

42) Differences between table view and infoset query
A) An InfoSet Query is a query using flat tables.

43) How to load data from one InfoCube to another InfoCube ?
A) Through the data mart interface, data can be loaded from one InfoCube to another InfoCube.

44) What is the significance of setup tables in LO extractions ?
A) Setup tables hold the historical data for the initialization; init and full loads in LO read from the setup tables instead of directly from the application tables.

45) Difference between extract structure and datasource
A) In the datasource we define the data coming from the source system, whereas the extract structure contains the layout of the datasource fields, on which extraction rules and transfer rules can be defined.
B) The extract structure is a record layout of InfoObjects.
C) The extract structure is created in the source system.

46) What happens internally when Delta is Initialized

47) What is referential integrity mechanism ?
A) Referential integrity is the property that guarantees that values from one column depend on values from another column. This property is enforced through integrity constraints.

48) What is activation of extract structure in LO ?

49) What is the difference between Info IDoc and data IDoc ?

50) What is D-Management in LO ?
A) It is a method used in delta update methods, which is based on the change log in LO.

51) What is entity relationship model in data modeling ?
A) An ERD (Entity Relationship Diagram) that can be used to generate a physical database.
B) It is a high-level data model.
C) It is a schematic that shows all the entities within the scope of integration and the direct relationships between them.

52) What is the difference between direct delta and queued delta updates in LO ?

53) What is non-cumulative infocube ?

54) What kind of tools are available to monitor the overall Query Performance?

55) How can we have a delta update for generic data source ?

56) What are the methods available to debug the load failures ?

57) What is datamining concept ?
A) The process of finding hidden patterns and relationships in the data.
B) With typical data analysis requirements fulfilled by data warehouses, business users have an idea of what information they want to see.
C) Some opportunities embody data discovery requirements, where the business user wants to correlate sets of data to determine anomalies or patterns in the data.

58) What is scoring ?

59) Usage of Geo-coordinates ?
A) The geo-relevant data can be displayed and evaluated on a map with the help of the BEx Map.

60) What are the different query areas related to Infoset ?
A) Jump queries and ODS queries are the areas related to InfoSets.

61) How does time dependency work for BW objects ?
A) Time-dependent attributes have values that are valid for a specific range of dates (i.e., a validity period).

62) What is I_ISOURCE?
A) Name of the InfoSource.

63) What is I_T_FIELDS?
A) List of the transfer structure fields. Only these fields are actually filled in the data table and can be sensibly addressed in the program.

64) What is C_T_DATA?
A) Table with the data received from the API in the format of the source structure entered in table ROIS (field ROIS-STRUCTURE).

65) What is I_UPDMODE?
A) Transfer mode as requested in the Scheduler of the Business Information Warehouse. Not normally required.

66) What is I_T_SELECT?
A) Table with the selection criteria stored in the Scheduler of the SAP Business Information Warehouse. This is not normally required.

67) What is Serialized V3 Update?
A) This is the normal update method. Here, document data is collected in the order it was created and transferred into BW as a batch job. The transfer sequence is not the same as the order in which the data was created in all scenarios.

68) What is Direct Delta?
A) In this method, extraction data is transferred directly from document postings into the BW delta queue. The transfer sequence is the same as the order in which the data was created.

69) What is Queued Delta?
A) In this method, extraction data from document postings is collected in an extraction queue, from which a periodic collective run is used to transfer the data into the BW delta queue. The transfer sequence is the same as the order in which the data was created.

70) What is Unserialized V3 Update?
A) This method is almost identical to the serialized update method. The only difference is that the order of document data in the BW delta queue does not have to be the same as the order in which it was posted. We only recommend this method when the order in which the data is transferred is not important, a consequence of the data target design in the BW.

71) What are the different Update Modes?
A) Serialized V3 Update
B) Direct Delta
C) Queued Delta
D) Unserialized V3 Update

72) What are the different ways of Data Transfer?
A) Complete Update: All the data from the information structure is transferred according to the selection criteria defined in the scheduler in SAP BW.
B) Delta Update: Only the data that has been changed or is new since the last update is transferred. To use this option, you must activate the delta update.

73) What is the major importance of the usage of an ODS Object?
A) The ODS is majorly used as a staging area.

74) What is the benefit of using BW reporting over SAP Reporting?
A) Performance
B) Data analysis
C) Better front-end reporting
D) Ability to pull data from SAP and non-SAP sources

75) Differences between star and extended star schema ?
A) Star schema: Only the characteristics of the dimension tables can be used to access facts. No structured drill-downs can be created. Support for many languages is difficult.
B) Extended star schema: Master data tables and their associated fields (attributes). External hierarchy tables for structured access to data. Text tables with extensive multilingual descriptions.

76) What are the new features of SAP BW 3.0B?

77) What are the new features of the R/3 Plug-In PI 2002.1?

78) What are the major errors in BW and R/3 pertaining to BW?
A) Errors in loading data (ODS loading, cube loading, delta loading, etc.)
B) Errors in activating BW or other objects
C) Issues in delta loadings

79) When are tables created in BW?
A) When the objects are activated, the tables are created. The location depends on the Basis installation.

80) What is a start routine and return table, how do they synchronize with each other?
A) The start routine is used in update rules; a return table is used in an update routine to return multiple records instead of a single value.

81) What is the difference between start routine and update routine, when, how and why are they called?
A) The start routine can access the whole data package before the update rules run; update routines cannot, as they process one record at a time.

82) What are the different Non - R/3 systems that BW supports?

83) In a general project, how many InfoCubes, InfoObjects, InfoSources, Multi-Providers can you expect?
A) It depends on the size of the project and its business goals; it differs from project to project.

84) What does an M table signify?
A) Master table.

85) What does an F table signify?
A) Fact table

86) What is data warehousing?
A) Data Warehousing is a concept in which the data is stored and analysis is performed over it.

87) What is process chain and how you used it?
A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
B) In one of our scenarios we wanted to upload the wholesale price InfoObject, which holds the wholesale price for all materials, and then load transaction data. While loading the transaction data, to populate the wholesale price there was a lookup in the update rule on this InfoObject's master data table. This dependency of first uploading the master data and then the transaction data was handled through the process chain.

88) What are Remotecubes and how you accessed and used it in your project?
A) A RemoteCube is an InfoCube whose transaction data is not managed in the Business Information Warehouse but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system.
B) Using a RemoteCube, you can carry out reporting on data in external systems without having to physically store transaction data in BW. You can, for example, include an external system from market data providers using a RemoteCube.

89) Hope you have worked on enhancements and on which user exit you worked, can you explain?
A) Extended the datasources 0MATERIAL_ATTR, 0PLANT_ATTR and 0MAT_PLANT_ATTR for master data loads from R/3 to BW. Edited user exit EXIT_SAPLRSAP_002 to populate the master data for the extended fields, and EXIT_SAPLRSAP_001 for transaction data extracted from R/3 to BW.
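A hedged sketch of what such a transaction data enhancement can look like in EXIT_SAPLRSAP_001 (include ZXRSAU01). The datasource 2LIS_02_ITM, the appended field ZZWERKS and the EKPO lookup are illustrative assumptions, not the exact project code.

DATA: l_s_itm TYPE mc02m_0itm,
      l_tabix LIKE sy-tabix.

CASE i_datasource.
  WHEN '2LIS_02_ITM'.
    LOOP AT c_t_data INTO l_s_itm.
      l_tabix = sy-tabix.
      " Fill the appended field from the purchase order item table
      SELECT SINGLE werks FROM ekpo INTO l_s_itm-zzwerks
        WHERE ebeln = l_s_itm-ebeln
          AND ebelp = l_s_itm-ebelp.
      MODIFY c_t_data FROM l_s_itm INDEX l_tabix.
    ENDLOOP.
ENDCASE.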

90) What is the t-code for generic extractor?
A) RSO2

91) What is infoset query?
A) An InfoSet is a special kind of InfoProvider. It is used for reporting by joining ODS objects and InfoObjects. InfoSets have been used in the Business Information Warehouse for InfoObjects (master data), ODS objects, and joins of these objects. The InfoSet Query can be used to carry out tabular (flat) reporting on these InfoSets.

92) What is the purpose of aggregates?
A) Aggregates are like indexes on database tables. They are rolled-up data on a few characteristics on which reports run frequently. They are created to improve reporting performance. If a report is used very extensively and its performance is slow, we can create an aggregate on the characteristics used in the report, so that when the report runs, the OLAP processor selects data from the aggregate instead of the cube.

93) How you did Datamodeling in your project? Explain
A) We collected requirements from the users and created an HLD (High Level Design) document, and we analyzed it to find the sources for the data. Then the data models were drawn up, indicating data flows and lookups. While designing the data model, consideration was given to reusing existing objects (like ODS objects and cubes), not storing redundant data, the volume of data, and batch dependencies.

94) As you said you have worked on Cubes and ODS,Which one is better suited for reporting? Explain and what are the drawbacks n benefits of each one
A) Cubes are best suited for reporting; queries run faster. In an ODS we can have only simple reports. If we query based on non-key fields (data fields) in an ODS, the report runs slower. But in an ODS we can overwrite non-key fields, whereas we cannot overwrite in a cube; this is one of the disadvantages of the cube.

95) What are the different cubes you worked in FI?
A) Please look at Business content cubes and BW documentation on them to answer this question.

96) What is delta upload? What is the use of delta upload? Data that has been changed or added is extractor or full data is extractor?
A) When transactional data is pulled from the R/3 system, instead of pulling all the data daily (a full load), we pull only the changed or newly added records, so the load on the system is much smaller. So wherever possible we go for a delta load rather than a full load.

97) What are hierarchies? Explain how you used in your project?
A) Hierarchies organize data in a structured way. For example, a BOM (bill of material) can be configured as a hierarchy.

98) What is t-code for CO-PA?
A) KEB0

99) What is SID? what is the impact in using SID?
A) In BW, information is stored as SIDs. SIDs are auto-generated numbers assigned to each characteristic value when it is uploaded. A search on numeric keys is always faster than one on alphanumeric values, hence SIDs are assigned to characteristic values.

100) What is Table partitioning? What are Return Tables?
A) If we have 0CALMONTH or 0FISCPER as a time characteristic, we can partition the fact table physically. Table partitioning has to be supported by the database: Oracle, Informix and IBM DB2/390 support it; SAP DB, Microsoft SQL Server and IBM DB2/400 do not. Table partitioning helps reports run faster, as only the relevant partitions are read.
B) In an update rule routine, if we want to return multiple records instead of a single value, we can use a return table.
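A minimal sketch of an update routine with a return table (BW 3.x style). RESULT_TABLE replaces RESULT, so one source record can produce several InfoCube records. The structure name ICUBE_VALUES comes from the generated routine template, and the plant values and even split are invented for illustration; check the skeleton your system generates.

DATA: l_s_cube LIKE icube_values.

* Start from the characteristic values derived by the other rules
l_s_cube = icube_values.

* Split the incoming quantity evenly across two plants (assumption)
l_s_cube-plant    = 'P001'.
l_s_cube-quantity = comm_structure-quantity / 2.
APPEND l_s_cube TO result_table.

l_s_cube-plant    = 'P002'.
APPEND l_s_cube TO result_table.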

101) What is the t-code for Query Monitor?
A) RSRT

102) Apart from R/3 ,which legacy db you used for extraction ?
A) We had a legacy system called CAM. The CAM system had open order information, which was a daily full load to the OM Schedule Line ODS. The CAM system was connected to R/3 through DB Connect.

103) What are the three ODS Objects table explain?
A) An ODS object has three tables, called New, Active and Change Log. As soon as new data comes into the ODS, it is stored in the New table. When the request is activated, the data is written to the Active table and the change is recorded in the Change Log.

104) Can you explain about Start routines how you used in your project give me an example?
A) The start routine is used for mass processing of records. In the start routine, all the records of the data package are available, so we can process them together. In one scenario we wanted to apply size percentages to forecast data: for example, if material M1 is forecast at, say, 100 units in May, then after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted to have 4 records in place of the single record coming in the data package. This was achieved in the start routine, as sketched below.
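A sketch of how that size split could be coded in the start routine, using the same explode-and-replace pattern as the NONCNFRMTY example earlier. The field names /BIC/ZSIZE and /BIC/ZFCQTY and the fixed percentages are assumptions taken from the example above.

DATA: lt_out LIKE data_package OCCURS 0 WITH HEADER LINE,
      l_qty  LIKE data_package-/bic/zfcqty.

LOOP AT data_package.
  l_qty = data_package-/bic/zfcqty.
  " One record per size, each carrying its share of the forecast
  data_package-/bic/zsize  = 'S'.
  data_package-/bic/zfcqty = l_qty * '0.2'.
  APPEND data_package TO lt_out.
  data_package-/bic/zsize  = 'M'.
  data_package-/bic/zfcqty = l_qty * '0.4'.
  APPEND data_package TO lt_out.
  data_package-/bic/zsize  = 'L'.
  data_package-/bic/zfcqty = l_qty * '0.2'.
  APPEND data_package TO lt_out.
  data_package-/bic/zsize  = 'XL'.
  data_package-/bic/zfcqty = l_qty * '0.2'.
  APPEND data_package TO lt_out.
ENDLOOP.

REFRESH data_package.
APPEND LINES OF lt_out TO data_package.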

105) In update rules for an infocube we can specify separate update rules for characteristics of each of the key figures. In which situations is the above used?
A) To be discussed(TBD).

106) Other than BW, what are the other ETL tools used for SAP R/3 in industry?
A) Informatica, ACTA, COGNOS, Business Objects are other ETL tools.

107) Does any other ERP software use BW for data warehousing.
A) NO.
108) What is the importance of hierarchies?
A) One can display the elements of characteristics in hierarchy form and evaluate query data for the individual hierarchy levels in the Business Explorer (in Web applications or in the BEx Analyzer).

109) Where is 0RECORDMODE infoobject used?
A) It is used in delta management. The ODS uses the 0RECORDMODE InfoObject for delta loads. 0RECORDMODE takes values such as X, D and R: in a delta load, X marks rows to be skipped (before images), D marks deletion and R marks a reverse image.

110) What is operating concern in CO-PA?
A) An organizational structure that combines controlling areas together in the same way as controlling areas group companies together.

111) Does all the characteristics present in ODS, are key fields.
A) No. An ODS object contains key fields (for example, document number/item) and data fields that can also contain character fields (for example, order status, customer).

112) What is the use BAPI, ALE?
A) BAPI and ALE are sets of programs which extract data from data sources. BW connects to SAP systems (R/3 or BW) and flat files via ALE, and to non-SAP systems via BAPI.

113) What is the importance of ‘Compounding’ of infoobjects?
A) A compound attribute differentiates a characteristic to make it uniquely identifiable. For example, similar products may be manufactured in more than one plant (Plant A: soap, paste, lotion; Plant B: soap, paste, lotion). In this case the products in Plant A and Plant B have to be made unique, so the characteristic is compounded with the plant.

114) Are there any limitations for BEx analyzer?
A) TBD

115) How does BEx analyzer connect to BW?
A) The BEx Analyzer is connected to the OLAP processor; OLE DB connectivity links the BEx Analyzer with BW.

116) What is field partitioning in CO-PA?
A) It internally allocates space in the database. If the needed data resides in one or a few partitions, then only those partitions will be selected and examined by the SQL statement, thereby significantly reducing the I/O volume.

117) Where to check the log for warning messages appearing in activation of transfer rules?
A) If transfer rules are not defined for Info objects, then traffic lights will not be green.

118) What are the advantages of reporting on an infocube to that of reporting on an ODS?
A) Query performance is better with an InfoCube. An InfoCube has a multidimensional model, whereas an ODS is a flat table. Aggregates and MultiProviders can be built on an InfoCube, which further enhances query performance; aggregates cannot be built on an ODS.

119) How does a navigational attribute differ from other attributes in terms of linking it with the infocube?
A) TBD

120) How does delta update mechanism work in ODS?
A) An ODS has three database tables: the New table, the Active table and the Change Log table. Newly loaded data first arrives in the New table. On activation, it is compared with the Active table, the changes (the delta) are recorded in the Change Log, and the data is written to the Active table. The Change Log then supplies the delta to the connected data targets.

121) What is time dependent master data?
A) Time-dependent master data is master data that changes over time. For example, assume sales person A works in the East zone until Jan 30th 2004, and then moves to the North zone from Jan 31st 2004. The master data for sales person A should therefore be assigned to a different zone depending on the time.

122) Can we load transaction data into infocube without loading the master data first?
A) yes.

123) What is difference between ‘saving’ and ‘activating’?
A) In BW, saving actually saves the defined structure so it can be retrieved whenever required.
B) Activating saves the structure and also generates the required tables and structures.

124) Why do we use only one client in BW?

126) What are the advantages of aggregates?
A) Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates serve, in a similar way to database indexes, to improve performance.

127) In which situations we cannot use aggregates?
A) If the data provider is an ODS.

128) Aggregates are recommended in the following cases,
A) The execution and navigation of query data leads to delays with a group of queries.
B) You want to speed up the execution and navigation of a specific query.
C) You often use attributes in queries.
D) You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.

129) What does delta initialization do?
A) It makes BW expect the data from the source after a full update; it initializes the delta update mechanism for that datasource.

130) What is difference between delta and pseudo delta?
A) Some data targets and modules have a delta update feature; for example, ODS objects and CO-PA are delta capable. After the first accumulation of data, BW expects the data in deltas for these data targets. When a data target does not have this feature, it can be made delta capable by using an ODS as an intermediate data target; this is a pseudo delta.

131) What are the Third Normal Form and its comparison with Star Schema?
A) Third normal form is a normalized form of storing data in a relational database. It eliminates functional dependencies on non-key fields by putting them in a separate table. At this stage, all non-key fields are dependent on the key, the whole key and nothing but the key.
B) Star schema is a denormalized form of storing data, which paves the path for storing data in a multi-dimensional model.

132) What is ASAP methodology
A) ASAP is a standard methodology for efficiently implementing and continually optimizing the SAP software. ASAP supports the implementation of the R/3 System and of mySAP.com Components, and can also be used for upgrade projects. It provides a wide range of tools that helps in all stages of implementation project - from project planning to the continual improvement of the SAP System. The two key tools in ASAP are: The Implementation Assistant, which contains the ASAP Roadmap, and provides a structured framework for your implementation, optimization or upgrade project. The Question & Answer database (Q&Adb), which allows you to set your project scope and generate your Business Blueprint using the SAP Reference Structure as a basis.

133) Significance of infoset.
A) An InfoSet describes data sources that are defined, as a rule, as joins of ODS objects or InfoObjects. An InfoSet is a semantic view of data sources and is not a physical data target in itself. One can define reports in the BEx Query Designer using activated InfoSets.

134) Differences between multicube and remote cube.
A) A Multicube is a type of Info Provider that combines data from a number of Info Providers and makes them available as a whole to reporting.B) A Remote Cube is an InfoCube whose transaction data is not managed in the Business Information Warehouse but externally. Only the structure of the Remote Cube is defined in BW. The data is read for reporting using a BAPI from another system.

135) Life period of data in "Change Log" of an ODS.
A) The data of Change Log can be scheduled to be deleted periodically. Usually the Data is removed after it has been updated into the data targets.

136) Drilldown method of Infocube to ODS.
A) A multi provider can be designed to include the ODS and the Infocube in question. This gives a chance to drilldown from Infocube to the ODS.

137) What are "inbound ODS" and "consistent ODS"?
A) In an inbound ODS object, the data is saved in the same form as it was delivered from the source system. This ODS type can be used to report on the original data as it comes from the source system.
B) In a consistent ODS object, data is stored in granular form and consolidated. This consolidated data on a document level creates the basis for further processing in BW.

138) Life period of data in PSA.
A) Data in PSA is deleted when one feels that there is no need for any use of it in future. There is a trade off between wastage of space and usage as a back up for data in the source system.

139) How to load data from one infocube to another ?
A) A datasource is created from the InfoCube that is supposed to feed the other one. This can be done by right-clicking on the InfoCube and selecting "Export Data Source". Then a suitable InfoSource can be created for this datasource, and the intended target InfoCube can be fed from it.

140) What is "activation" of objects ?
A) Activation of objects enables them to be executed, in other words used elsewhere for different purposes. Unless an object is activated it cannot be used.

141) Are key figures navigable ?
A) No, key figures are not navigable.

142) What is transactional ODS?
A) A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions (active, delta, modified), whereas a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application.

143) Are SIDs static or dynamic?
A) SIDs are static.

144) Is data in Infocube editable?
A) No.

145) What are data-marts?
A) A data mart is also known as a local data warehouse. It is an implementation of a data warehouse with a restricted scope of content, with support for analytical processing and serving a single department, part of an organization, or a particular data analysis problem domain.

146) Which one is more denormalized; ODS or Infocube?
A) An InfoCube is more denormalized than an ODS.

147) Is CO-PA delta capable ?
A) Yes, CO-PA is delta capable.

148) What is the "replication of data source" process ?
A) Replication of data source enables the extract structure from the source system to be replicated in the target system.

149) Any quality checks available for inefficient cube designs ?
A) Huge Dimension tables make a cube inefficient.

150) Why not star-schema is implemented for ODS as well ?
A) Because ODS is meant to store a detailed document for quick perusal and help make short-term decisions.

151) Why do we need separate update rules for characteristics on each key figure?
A) It is dependent on the Business requirement.

152) Use of Hierarchies.
A) Efficient reporting is one of the targets of using hierarchies. Easy drilldown paths can be built using hierarchies.

153) What is "Referential Integrity"?
A) A feature provided by relational database management systems (RDBMS's) that prevents users or applications from entering inconsistent data. For example, suppose Table B has a foreign key that points to a field in Table A. Referential integrity would prevent you from adding a record to Table B that cannot be linked to Table A. In addition, the referential integrity rules might also specify that whenever you delete a record from Table A, any records in Table B that are linked to the deleted record will also be deleted. This is called cascading delete. Finally, the referential integrity rules could specify that whenever you modify the value of a linked field in Table A, all records in Table B that are linked to it will also be modified accordingly. This is called cascading update.

154) What is a Transactional Cube and when is it preferred?
A) Transactional InfoCubes differ from Basic InfoCubes in their ability to support parallel write accesses. Basic InfoCubes are technically optimized for read accesses to the detriment of write accesses. Transactional cubes are designed to meet the demands of SEM, where multiple users write simultaneously into a cube and data is read as soon as possible.

155) When is the data in Change Log table of ODS deleted.
A) Deleting data from the change log for an ODS object is recommended if several requests, which are no longer required for the delta update and also are no longer used for an initialization from the change log, have already been loaded into the ODS object. If a delta initialization for the update exists in connected data targets, the requests have to be updated first before the respective data can be deleted in the change log.

156) On what occasions do we have different update rules for each of the Key Figures in an Info Cube and how would data be stored in such cases.
A) If we want to assign different values to a characteristic depending on each of the key figure values, we have different update rules. Say we have two key figures, cost and profit, and an entry for account type: depending on each key figure we can classify the account as high cost/low cost or high profit/low profit. If we have separate update rules for each of the key figures, there can be multiple rows in the InfoCube corresponding to each row in the transaction data, as sketched below.
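A sketch of such per-key-figure routines for a hypothetical account type characteristic ZACCTYPE; the threshold and the field names COST and PROFIT are invented for illustration.

* Routine for ZACCTYPE in the update rule of the COST key figure
IF comm_structure-cost > 10000.
  result = 'HIGH_COST'.
ELSE.
  result = 'LOW_COST'.
ENDIF.

* Routine for ZACCTYPE in the update rule of the PROFIT key figure
IF comm_structure-profit > 10000.
  result = 'HIGH_PROFIT'.
ELSE.
  result = 'LOW_PROFIT'.
ENDIF.

Because each key figure derives the characteristic differently, one transaction row can yield one fact row per key figure, which is the multiple-rows behavior described above.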

157) When are "Hierarchies" used in an info object and how do they differ from the hierarchies available in BEx while querying.
A) Hierarchies are used for modeling hierarchical structures. Hierarchies defined on InfoObjects have to be loaded like master data, whereas hierarchies in BEx are created at query time. Further, in BEx we have the flexibility of exchanging the nodes and leaves.

158) What kinds of data fields are used in Line Items, Transactional Figures and Cost of Sales Ledger?
A) Check the respective tables in R/3.

159) What are Aggregates and when are they used?
A) An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form in the database. Aggregates make it possible to access InfoCube data quickly in reporting. Aggregates can be used in the following cases:
1. The execution and navigation of query data leads to delays with a group of queries.
2. You want to speed up the execution and navigation of a specific query.
3. You often use attributes in queries.
4. You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.

160) How is the data of different modules stored in R/3?
A) Data is stored in multiple tables in R/3 based on the ER (entity relationship) model to prevent redundant storage of data.

161) In what cases to we transfer data from one info cube to another.
A) Modifications can't be made to an InfoCube if data is present in it. If we want to modify an InfoCube and no backup of the data exists, we can design another InfoCube with the specified parameters and load the data from the old InfoCube.

162) How often do we have a Multi-layered structure in ODS stage and in what cases.
A) Multi-layered structure in ODS stage is used to consolidate data from different data sources.

163) How is data extracted from systems other than R/3 and Flat files?
A) Data is extracted from systems other than R/3 and flat files using staging BAPIs.

164) When do TRFC and iDOC errors occur?
A) An intermediate document (IDoc) is a container for exchanging data between R/3, R/2 and non-SAP systems. IDocs are sent in the communication layer by transactional Remote Function Call (tRFC) or by other file interfaces (for example, EDI). tRFC guarantees that the data is transferred once only. Was not able to find out when the errors occur.

165) On what occasions do the key figures become attributes of characteristics?
A) When we want to display that particular key figure as a display attribute in a report. Key figures can only be made display attributes of InfoObjects. Suppose we are reporting on the performance of each sales person: we can declare the salary of the sales person as an attribute. Further, key figures like net price (price per unit quantity or price per item), used as an attribute of a product, can be used to calculate key figures like total price (by multiplying net price with quantity using formulas).

166) Why is there a restriction of 16 Dim tables in an Info Cube and 16 key fields in an ODS.

167) On what factors does the loading time depend on?
A) Loading time depends on the workload on both the BW side and the source system side. It may also depend on the network connectivity.

168) How long does it take to load a million records into an info cube from an R/3 system?
A) Depending on work load on BW side and source system side loading time varies. Typically it takes half an hour to load a million records.

169) Will the loading time be same for the same amount of data for non-SAP systems like Flat files.
A) It might not be the same, it depends on the extraction programs used on the source system side.

170) Can you tell me about a situation when you implemented a Remote Cube.
A) A remote cube is used when we want to report on transactional data. In a remote cube, data is not stored on the BW side. It is ideally used when detailed data is required and we want to bypass loading the data into BW.

171) What is mySAP.com?
A) SAP solution to integrate all relevant business processes on the Internet. mySAP.com integrates business processes in SAP and non-SAP systems seamlessly, and provides a complete business environment for electronic commerce.

172) How is BW superior to other data warehousing tools (if it is superior)?
A) SAP BW provides good compatibility with other SAP products.

173) Can we just load the transaction data without loading the master data from a source system when we are sure we are not going to query on the master data.
A) Yes you can.

174) What is operating concern and partitioning in CO-PA.
A) An operating concern is the set of characteristics based on which we want to analyze the performance of the company. Partitioning is dividing the data into different datasets depending on certain characteristics; it enables parallel access to the data.

175) What is the difference between value fields and key figures in CO-PA.
A) Value fields comprise the data which CO-PA gets from various modules in R/3, whereas key figures are derived from these value fields.

176) How is the performance of an info cube measured?
A) Infocube performance can be measured based upon query response time.

177) What factors are used in measuring the performance of a query?
A) Query response time is used for measuring the performance of a query.

178) What is process chain and how you used it?
A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing you can automate the processes listed in RSPC. I have a real time example in the attachment.

179) What are Remote cubes and how you accessed and used it in your project?
A) It's an InfoProvider which does not physically store data but is used for non-trivial reporting. I have not used one, but an example would be: if you want to compare data consistency between R/3 and BW, you can generate a report on a remote cube and compare it with a report in BW.

180) Hope you have worked on enhancements and on which user exit you worked can you explain?

181) What is the t-code for generic extractor?
A) RSO2

182) What is infoset query?
A) An InfoSet is an InfoProvider which does not store data; it is only a view and needs to be built as a join. In Treasury we built the currency exchange report: the report is not used often, so the data is stored in an ODS, and we built an InfoSet to join it with data from another object and built the report on that. Once you flag an ODS as reportable and start running queries on it, it is no longer accessed as a plain flat table but follows a star-schema-like access, and reporting becomes slow.

183) What is the purpose of aggregates?
A) They are used to store frequently reported data. Once you fill an aggregate and activate it, BEx checks for aggregates before running a query and brings back the data much faster; basically, query performance improves a lot.

184) How you did Data modeling in your project? Explain
A) Initially we study the business process of the client: what kind of data flows through the system, its volume, the changes taking place in it, the analysis the users perform on the data, what they expect in the future, and how we can use the BW functionality. Later we have meetings with the business analysts and propose the data model based on the client. Then we give a proof-of-concept demo, wherein we show how we are going to build a BW data warehouse for their system. Once you get approval, requirements gathering and building of the model start, and testing follows in QA.

185) As you said you have worked on cubes and ODS, which one is better suited for reporting? Explain, and what are the drawbacks and benefits of each one.
A) Depending on what we want to report, we store the data in a cube or an ODS. Generally BW is used to store high volumes of data with faster reporting, and for that the InfoCube is used. We store the master data in separate tables, and the transaction data, which is basically numbers, is stored in the cube. So the property of indexing works here, and reporting is fast as we have only numerics in the cube.
B) When you load master data first, SIDs are created for that data. When you load the transaction data, it looks up the master data SIDs and gets linked using the dimensions. This is what you have in a cube, so reporting is fast, as both of them are numbers.
C) In an ODS we store data at a more detailed level, utilizing its flat structure. Reporting on this will be slow, for the reason given in B.

186) What are the different cubes you worked in FI?

187) What is delta upload? What is the use of delta upload? Is only the data that has been changed or added extracted, or is the full data extracted?
A) To load real-time data and make accurate decisions we use delta upload.
A) To load real time data and make accurate decisions we use delta upload.

188) What are hierarchies? Explain how you used them in your project?

189) What is t-code for CO-PA?
A) KEB0
190) What is SID? What is the impact of using SID?

191) What is Table partitioning? What are Return Tables?

192) What is the t-code for Query Monitor?
A) RSRT

193) Apart from R/3, which legacy DB did you use for extraction ?
A) Access, Informatica

194) What are the three ODS Objects table explain?

195) Can you explain about Start routines how you used in your project ,give me an example?

Source : http://www.sapprofessionals.org/

 

Many of the problems associated with the basic star schema are resolved with the BW extended star schema. With the extended star schema, attributes are removed from the dimensions and placed outside the InfoCube in master data tables.


The BW extended star schema differs from the basic star schema. It is divided into a solution-dependent part (the InfoCube) and a solution-independent part (attribute, text and hierarchy tables) which is shared among InfoCubes. In BW, attributes located in the dimensions are called characteristics, while attributes located in a master data table of a characteristic are called attributes of the characteristic. When designing a solution, it is a great challenge to decide whether an attribute should reside in a dimension table (and thus in the InfoCube), in a master data table, or even in both. Data is loaded separately into the master data (attribute), text and hierarchy tables. The SID table provides the link between the master data and the dimension tables.
The fact table and the relevant dimension tables of an InfoCube are connected with one another relationally using the dimension keys. The dimension key is provided by the system per characteristic combination in a dimension table.
With the execution of a query the OLAP processor checks the dimension tables of the InfoCube to be evaluated for the characteristic combinations required in the selection.
The dimension keys determined in this way point the way to the information in the fact table. Dimension tables consist of a maximum of 248 characteristics. The Time dimension holds the time characteristics needed for analysis. The Unit dimension contains the unit of measure and currency characteristics needed to describe the key figures properly. The Data Packet dimension is used to identify discrete packets of information loaded into the InfoCube. In this way, packets can be deleted, reloaded or maintained individually.

 

Subject : To differentiate different flat files or populate the file source in data targets.

Hi,
I need to load 2 similar flat files to a single ODS. The only change is that, during the load of the first file, File1, I want to set a field, File Type, as File1; and during the load of the second file, File2, I want to set File Type as File2.

One update rule which loads File1 already exists. I wanted to create the second update rule (for File2), but I encountered problems because I kept receiving a message that the update rule already exists for the ODS.

I went to the InfoSource tree and, under the InfoSource for File1, I could see the transfer rule; at this point, I was able to create a second transfer rule where I was able to set the second constant while loading File2, as File Type File2.

What is the effect of creating 2 transfer rules for the InfoSource, as against creating 2 update rules for the ODS, which is what I originally wanted to do?

Thanks
Amanda Baah

Solutions:

Hi Amanda,

If both file formats are the same, it is better to populate File1 or File2 dynamically rather than as a constant. You are not supposed to create new transfer rules and a new datasource for every new flat file. Instead of a constant, write a routine for File Type: based on the file name, simply pass File1 or File2 in the routine. Or use the function module "BAPI_IPAK_GETDETAIL" to get the InfoPackage details, including the file name; the File Type can also be derived from the flat file name. This way there is no need to maintain multiple transfer rules or update rules; the existing flow is enough. (A sketch of such a routine follows below the reference link.)

Hope it Helps
Srini

Ref : https://www.sdn.sap.com/irj/sdn/thread?threadID=598675&messageID=4283351#4283351
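A minimal sketch of the transfer routine idea above. Obtaining the file name via BAPI_IPAK_GETDETAIL is left out because its exact interface should be verified in SE37; here l_filename is assumed to already hold the source file name.

* Transfer routine for the File Type field: derive the value
* from the flat file name instead of hardcoding a constant
DATA: l_filename TYPE string. " assumed filled, e.g. via BAPI_IPAK_GETDETAIL

IF l_filename CS 'FILE1'.
  result = 'FILE1'.
ELSEIF l_filename CS 'FILE2'.
  result = 'FILE2'.
ENDIF.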

 

Daily BW Systems Checklist

Posted by Srinivas Neelam

This list of transactions is designed to guide a Basis Admin team member through the process of keeping tabs on major systems issues and root cause issues and trends for each system in the landscape.

This will only identify and help triage issues to the Helpdesk, Infrastructure, OS or DBA teams, Basis, functional developers or whomever… but it guarantees that each system in the landscape gets a regular, weekly look. The goal is to eliminate the root causes of repeating issues.

Step One: Identify what is going on right now
Why? This info may skew other readings
SM51 – SM50 (or SM66 and SMGW) – what’s running and active now
SMICM – what’s going on in the web server…and max/peak usage
AL08 – who’s on and inactive…what are they doing?

Step Two: Analyze what happened overnight
Why? ID any major new issues or daily trends for loads
ST22 – short dumps indicate major issues
SM21 (change time filter to include since you last looked) for each app server – shift in SM51
SM37 – active or cancelled jobs
RSMO – overview of process chains/InfoPackages (also RSA1-monitoring-monitor)
RSPC – log view of trouble chains
STMS – import history

Step Three: Check weekly trends (once per week per system is ok)
ST02 – look at buffers – red? Quality lower than 90%? >20,000 swaps?
ST04 – database buffer quality – in detail analysis, lockwait, latchwait, db message log
ST06 – CPU usage, now and detail-CPU snap and CPU previous hrs, OS Log
DB02 – disk space and weekly growth... and check that disk storage availability stays ahead of needs

Step Four: Major troubleshooting transactions if deeper analysis required
SE03 – STMS – SE09/SE10 TMS info – observe import history patterns
SLG1 – Application Logs
RSD1 – Repair InfoObjects
RSRV – Analysis and repair of BW Objects – Master data to Transactional Data ratio
ST03N – Query and Load performance analysis – BW Expert mode
RSRCACHE – Web query cache (parameters RSCUSTV14)
RSRT/RSRT2 – query analysis – which area is that problem happening in?
AL11 /work process/RFC logs
BW Stats Queries (0TCT) – examine for trends out of the ordinary/expected
WE20/WE21/SM59 – system connections or SE16 tables related to these
SE38 Reports:
SAP_INFOCUBE_INDEXES_REPAIR
RSPARAM
RSCR_CHECK_INSTALLATION_PARAMS (Crystal Enterprise)
RSPOR_SETUP (portal configuration)
RSAR_RSISOSMAP_REPAIR
BW_IGS_ADMIN or BW_IGS_CHART_TEST (IGS)
SRMO or RSODADMIN (TREX)
SPAD – printing issues

 

Forum post in BI General: Re: Collapsed request in infocube-How to get delta back
https://forums.sdn.sap.com/thread.jspa?threadID=588063&messageID=4236057#4236057

Yes you are correct SVR,

We cannot delete a request once it is compressed. Selective deletion is not useful in this case.

You can try following -- Be careful -- follow steps -- first try in Dev

If you know the last requests that generated today's delta in the base/source ODS, and those requests are available in the PSA for reconstruction.... then...

1. Delete the initialization from this source ODS to all 3 targets.
2. If the requests are available for reconstruction, delete the required requests from the ODS.
3. Now initialize without data transfer from the ODS to the 3 targets (2 cubes + 1 ODS).
4. Reconstruct the deleted requests in the source ODS (this regenerates the delta).
5. Push the delta to the 3 targets.

Hope it helps
Srini

- - - - - - - - - - - - - - - - - - - - - -

Visit the SAP Developer Network at https://www.sdn.sap.com.

 

1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on MultiProviders, restrict to the InfoProvider in the global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to queries against standard InfoCubes, so hardcoding the InfoProvider in the global filter eliminates this overhead.
8. Move all global calculated and restricted key figures to local ones so you can analyze which filters can be removed and moved to the global definition of the query. You can then change the calculated key figure back and resume using the global calculated key figure if desired.
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries—for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If “Display as hierarchy” is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on Multiproviders.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The “not assigned” nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
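As an illustration of point 6 above, here is a minimal sketch of a customer-exit variable that fills a year-to-date interval in the CMOD exit include ZXRSRU01. The variable name ZYTD is hypothetical; adapt it to the time characteristic your query actually uses (for example 0CALDAY).

WHEN 'ZYTD'.                                            "hypothetical variable name
  DATA: l_s_range TYPE rrrangesid.
  IF i_step = 2.                                        "after variable entry
    CONCATENATE sy-datum(4) '0101' INTO l_s_range-low.  "January 1st of the current year
    l_s_range-high = sy-datum.                          "today
    l_s_range-sign = 'I'.
    l_s_range-opt  = 'BT'.                              "interval: year-to-date
    APPEND l_s_range TO e_t_range.
  ENDIF.

With I_STEP = 2 the interval is computed once, after the user confirms the variable screen, instead of being modeled as overlapping restrictions in the query definition.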

 

Question 1
Update records are written to SM13 although you do not use the extractors from the Logistics Cockpit (LBWE) at all.
Active DataSources have been accidentally delivered in a PI patch. For that reason, extract structures are set to active in the Logistics Cockpit. Call transaction LBWE and deactivate the active structures; from then on, no additional records are written into SM13.
If the system displays update records for application 05 (QM) in transaction SM13, even though the structure is not active, see note 393306 for a solution.

Question 2
How can I selectively delete update records from SM13?
Start the report RSM13005 for the respective module (e.g. MCEX_UPDATE_03).

  • Status COL_RUN INIT: without Delete_Flag but with VB_Flag (the records are updated).
  • Status COL_RUN OK: with Delete_Flag (the records are deleted for all modules with COL_RUN = OK).

With the IN_VB flag, data is deleted only if there is no delta initialization; otherwise, the records are updated.
MAXFBS: the number of records processed without a commit.

ATTENTION: The delta records are deleted irrevocably after executing report RSM13005 (without flag IN_VB). You can reload the data into BW only with a new delta-initialization!

Question 3
What can I do when the V3 update loops?
Refer to Note 0352389. If you need a fast solution, simply delete all entries from SM13 (executed for V2); however, this does not solve the actual problem.

ATTENTION: THIS CAUSES DATA LOSS. See question 2 !

Question 4
Why has SM13 not been emptied even though I have started the V3 update?

  • The update record in SM13 contains several modules (for example, MCEX_UPDATE_11 and MCEX_UPDATE_12). If you start the V3 update for only one module, the other module still has INIT status in SM13 and is waiting for the corresponding collective run. In some cases, the entry might not be deleted even if the V3 update has been started for the second module. In this case, schedule the report RSM13005 with the DELETE_FLAG (see Question 2).
  • V3 updating no longer functions after the PI upgrade because you did not load all the delta records into the BW system prior to the upgrade. Proceed as described in note 328181.
Question 5
The entries from SM13 have not been retrieved even though I followed note 0328181!
Check whether all entries were actually deleted from SM13 for all clients. Look for records within the last 25 years with user *.

Question 6
Can I schedule V3 update in parallel?
The V3 update already uses collective processing. You cannot run it in parallel.

Question 7
The Logistics Cockpit extractors deliver incorrect numbers. The update contains errors!
Have you installed the most up-to-date PI in your OLTP system?
You should have at least PI 2000.1 patch 6 or PI 2000.2 patch 2.

Question 8
Why has no data been written into the delta queue even though the V3 update was executed successfully?
You have probably not started a delta initialization. You have to start a delta initialization for each DataSource from the BW system before you can load the delta. Check RSA7 for an entry with a green status for the required DataSource. Refer also to Note 0380078.

Question 9
Why does the system write data into the delta queue, even though the V3 update has not been started?
You are using automatic goods receipt posting (transaction MRRS) and start it in the background. In this case the system writes the records for DataSources of application 02 directly into the delta queue (RSA7). This causes neither duplicate data records nor inconsistencies.

Question 10
Why am I not able to carry out a structural change in the Logistics Cockpit although SM13 is blank?
Inconsistencies occurred in your system: there are records in update table VBMOD for which there are no entries in table VBHDR. Because those header records are missing, the entries do not appear in SM13. To remove the inconsistencies, follow the instructions in the solution part of Note 67014. Note that no postings may be made in the system during the reorganization!
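A read-only sketch to spot such orphaned entries before applying Note 67014 (assuming VBKEY as the shared update key between the two tables; this only reports, it repairs nothing):

DATA: lt_keys TYPE TABLE OF vbmod-vbkey,
      l_key   TYPE vbmod-vbkey,
      l_chk   TYPE vbhdr-vbkey.

* Collect all update keys known to VBMOD ...
SELECT DISTINCT vbkey FROM vbmod INTO TABLE lt_keys.

* ... and flag those that have no VBHDR header record.
LOOP AT lt_keys INTO l_key.
  SELECT SINGLE vbkey FROM vbhdr INTO l_chk WHERE vbkey = l_key.
  IF sy-subrc <> 0.
    WRITE: / 'VBMOD entry without VBHDR header:', l_key.
  ENDIF.
ENDLOOP.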

Question 11
Why is it impossible to schedule a V3 job from the Logistics Cockpit?
The job always terminates immediately. The update job cannot be scheduled due to missing authorizations. For further information, see Note 445620.

 

Questions and answers related to T-Code: RSA7(Delta Queue)

This note is maintained here for my quick reference and for those who don't have SAP Notes access :-)

Question 1:
What does the number in the 'Total' column in Transaction RSA7 mean?
Answer:
The 'Total' column displays the number of LUWs that were written in the delta queue and that have not yet been confirmed. The number includes the LUWs of the last delta request (for repeating a delta request) and the LUWs for the next delta request. An LUW only disappears from the RSA7 display when it has been transferred to the BW System and a new delta request has been received from the BW System.

Question 2:
What is an LUW in the delta queue?
Answer:
An LUW from the point of view of the delta queue can be an individual document, a group of documents from a collective run or a whole data packet from an application extractor.

Question 3:
Why does the number in the 'Total' column, in the overview screen of Transaction RSA7, differ from the number of data records that are displayed when you call up the detail view?
Answer:
The number on the overview screen corresponds to the total number of LUWs (see also question 1) that were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the records contained in the LUWs. Both the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out. This means that only the records that are ready for the next delta request are displayed on the detail screen. The detail screen of Transaction RSA7 does not take into account a possibly existing customer exit.

Question 4:
Why does Transaction RSA7 still display LUWs on the overview screen after successful delta loading?
Answer:
Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded into the BW System. The LUWs of the previous delta may then be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the number on the overview screen does not change if the first delta is loaded into the BW System.

Question 5:
Why are selections not taken into account when the delta queue is filled?
Answer:
Filtering according to selections takes place when the system reads from the delta queue. This is necessary for performance reasons.

Question 6:
Why is there a DataSource with '0' records in RSA7 if delta exists and has been loaded successfully?
Answer:
It is most likely that this is a DataSource that does not send delta data to the BW System via the delta queue but directly via the extractor. You can display the current delta data for these DataSources using transaction RSA3 (update mode = 'D').

Question 7:
Do the entries in Table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?
Answer:
The impact is limited. If performance problems are related to the loading process from the delta queue, then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area, and so on).
Caution: As of PlugIn 2000.2 patch 3, the entries in Table ROIDOCPRMS are as effective for the delta queue as for a full update. Note, however, that LUWs are not split during data loading for consistency reasons. This means that when very large LUWs are written to the delta queue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters.
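As a quick check, the current transfer parameters can be listed straight from the table (a read-only sketch; the field names SLOGSYS, MAXSIZE and MAXLINES are assumptions based on the parameter names mentioned above):

DATA: ls_prms TYPE roidocprms.

* List the transfer control parameters per logical system.
SELECT * FROM roidocprms INTO ls_prms.
  WRITE: / ls_prms-slogsys, ls_prms-maxsize, ls_prms-maxlines.
ENDSELECT.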

Question 8:
Why does it take so long to display the data in the delta queue (for example approximately 2 hours)?
Answer:
With PlugIn 2001.1 the display was changed: you are now able to define the amount of data to be displayed, to restrict it, to selectively choose the number of a data record, to make a distinction between the 'actual' delta data and the data intended for repetition, and so on.

Question 9:
What is the purpose of the function 'Delete Data and Meta Data in a Queue' in RSA7? What exactly is deleted?
Answer:
You should act with extreme caution when you use the delete function in the delta queue. It is comparable to deleting an InitDelta in the BW System and should preferably be executed there. Not only do you delete all data of this DataSource for the affected BW System, but you also lose all the information concerning the delta initialization. Then you can only request new deltas after another delta initialization.
When you delete the data, this confirms the LUWs kept in the qRFC queue for the corresponding target system. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs.
The delete function is intended, for example, for cases where the BW System from which the delta initialization was originally executed no longer exists or can no longer be accessed.

Question 10:
Why does it take so long to delete from the delta queue (for example half a day)?
Answer:
Import PlugIn 2000.2 patch 3. With this patch the performance during deletion improves considerably.

Question 11:
Why is the delta queue not updated when you start the V3 update in the logistics cockpit area?
Answer:
It is most likely that a delta initialization had not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW System) is a prerequisite for the application data to be written to the delta queue.

Question 12:
What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
Answer:
The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. It is made up of the prefix 'BW', the client, and the short name of the DataSource. For DataSources whose name is shorter than 20 characters, the short name corresponds to the name of the DataSource. For DataSources whose name is longer than 19 characters (possible for delta-capable DataSources only as of PlugIn 2001.1), the short name is assigned in Table ROOSSHORTN.
In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a LUW is displayed in an unstructured manner there.
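The naming rule can be expressed in a few lines (a sketch only; 2LIS_03_BF is just an example DataSource, and for names longer than 19 characters the short name would have to be read from Table ROOSSHORTN instead):

DATA: l_dsname(30) TYPE c VALUE '2LIS_03_BF',     "example DataSource
      l_qname(30)  TYPE c.

* Queue name = prefix 'BW' + client + short name of the DataSource.
CONCATENATE 'BW' sy-mandt l_dsname INTO l_qname.
WRITE: / 'qRFC queue name for SMQ1:', l_qname.    "e.g. BW1002LIS_03_BF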

Question 13:
Why is there data in the delta queue although the V3 update has not yet been started?
Answer:
You posted data in the background. This means that the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW system. See Note 417189.

Question 14:
Why does the 'Repeatable' button on the RSA7 data details screen not only show data loaded into BW during the last delta but also newly-added data, in other words, 'pure' delta records?
Answer:
It was programmed so that the request in repeat mode fetches both actually repeatable (old) data and new data from the source system.

Question 15:
I loaded several delta inits with various selections. For which one is the delta loaded?
Answer:
For delta, all selections made via delta inits are summed up. This means a delta for the 'total' of all delta initializations is loaded.

Question 16:
How many selections for delta inits are possible in the system?
Answer:
With simple selections (intervals without complicated join conditions or single values), you can create up to about 100 delta inits; it should not be more.
With complicated selection conditions, there should be only up to 10-20 delta inits.
Reason: with many selection conditions that are joined in a complicated way, too many WHERE lines are generated in the ABAP source code, which may exceed the memory limit.

Question 17:
I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
Answer:
Before you copy a source client or source system, make sure that your deltas have been fetched from the delta queue into BW and that no delta is pending. After the client copy, an inconsistency might occur between BW delta tables and the OLTP delta tables as described in Note 405943. After the client copy, Table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case since delta depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy.
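To spot the stale entries the note describes, the delta bookkeeping table can simply be listed and scanned for the old logical system name (a read-only sketch; the field names OLTPSOURCE and RLOGSYS are assumptions):

DATA: ls_prmsc TYPE roosprmsc.

* List delta-init bookkeeping entries; rows still carrying the old
* logical system name are useless for further delta loading.
SELECT * FROM roosprmsc INTO ls_prmsc.
  WRITE: / ls_prmsc-oltpsource, ls_prmsc-rlogsys.
ENDSELECT.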

Question 18.
Am I permitted to use the functions in Transaction SMQ1 to manually control processes?
Answer:
Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support or only if this is explicitly requested in a note for Component 'BC-BW' or 'BW-WHM-SAPI'.

Question 19.
Despite the delta request only being started after completion of the collective run (V3 update), it does not contain all documents. Only another delta request loads the missing documents into BW. What is the cause for this "splitting"?
Answer:
The collective run submits the open V2 documents to the task handler for processing. The task handler processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution where this problem does not occur is described in Note 505700.

Question 20.
Despite deleting the delta init, LUWs are still written into the DeltaQueue
Answer:
In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place; otherwise, buffer problems may occur. If an internal session was started at a time when the delta initialization was still active, it posts data into the queue even though the initialization has been deleted in the meantime. This is what happened in your system.

Question 21.
In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the Table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the field 'Status' mean what and which values are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
Answer:
Table TRFCQOUT and ARFCSSTATE: status READ means that the record was read once, either in a delta request or in a repetition of the delta request. However, this does not yet mean that the record has successfully reached the BW. The status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written into the delta queue and will be loaded into the BW with the next delta request or a repetition of a delta. In any case, only the statuses READ, READY and RECORDED in both tables are considered to be valid. The status EXECUTED in TRFCQOUT can occur temporarily: it is set before starting a delta extraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request directly after setting the status, before a new delta is extracted. If you see such records, it means that either a process which confirms and deletes records loaded into the BW is running successfully at the moment, or, if the records remain in the table with status EXECUTED for a longer period of time, that there are problems deleting records which have already been successfully loaded into the BW. In this state, no more deltas are loaded into the BW. Every other status indicates an error or an inconsistency. NOSEND in SMQ1 means nothing (see note 378903). However, the value 'U' in field 'NOSEND' of table TRFCQOUT is of concern.
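Since only the value 'U' in NOSEND is flagged as alarming above, a quick count makes the check repeatable (read-only sketch; see note 378903 before acting on the result):

DATA: l_count TYPE i.

* Count the alarming entries: NOSEND = 'U' in TRFCQOUT.
SELECT COUNT(*) FROM trfcqout INTO l_count
       WHERE nosend = 'U'.
WRITE: / 'TRFCQOUT records with NOSEND = ''U'':', l_count.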

Question 22.
The extract structure was changed when the delta queue was empty. Afterwards new delta records were written to the delta queue. When loading the delta into the PSA, it shows that some fields were moved. The same result occurs when the contents of the delta queue are listed via the detail display. Why is the data displayed differently? What can be done?
Answer:
Make sure that the change of the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using Transaction $SYNC. If the extract structure change is not communicated synchronously to the server where delta records are being created, the records are written with the old structure until the new structure has been generated. This may have disastrous consequences for the delta. When the problem occurs, the delta needs to be re-initialized.

Question 23.
How and where can I control whether a repeat delta is requested?
Answer:
Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for any reason, manually set the request in the monitor to red. For the contents of the repeat, see Question 14. Delta requests set to red when data is already updated lead to duplicate records in a subsequent repeat, if they have not already been deleted from the data targets concerned.

Question 24.
As of PI 2003.1, the Logistic Cockpit offers various types of update methods. Which update method is recommended in logistics? According to which criteria should the decision be made? How can I choose an update method in logistics?
Answer:
See the recommendation in Note 505700.

Question 25.
Are there particular recommendations regarding the maximum data volume of the delta queue to avoid danger of a read failure due to memory problems?
Answer:
There is no strict limit (except for the restricted number area of the 24-digit QCOUNT counter in the LUW management table - which is of no practical importance, however - or the restrictions regarding the volume and number of records in a database table).
When estimating "soft" limits, both the number of LUWs and the average data volume per LUW are important. As a rule, we recommend bundling data (usually documents) as soon as it is written to the delta queue, to keep the number of LUWs low (this can partly be configured in the applications, for example in the Logistics Cockpit). The data volume of a single LUW should not be much larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1 GByte per work process, 100 MByte per LUW should not be exceeded). This limit is of rather small practical importance, since a comparable limit already applies when writing to the delta queue. If the limit is observed, correct reading is guaranteed in most cases.
If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data is fetched from all connected BWs as quickly as possible. But for other, BW-specific, reasons, the frequency should not exceed one delta request per hour.
To avoid memory problems, a program-internal limit ensures that no more than 1 million LUWs are ever read and fetched from the database per delta request. If this limit is reached within a request, the delta queue must be emptied by several successive delta requests. We recommend, however, to try not to reach that limit but trigger the fetching of data from the connected BWs as soon as the number of LUWs reaches a 5-digit value.

---> Some more related Notes....
873694 - Consulting: Delta repeat and status in monitor/data target
771894 - No data during delta upload: Selection on Z* fields
723935 - Adding the TID display to the DeltaQueue monitor
691721 - Restoring lost data from a delta request
576896 - Checks when PSA contains incorrect data for delta requests

574601 - BW-SAPI: Endless loop when confirming qRFC LUWs
417307 - Extractor package size: Collective note for applications
417189 - BW/SAPLEINS - Online update of delta queue
405943 - Calling an InfoPackage in BW causes short dump
377732 - Collective SAP note SAP BW BCT 2.1C for EBP 2.0 and 3.0

 

SAP BW Document Downloads

By Srinivas Neelam

Documents for SAP Business Warehouse from other web sites:

SAP BW BPS Advanced Budgeting Example, giving a glance at BPS functionality
SAP BW Authorizations, documented in detail
SAP BW BPS Planning Folders and Layouts
SAP BW Best Practices overview presentation
How to integrate SAP BI with XI (PDF)
SAP BW NetWeaver installation media list (MEDIA_LIST_NETWEAVER)
Online Analytical Processing (OLAP)
SAP BW Sizing Help for estimating the hardware resources needed
SAP BW Business Explorer (BEx), a detailed learning document
ABAP required in SAP BW: ABAP routines and function modules in detail
SAP BI Accelerator for high query performance
SAP BW cell editing in BEx
Controlling and Profitability Analysis (CO-PA) in SAP BW
Exit functions in SAP BW
SAP BW front-end design
SAP BW Installation Guide
How to handle inventory in SAP BW
SAP BW transaction codes (t-codes)
SAP BW BPS web-based planning

 
