1. Where do I find information about SQL statement tuning provided by SAP?
The SAP HANA Troubleshooting and Performance Analysis Guide contains detailed information about tuning SQL statements.
2. Which indications exist for critical SQL statements?
The following SAP HANA alert indicates problems in the SQL area:
Alert | Name | SAP Note | Description |
39 | Long-running statements | 1977262 | Identifies long-running SQL statements. |
SQL: “HANA_Configuration_MiniChecks” (SAP Notes 1969700, 1999993) returns a potentially critical issue (C = ‘X’) for one of the following individual checks:
Check ID | Details |
1110 | SQL using in average > 1 connection (last day) |
1112 | SQL using in average > 1 thread (last hour) |
1113 | SQL using in average > 1 thread (last day) |
1115 | Longest running current SQL statement (s) |
1120 | Exp. stmt. trace: SQL running > 1 h (last day) |
3. Which prerequisites are helpful for SQL statement tuning?
See SAP Note 2000000 for more information about an optimal SAP HANA performance configuration and useful prerequisites for performance and SQL statement optimization.
4. How can I identify a specific SQL statement?
A specific SQL statement can obviously be identified by its SQL text. Because the text can be quite long and several similar SQL texts may exist, it is useful to identify a SQL statement by a hash value that is derived from the SQL text. This hash value is called statement hash and can be found in column STATEMENT_HASH of SAP HANA performance views like M_SQL_PLAN_CACHE, M_EXPENSIVE_STATEMENTS or M_SERVICE_THREAD_SAMPLES.
The statement ID (column STATEMENT_ID) can’t be used for that purpose as it is based on a connection ID and not on a SQL statement text.
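For illustration, a minimal sketch of mapping a statement hash back to its SQL text via the SQL cache (‘<statement_hash>’ is a placeholder):
-- Look up the SQL text behind a statement hash
SELECT STATEMENT_HASH, STATEMENT_STRING
FROM M_SQL_PLAN_CACHE
WHERE STATEMENT_HASH = '<statement_hash>';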
5. What is an expensive SQL statement?
An expensive SQL statement is a database access that shows high values in areas like:
- High execution time: SQL statements with a particularly high overall runtime are usually the most important in performance analysis. The overall runtime counts, so it doesn’t matter whether a statement is executed once with a duration of 1000 seconds or 1 million times with an average duration of 1 ms; in both cases the statement is responsible for a total duration of 1000 seconds.
- High memory utilization
- High CPU consumption
- High lock wait time
6. How can time information in the SQL cache (M_SQL_PLAN_CACHE) be interpreted?
Tables like M_SQL_PLAN_CACHE, M_SQL_PLAN_CACHE_RESET and HOST_SQL_PLAN_CACHE contain several time-related columns, and it is not always clear how to interpret them:
Time column type | Description |
CURSOR | Contains the overall cursor time, including SAP HANA server time and client time. If the client performs other tasks between fetches of data, the cursor time can be much higher than the SAP HANA server time. This can result in MVCC issues because old versions of data need to be kept until the execution is finished. |
EXECUTION | Contains the execution time (open + fetch + lock wait + close) on SAP HANA server side; does not include table load and preparation time. |
EXECUTION_OPEN | Contains the open time on SAP HANA server side. Includes the actual retrieval of data in case of column store accesses with early materialization. |
EXECUTION_FETCH | Contains the fetch time on SAP HANA server side. Includes the actual retrieval of data in case of row store accesses or late materialization. |
EXECUTION_CLOSE | Contains the close time on SAP HANA server side |
TABLE_LOAD | Contains the table load time during preparation, is part of the preparation time |
PREPARATION | Contains the preparation time |
LOCK_WAIT | Contains the transaction lock wait time, internal locks are not included |
Usually long EXECUTION_OPEN or EXECUTION_FETCH times are caused by retrieving the data.
From a SQL tuning perspective the most important information is the total elapsed time of the SQL statement which is the sum of preparation time and execution time.
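As a minimal sketch, the time columns described above can be queried directly from the SQL cache for a given hash; the TOTAL_* column names follow recent SAP HANA revisions and should be verified against your release:
-- Times in M_SQL_PLAN_CACHE are recorded in microseconds
SELECT
  STATEMENT_HASH,
  TOTAL_CURSOR_DURATION,      -- CURSOR: includes server and client time
  TOTAL_EXECUTION_TIME,       -- EXECUTION: open + fetch + lock wait + close
  TOTAL_PREPARATION_TIME,     -- PREPARATION: includes TABLE_LOAD
  TOTAL_LOCK_WAIT_DURATION,   -- LOCK_WAIT: transactional locks only
  TOTAL_PREPARATION_TIME + TOTAL_EXECUTION_TIME AS TOTAL_ELAPSED_TIME
FROM M_SQL_PLAN_CACHE
WHERE STATEMENT_HASH = '<statement_hash>';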
7. How can I determine the most critical SQL statements?
For a purely technical SQL optimization it is useful to determine the most expensive SQL statements in terms of elapsed time and to check the result list top down. The most expensive SQL statements in the SQL cache can be determined in the following ways:
- M_SQL_PLAN_CACHE / HOST_SQL_PLAN_CACHE
- SAP HANA Studio -> Administration -> Performance -> SQL Plan Cache
- SQL: “HANA_SQL_SQLCache” (SAP Note 1969700)
- Solution Manager self service “SQL Statement Tuning” (SAP Note 1601951)
Additionally, expensive SQL statements can be captured with the following trace features:
Trace | Environment | SAP Note | Detail |
ST05 | ABAP | | SQL trace |
ST12 | ABAP | | Single transaction analysis |
SQLM | ABAP | 1885926 | SQL monitor |
SQL trace | SAP HANA Studio, SQL | 2031647 | SQL trace |
Expensive statement trace | SAP HANA Studio | 2180165 | Expensive statement trace |
If you are interested in the top SQL statements in terms of memory consumption, you can activate both the expensive statement trace and the statement memory tracking (SPS 08 or higher, see SAP Note 1999997 -> “Is it possible to limit the memory that can be allocated by a single SQL statement?”) and later on run SQL: “HANA_SQL_ExpensiveStatements” (SAP Note 1969700) with ORDER_BY = ‘MEMORY’.
The currently running SQL statements can be determined via SQL: “HANA_SQL_ActiveStatements” (SAP Note 1969700).
In all cases you get a list of SQL statements / STATEMENT_HASH values that can subsequently be analyzed in more detail.
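As a starting point before drilling down with the scripts above, a minimal sketch of such a top-down list based directly on the SQL cache (elapsed time taken as execution plus preparation time, in microseconds):
-- Top 10 SQL cache entries by total elapsed time
SELECT TOP 10
  STATEMENT_HASH,
  EXECUTION_COUNT,
  TOTAL_EXECUTION_TIME + TOTAL_PREPARATION_TIME AS TOTAL_ELAPSED_TIME
FROM M_SQL_PLAN_CACHE
ORDER BY TOTAL_ELAPSED_TIME DESC;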
8. How can I determine and interpret basic performance information for a particular SQL statement?
The following SQL statements available via SAP Note 1969700 can be used to collect further details for a given STATEMENT_HASH (to be specified in “Modification section” of the statements):
SQL command | Details |
SQL: “HANA_SQL_StatementHash_BindValues” | Captured bind values from SQL cache in case of long running prepared SQL statements (SPS 08 and higher) |
SQL: “HANA_SQL_StatementHash_KeyFigures” | Important key figures from SQL cache, for examples see below |
SQL: “HANA_SQL_StatementHash_SQLCache” | SQL cache information (current and / or historic) like executions, records, execution time or preparation time |
SQL: “HANA_SQL_StatementHash_SQLText” | SQL statement text |
SQL: “HANA_SQL_ExpensiveStatements” | Important key figures from expensive statement trace (SAP Note 2180165) |
SQL: “HANA_SQL_ExpensiveStatements_BindValues” | Captured bind values from expensive statement trace (SAP Note 2180165) |
Below you can find several typical output scenarios for SQL: “HANA_SQL_StatementHash_KeyFigures”.
Scenario 1: Transactional lock waits
We can see that nearly the whole execution time is caused by lock wait time, so transactional locks (i.e. record or object locks) are responsible for the long runtime. Further transactional lock analysis can now be performed based on SAP Note 1999998.
Scenario 2: High number of executions
An elapsed time of 0.55 ms for 1 record is not particularly bad, and for most SQL statements no further action would be required. In this particular case the number of executions is very high, so that the overall execution time of this SQL statement is significant. Ideally the number of executions is reduced from an application perspective. If this is not possible, further technical analysis should be performed. Very fast single-row accesses can take less than 0.10 ms, so there may be options for further performance improvements (e.g. index design or table store changes).
Scenario 3: High elapsed time
An execution time of 284 ms for retrieving one row from a single table is definitely longer than expected and it is very likely that an improvement can be achieved. Further analysis is required to understand the root cause of the increased runtime.
Scenario 4: High elapsed time for DML operations, no lock wait time
An elapsed time of 10 ms for inserting a single record is quite high. If DML operations show an increased elapsed time without corresponding lock wait time, this can be caused by internal locks, e.g. locks linked to the critical phase of savepoints. Further internal lock analysis can now be performed based on SAP Note 1999998.
Scenario 5: Many records
Reading about 200,000 records in less than 2 seconds is not a bad value. In the first place you should check from an application perspective if it is possible to reduce the result set or the number of executions. Apart from this there are typically also technical optimizations available to further reduce the elapsed time (e.g. delta storage or table store optimizations).
Scenario 6: High Preparation Time
Preparation time is linked to the initial parsing and compilation. You can reduce this overhead by using bind variables rather than literals so that several SQL statements with the same structure need to be parsed only once. Furthermore you can perform a more detailed analysis to understand why the preparation step is taking longer than expected.
9. Which options exist to understand the execution of a SQL statement in detail?
In order to understand how a SQL statement can be tuned it is essential to know how it is processed. For this purpose you can use the following tools:
Tool | Details |
Explain | High-level information for SQL execution (e.g. joins, used indexes) |
PlanViz | Detailed graphical insight into SQL execution |
Thread sample analysis | High-level thread state and lock type information (e.g. useful in case of waits for internal locks which are not reflected in the “lock wait time”) |
Performance trace | Detailed insight into SQL execution including SQL plan and function profiling. See SAP Note 1787489 for more details. |
User-specific trace | Granular trace information for configurable components of the SQL statement execution. The trace components depend on the individual scenario; usually only one component is activated at a time. |
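For example, the Explain tool can be invoked with plain SQL; a minimal sketch (the statement name and query are placeholders, and the column list of EXPLAIN_PLAN_TABLE may vary by revision):
-- Generate the plan for a sample query
EXPLAIN PLAN SET STATEMENT_NAME = 'stmt1' FOR
SELECT * FROM "A004" WHERE "MANDT" = '100';

-- Display the generated operators, accessed tables and engines
SELECT OPERATOR_NAME, OPERATOR_DETAILS, TABLE_NAME, EXECUTION_ENGINE
FROM EXPLAIN_PLAN_TABLE
WHERE STATEMENT_NAME = 'stmt1';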
10. What are typical approaches to tune expensive SQL statements?
Depending on the results of the analysis of an SQL statement the following optimizations can be considered.
Possible symptoms | Optimization | Optimization Details |
High number of executions | Reduce high number of executions | Check from application perspective if the number of executions can be reduced, e.g. by avoiding identical SELECTs or adjusting the application logic. |
High number of selected records | Reduce number of selected records | Check from application perspective if you can restrict the number of selected records by adding further selection conditions or modifying the application logic.
Check if you can reduce the amount of relevant data by archiving or cleanup of basis tables (SAP Note 706478). |
High lock wait time due to record locks | Reduce record lock contention | Check from application perspective if you can reduce concurrent changes of same records.
Check from application perspective if you can reduce the critical time frame between change operation and next COMMIT. |
High lock wait time due to object locks | Reduce object lock contention | Check if you can schedule critical offline DDL operations less often or at times of lower workload.
Check if you can use online instead of offline DDL operations. |
High total execution time, significant amount of thread samples pointing to internal lock waits | Reduce internal lock contention | Check if you can reduce the internal lock wait time (SAP Note 1999998). |
Execution time higher than expected, optimal index doesn’t exist; table and column scan related thread methods and details like searchDocumentsIterateDocidsParallel, IndirectScanBvOutJob<range>, IndirectScanVecOutJob<range>, sparseSearch, SparseScanRangeVecJob | Synchronize index and application design | Check from an application perspective if you can adjust the database request so that existing indexes can be used efficiently. Create a new index or adjust an existing index so that the SQL statement can be processed efficiently (see also question 11 below). |
Execution time higher than expected, negative impact by existing partitioning | Synchronize partitioning and application design | Make sure that existing partitioning optimally supports the most important SQL statements (e.g. via partition pruning, load distribution and minimization of inter-host communication). See SAP Note 2044468 for more information. If it is technically possible to disable partitioning, you may check if the performance is better without it. |
Long runtime with OR condition having selection conditions on both tables | Avoid OR in this context | If an OR concatenation is used and the terms reference columns of more than one table, a significant overhead can be caused by the fact that a cartesian product of both individual result sets is required in some cases. If you face performance issues in this context, you can check if the problem disappears with a simplified SQL statement accessing only one table. If yes, you should check if you can avoid joins with OR concatenated selection conditions on both tables. |
Execution time higher than expected, significant portion for accessing delta storage | Optimize delta storage | Make sure that the auto merge mechanism is properly configured. See SAP Note 2057046 for more details. Consider smart merges controlled by the application scenario to make sure that the delta storage is minimized before critical processing starts. |
Execution time slightly higher than expected | Change to row store | In general the number of tables in the row store should be kept on a small level, but in individual cases it is an option to check if the performance can be optimized by moving a table to the row store. |
Execution time sporadically increased | Avoid resource bottlenecks | Check if peaks correlate to resource bottlenecks (CPU, memory, paging) and eliminate bottleneck situations. |
Execution time higher than expected, significant portion for sorting (trex_qo trace: doSort) | Optimize sorting | Sort operations (e.g. related to ORDER BY) are particularly expensive if a high number of records has to be sorted and the leading sort column contains only few distinct values. In order to optimize the sort performance you can check from an application side if you can reduce the number of records to be sorted (e.g. by adding further selection conditions) or if you can put a column with a high number of distinct values at the beginning of the ORDER BY. |
Long BW query runtime; long execution time of TREXviaDBSL or TREXviaDBSLWithParameter calls | Optimize BW queries | If you are not yet aware of the actual query, get in touch with the application team to understand which application operations result in long running TREXviaDBSL calls and start the analysis from that perspective. Once you are able to reproduce the issue, more detailed traces can be activated as required. BW allows executing queries in different execution modes which control the utilization of SAP HANA internal engines. Furthermore it is possible and usually recommended to convert infocubes to the HANA-optimized type. See BW on HANA and the Query Execution Mode and SAP Notes 1682992 and 1734002 for further information and check if modifying the execution mode and / or the infocube type improves the performance of BW queries. SAP Note 2016832 describes how to adjust the query execution mode. |
Long BW DTP and transformation runtime using SAP HANA execution mode | Optimize BW DTP and transformation setup | See SAP Notes 2033679 (BW 7.40 SP 05 – 07) and 2067912 (BW 7.40 SP 08 – 10) in order to make sure that all recommended fixes are implemented. For example, SAP Note 2133987 provides a coding correction for SAPKW74010 in order to speed up delta extractions from a DSO. See SAP Note 2057542 and consider SAP HANA based transformations available as of BW 7.40 SP 05. |
Long runtime of FOR ALL ENTRIES query | Adjust FOR ALL ENTRIES transformation | If a FOR ALL ENTRIES selection in SAP ABAP environments takes long and consumes a lot of resources, you can consider adjusting the way the database requests are generated. See SAP Note 2142945 for more information regarding SAP HANA hints. |
Long runtime of query on SAP HANA DDIC objects | Assign CATALOG READ | If the CATALOG READ privilege is not assigned to a user, queries on SAP HANA DDIC objects like TABLES, INDEXES or TABLE_COLUMNS can take much longer, because SAP HANA needs to filter the relevant (own) data and suppress the display of information from other schemas. Make sure that CATALOG READ is assigned (either directly or indirectly via roles) to users having to access SAP HANA DDIC objects. You can use SQL: “HANA_Security_GrantedRolesAndPrivileges” (SAP Note 1969700) to check if this privilege is already assigned or not. |
Long runtime of queries on monitoring views | Use fewer and larger selections; select information from other sources | For technical reasons accesses to monitoring views like M_TABLE_LOCATIONS or M_TABLE_PERSISTENCE_LOCATIONS often scan the complete underlying structures regardless of the WHERE clause. Thus, you should avoid frequent selections of small amounts of data (e.g. one access per table) and use fewer selections reading larger amounts of data (e.g. for all tables of a schema at once). Alternatively check if you can select the required information from other sources, e.g. monitoring view M_CS_TABLES. |
Wrong join order | Update join statistics | An obviously wrong join order can be caused by problems with join statistics. Join statistics are created on the fly when two columns are joined the first time. Up to SPS 08 the initially created join statistics are kept until SAP HANA is restarted. This can cause trouble if join statistics were created at a time when the involved tables had a different filling level, e.g. when they were empty. In this case you can restart the indexserver in order to make sure that new join statistics are created. Starting with SPS 09 SAP HANA will automatically invalidate join statistics (and SQL plans) when the size of an involved table changed significantly. If you suspect problems with join statistics, you can create a join_eval trace (SAP Note 2119087) and check for lines like: JoinEvaluator.cpp(01987) : getJoinStatistics: SAPSR3:CRMM_BUAG (-1)/BUAG_GUID<->SAPP25:CRMM_BUAG_H (-1)/BUAG_GUID JoinEvaluator.cpp(01999) : jStats clSizesL:0 clCntAL:0 cntSJL:0 TTL:0 clSizesR:0 clCntAR:0 cntSJR:0 TTR:0 Zero values in the second line can indicate that join statistics were created at a time when one table was empty. |
High runtime, not reproducible in SAP HANA Studio / DBACOCKPIT | Recompile execution plan | If the runtime of a query is longer than expected and you can’t reproduce the long runtime with SAP HANA Studio or DBACOCKPIT (if bind variables are used: using a prepared statement with proper bind values), the issue can be caused by an inadequate execution plan (e.g. generated based on the bind values of the first execution or based on statistical information collected during the first execution). In this case you can check if an invalidation of the related SQL cache entry resolves the issue:
ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY '<plan_id>'
You can identify the plan ID related to a statement hash by executing SQL: “HANA_SQL_SQLCache” (STATEMENT_HASH = ‘<statement_hash>’, AGGREGATE_BY = ‘NONE’, DATA_SOURCE = ‘CURRENT’) available via SAP Note 1969700. |
Long preparation times | Optimize parsing | See SAP Note 2124112 and check if the amount and duration of preparations can be reduced (e.g. by increasing the SQL cache or using bind variables). |
High runtime with range condition on multi column index | Avoid selective ranges on multi column indexes | Range conditions like BETWEEN, “<“, “>”, “>=”, “<=” or LIKE can’t be used to restrict the search in a multi column index. As a consequence the performance can be significantly worse than expected, and the index or query design should be adjusted so that selective conditions can be evaluated without a range scan on the multi column index. |
High runtime with multiple OR concatenated ranges on indexed column | Decompress column | Due to a design limitation with SPS <= 100 SAP HANA doesn’t use an index on an advanced compressed column if multiple OR concatenated range conditions exist on the same column. As a consequence statements like
SELECT ... FROM COSP WHERE ... AND ( OBJNR BETWEEN ? AND ? OR OBJNR BETWEEN ? AND ? )
can have a long runtime and high resource consumption. As a workaround you can use only DEFAULT compression for the table. See SAP Note 2112604 for more information. |
High runtime although good single column index exists | Make sure that index column isn’t compressed with SPARSE or PREFIXED | If a column is compressed with type SPARSE or PREFIXED, an index on this column doesn’t take effect. In order to use a different compression for the column, you can run the following command:
UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE', 'COLUMNS' = ('<column_name>') )
See SAP Note 2112604 for more information related to SAP HANA compression. |
High runtime with join and TOP <n> | Avoid TOP selections on unselective joins | SAP HANA performs a significant amount of join activities on the overall data set before finally returning the TOP <n> records. Therefore you should check from an application perspective if TOP <n> selections on top of unselective joins can be avoided. |
High runtime of MIN and MAX searches | Avoid frequent MIN and MAX searches on large data volumes | Indexes in SAP HANA can’t be used to identify the maximum or minimum value of a column directly. Instead the whole column has to be scanned. Therefore you should avoid frequent MAX or MIN searches on large data volumes (e.g. by using additional selective conditions or by maintaining the MIN and MAX values independently in a separate table). |
High runtime of COUNT DISTINCT | Use SPS 09 or OLAP engine | If SAP HANA <= SPS 08 is used and a COUNT DISTINCT is executed on a column with a high number of distinct values, a rather large internal data structure is created regardless of the actual number of records that have to be processed. This can significantly increase the processing time for COUNT DISTINCT. As a workaround you can check if the SQL statement is processed more efficiently using the OLAP engine by using the USE_OLAP_PLAN hint (see SAP Note 2142945). As a permanent solution you have to upgrade to SPS 09 or higher, so that the size of the internal data structure takes the amount of processed records into account. |
High mass UPDATE runtime | Use primary key for updating | Updating a high amount of records in a single command (e.g. “UPDATE … FROM TABLE” in ABAP systems) is more efficient than performing individual UPDATEs for each record, but it still can consume significant time. A special UPDATE performance optimization is available when the UPDATE is based on a primary key. So if you suffer from long mass UPDATE runtimes you can check if you can implement an appropriate primary key. A unique index is not sufficient for this purpose, a real primary key constraint is needed. |
High runtime of TOP 1 requests | Check TOP 1 optimization | If a query with a TOP 1 restriction (e.g. “SELECT TOP 1 …”) runs much longer than expected and call stacks (e.g. via SQL: “HANA_Threads_Callstacks”, SAP Note 1969700) indicate that most of the time is spent in UnifiedTable::MVCCObject::generateOLAPBitmapMVCC, you can check if disabling the TOP 1 optimization feature can be used as a workaround:
indexserver.ini -> [search] -> qo_top_1_optimization = false |
Sporadically increased runtimes of calculation scenario accesses | Check calculation engine cache size | If accesses to calculation scenarios are sometimes slow, cache displacements can be responsible. You can use SQL: “HANA_Configuration_MiniChecks” (SAP Note 1999993, check ID 460) and SQL: “HANA_CalculationEngine_CalculationScenarios”(SAP Note 1969700) to find out more. If the cache is undersized, you can increase it using the following parameter:
indexserver.ini -> [calcengine] -> max_cache_size_bytes = <byte> |
Unexplainable long runtime | Optimize internal processing | If you experience a runtime of a SQL statement that is much higher than expected, but none of the above scenarios apply, a few more general options are left. See SAP Note 2142945 and test if the performance of SQL statements improves with certain hints (e.g. USE_OLAP_PLAN, NO_USE_OLAP_PLAN), because this can provide you with more ideas for workarounds and underlying root causes. Check if SAP Notes exist that describe a performance bug for similar scenarios. Check if the problem remains after having implemented a recent SAP HANA patch level. Open a SAP incident on component HAN-DB in order to get assistance from SAP support. |
11. Are secondary indexes required to provide an optimal performance?
SAP HANA is able to process data efficiently so that often a good performance is possible even without the use of indexes. In case of frequent, selective accesses to large tables it is nevertheless useful to create additional secondary indexes. As a rule of thumb column store indexes should be created on single columns whenever possible, because single column indexes require much less memory compared to multi column indexes.
See SAP Note 2160391 for more details.
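A minimal sketch of such a single column index, assuming a frequent, selective predicate on column "KNUMH" of table "A004" (names taken from the example in question 17; the index name is arbitrary):
-- Single column index: much smaller memory footprint than a
-- multi column index in the column store
CREATE INDEX "A004~KNUMH" ON "A004" ("KNUMH");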
12. Which advanced features exist for tuning SQL statements?
The following advanced features exist to optimize SQL statements:
Feature | SAP Note | Details |
Hints | 2142945 | If the long runtime is caused by inadequate execution plans, you can influence them by using SAP HANA hints. |
Result cache | 2014148 | If complex queries on rather static tables have to be executed repeatedly for the same result, you can consider using the query result cache. |
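A minimal sketch of hint usage, assuming an aggregating query that benefits from the OLAP engine (table and columns are placeholders; see SAP Note 2142945 for the available hints):
-- Force execution via the OLAP engine for an aggregating query
SELECT "MATNR", SUM("MENGE")
FROM "<large_table>"
GROUP BY "MATNR"
WITH HINT (USE_OLAP_PLAN);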
13. Are there standard recommendations for specific SQL statements available?
In general it is hard to provide standard recommendations because optimizations often depend on the situation on the individual system. Nevertheless there are some exceptions where the underlying root cause is of a general nature. These SQL statements are collected in the following table:
Statement hash | Statement description | Optimization |
eb82038136e28e802bd6913b38a7848c | CALL of BW_CONVERT_CLASSIC_TO_IMO_CUBE | This procedure is executed when a classic infocube is converted to an in-memory optimized infocube using transaction RSMIGRHANADB. Increased load is normal when large infocubes are converted. After the conversion of the existing infocubes is finished, executing this procedure is no longer required, so it is only a temporary activity. |
651d11661829c37bf3aa668f83da0950 42bf4a47fbf7eabb5e5b887dffd53c7c | CALL of COLLECTOR_HOST_CS_UNLOADS, INSERT into HOST_CS_UNLOADS_BASE | Disable the data collection for HOST_CS_UNLOADS as described in SAP Note 2084747. This action is also recommended in SAP Note 2147247. |
2c7032c1db3d02465b5b025642d609e0 b8b6f286b1ed1ef2e003a26c3e8e3c73 5ec9ba31bee68e09adb2eca981c03d43 5f42d3d4c911e0b34a7c60e3de8a70d2 | SELECT on M_BACKUP_CATALOG_FILES and M_BACKUP_CATALOG | These SELECTs are regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently. If you repeatedly need to open the backup console, you can increase the refresh interval (default: 3 seconds). Check if the backup catalog is unnecessarily large and delete old catalog entries if possible using: BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID ... |
68f35c58ff746e0fe131a22792ccc1b5 | CALL of BW_F_FACT_TABLE_COMPRESSION | It is normal to see a significant cumulated runtime, because it is a central procedure call for all F fact table compressions. This procedure performs BW compression on F fact tables (i.e. elimination of requests and transition of data into E fact table or dedicated F fact table partition). If F fact tables with a significant amount of records are processed, a significant runtime is expected. Otherwise you can use SQL: “HANA_Threads_ThreadSamples_FilterAndAggregation” (SAP Note 1969700) in order to check for the THREAD_DETAIL information related to this statement hash, which contains the involved table names. Based on this information you can check if you can optimize BW F fact table compression for the top tables. Be aware that the THREAD_DETAIL information is only available when the service_thread_sampling_monitor_thread_detail_enabled parameter is set to true (see SAP Note 2114710). |
91d7aff2b6fb6c513293d9359bf559a6 | CALL of DSO_ACTIVATE_PERSISTED | It is normal to see a significant cumulated runtime, because it is a central procedure call for all DSO activations. This procedure is the SAP HANA server side implementation of DSO activation in BW environments. During this activation data is moved between three involved tables (/BIC/A*40, /BIC/A*00 and /BIC/B000*). It is much more efficient than the previous, SAP application server based approach, because moving a high amount of data from the database to the application and back is no longer required. If the DSO activation in distributed environments takes longer than expected, you should check if the partitioning of the /BIC/A*40 and /BIC/A*00 tables is consistent, i.e. same partitioning and same hosts for the related partitions. You can check this DSO consistency via SQL: “HANA_BW_InconsistentDSOTables” (SAP Note 1969700). For SAP HANA optimized DSOs you can check SAP Note 1646723 for DSO activation parameters. Apart from this you can also apply optimizations on BW side, e.g. smaller requests or smaller keys. |
bb45c65f7cc79e50dbdf6bbb3c1bf67e b0f95b51bcece71350ba40ce18832179 | SELECT on M_CONNECTIONS | This SELECT is executed when a SAP ABAP work process connects to SAP HANA. Implement a sufficiently new DBSL patch level according to SAP Note 2207349 in order to replace this query with a more efficient approach based on session variables. |
fcdf8f8383886aaf02c3a45e27fbd5f2 | SELECT on M_CONNECTIONS and M_ACTIVE_STATEMENTS | This SELECT is executed if performance monitoring is scheduled via transaction /SDF/MON and the checkbox “SQL statements” is flagged. Disable this checkbox if you want to eliminate the load introduced by this statement hash. |
09214fa9f6a8b5322d329b156c86313b 28996bd0243d3b4649fcb713f43a45d7 7439e3c53e84b9f849c61d672da8cf79 84e1f3afbcb91b8e02b80546ee57c537 9ce26ae22be3174ceaffcfeee3fdd9b7 f32de4a673809ad96393ac60dfc41347 | SELECT on M_STATISTICS_LASTVALUES, UPDATE on STATISTICS_LASTVALUES | The SELECT is executed by the Solution Manager in order to read history data from the statistics server. It is particularly critical if the standalone statistics server is used. The UPDATE is executed by the standalone statistics server in order to update statistical information. Implement SAP Note 2147247 (“How can the memory requirements of the statistics server be minimized?”), so that the collection and processing of statistical data is optimized. A switch to the embedded statistics server is typically already sufficient to resolve this issue. |
0ad1a4c1c1a5844d01595f5b3cdc2977 | SELECT on M_TRACEFILE_CONTENTS | This SELECT is regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently. |
6b1d10732401fe82992d93fa91f4ae86 | SELECT FOR UPDATE on NRIV | Check if you can reduce the critical time frame between the SELECT FOR UPDATE on NRIV and the next COMMIT. Identify the involved number range objects from an application perspective and optimize their number range buffering. See SAP Note 1398444 for buffering RF_BELEG and SAP Note 1524325 for buffering RV_BELEG. Make sure that the table NRIV is located in the row store, because the frequent UPDATE operations can be handled more efficiently in the row store compared to the column store (see the sketch after this table). Be aware that bad performance for the SELECT FOR UPDATE on NRIV can also be a consequence of underlying general issues; if it happens occasionally and in combination with other issues, it can be a symptom of another problem rather than an issue itself. |
a77c6081dd733bd4641d4f42205f6c84 | SELECT FOR UPDATE on QIWKTAB | This SELECT FOR UPDATE can suffer from row store garbage collection activities if the QRFC scheduler issues a very high amount of updates on a few records of this table. This typically happens if there are STOP entries in transaction SMQ2 that never can be processed, but result in a looping scheduler check. Go to transaction SMQ2 and check if there are STOP entries in any client which can be removed. After cleanup the update frequency should significantly reduce and the row store garbage collection is able to keep up without problems. See SAP Note 2169283 for more information related to garbage collection. SAP Note 2125972 provides an application correction to reduce the amount of updates on QIWKTAB. |
6662b470dc055adfdd4c70952de9d239 93db9a151f16022e99d6c1c7694a83b0 | SELECT on RSDDSTATEVDATA | Consider reducing the execution frequency of the related Solution Manager extraction “WORKLOAD ANALYSIS (BI DATA)”. See SAP Note 706478 and delete / archive old records from RSDDSTATEVDATA. If the minimum STARTTIME in table RSDDSTATINFO is from 30 days ago, the actual retention time is 30 days; you can reduce it via parameter TCT_KEEP_OLAP_DM_DATA_N_DAYS (SAP Note 891740). |
0e30f2cd9cda54594a8c72afbb69d8fd a115fd6e8a4da78f0813cfe35ffc6d42 00218f32b03b44529da647b530bf4e6d 8a8ec2baa7873e835da66583aa0caba2 b51046ad3685e1d45034c4ba78293fd8 ec91f8ecc5030996f1e15374f16749a8 f0a4785a6ada81a04e600dc2b06f0e49 4fe60bbfbfa2979b89e8da75f5b2aac7 ca8758b296bd2321306f4fee7335cce5 | SELECT on RSICCONT, RSSELDONE, RSMONICDP, RSSTATMANPART, RSSTATMANSTATUS, TESTDATRNRPART0 | If these queries return a significant amount of records (> 100), this can be caused by a high number of existing requests. In this case you should check from a BW perspective if the number of requests in the involved data targets can be reduced. |
8437943036a2a9fd82b290b2c6eafce7 | SELECT on RSR_CACHE_FFB and RSR_CACHE_VARSH | See SAP Note 706478 and check if you can clean up old entries in RSR_CACHE tables. |
e5332b10f3a1a4215728857efc0f8eda | SELECT on SOURCE_ALERT_65_BACKUP_CATALOG | This SELECT is linked to the statistics server action Alert_Backup_Long_Log_Backup. See SAP Note 2147247 (“How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?” -> “Alert_Backup_Long_Log_Backup”) for details about optimizations. |
5ef81b3843efbe961b19fdd79c2bd86b a3bbe3e5447bc42eb3f0ece820585294 | CALL of STATISTICS_PREPARE_CALL_TIMER, UPDATE of STATISTICS_SCHEDULE | The UPDATE on STATISTICS_SCHEDULE is executed from within the procedure STATISTICS_PREPARE_CALL_TIMER. High runtimes in combination with record locks (ConditionalVariable Wait) on table STATISTICS_SCHEDULE can be caused by inefficient COMMIT handling. This problem is resolved as of Rev. 101. |
ac08e80aa8512f9669eaf56dedc02202 c6c59150977220ea4fdea412cac7d1ce 0800928edd9d43764935a4e02abbfa15 16c226e204b54efc280091df267152fd 2b7fee2d2b95e792d3890051cbb97ec9 55bb62c4b9ff3f0b32de3a4df248e18c | SELECT on STATISTICS_CURRENT_ALERTS, SELECT on STATISTICS_LAST_CHECKS | See SAP Note 2147247 (“How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?” -> STATISTICS_ALERTS_BASE) and make sure that the size of the underlying table STATISTICS_ALERTS_BASE remains on a reasonable level. |
dc571bf5eb7cad9c313c20de904ab709 d6fd6678833f9a2e25e7b53239c50e9a | CALL of STATISTICS_SCHEDULABLEWRAPPER | This procedure is a wrapper for all embedded statistics server actions like history collections and alert checks. See SAP Note 2147247 (“How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?”) for more information on how to analyze and optimize the actions. |
ba8161b6133633e2984828ce38aa669a 7f55e72c22694541b531e83d8bab8557 2b459e0d42037fe4d6880b238018a6f7 70efe3b4e470438543e6ecec418e4f02 905dbaa93a672b087c6f226bc283431d | CALL of ALERT_DEC_EXTRACTOR_STATUS, CALL of COLLECTOR_GLOBAL_DEC_EXTRACTOR_STATUS | These calls are executed by the statistics server under the _SYS_STATISTICS user. They are linked to DXC (see M_EXTRACTORS) and look for tables with names like ‘/BIC/A%AO’. Tables following this naming convention are technical tables to control the DXC DSO activation process. If the CATALOG READ privilege is not granted to _SYS_STATISTICS, the query can take quite long and return limited results. You can massively improve the runtime by granting CATALOG READ to _SYS_STATISTICS. |
371d616387247f2101fde31ed537dec0 aaff34b83d10188d08109cb52a0069ae 3f96e3762412e8d4cf4c670f2993aa8a 51bffaafaddd0ec825ecb9f5bf1b5615 5d0b4a21b0c17c08a536bec04f2825cd a3f19325031503e9e4088a8e57515cd3 1cbbe08ef2bf075daecb9e383ae74deb a5539c73611c1d0ba9e4a5df719329b8 a382ffaeafac914c422752ab9c2f9eab 81d83c08828b792684d4fd4d2580038e 1ce978ccb4cfed10e1ef12dc554c2273 54c8e15427f41b9074b4f324fdb07ee9 da7ef0fee69db516f1e048217bca39e7 c197df197e4530b3eb0dcc1367e5ba4b | DELETE on TESTDATRNRPART0 through TESTDATRNRPART4 | SAP Note 1964024 provides a correction that reduces the DML operations on the TESTDATRNRPART<id> tables for all types of databases. The optimization is available as of BW support packages SAPKW73012, SAPKW73112 and SAPKW74007. Also make sure that no data targets exist with a high number of requests. See SAP Note 2037093 for more information. |
e1cdd703df87fc61ce8163fa107162a9 | CALL SYS.TREXviaDBSL | TREXviaDBSL is used to execute queries that are not possible with plain SQL (e.g. BW queries with execution modes > 0). In the SAP HANA SQL cache both the content and the bind values are hidden, so a direct analysis and optimization is not possible. SAP Note 2125337 provides a coding correction that allows to trace bind values via an expensive statements trace. See “What are typical approaches to tune expensive SQL statements?” -> “Long runtime of TREXviaDBSL calls” for more information. |
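The sketch announced in the NRIV row above: a minimal example of checking the current store of the table and, if required, moving it to the row store (test on a non-production system first):
-- 'TRUE' indicates a column store table
SELECT TABLE_NAME, IS_COLUMN_TABLE FROM TABLES WHERE TABLE_NAME = 'NRIV';

-- Move the table to the row store; briefly locks the table
ALTER TABLE "NRIV" ALTER TYPE ROW;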
14. Is it required to create optimizer statistics in order to support optimal execution plans?
No, it is not necessary to create optimizer statistics. SAP HANA determines optimal execution plans by certain heuristics (e.g. based on unique indexes and constraints) and by ad-hoc sampling of data.
15. Are all database operations recorded in the SQL cache (M_SQL_PLAN_CACHE)?
Standard operations like SELECT, INSERT, UPDATE or DELETE are recorded, but some more specific DML (e.g. TRUNCATE) and DDL operations (e.g. ALTER) are not contained.
Starting with Revision 74, accesses from stored procedures and SQLScript to temporary tables are also no longer stored in the SQL cache (see SAP Note 2003736, “Changed the implementation of SQL queries…”).
16. Can sorting be supported by an appropriate index?
No, due to the technical structure it is not possible that a sort operation is supported by an index.
17. Is it possible to capture bind values of prepared SQL statements?
Often SQL statements are prepared with bind variables in order to avoid frequent reparses. Rather than literals you will then see question marks in the WHERE clause, e.g.:
SELECT * FROM "A004" WHERE "MANDT" = ? AND "KNUMH" = ? AND "KAPPL" = ?
In order to understand selectivities and correlation and to be able to reproduce a problem with a SQL statement it is important to know which actual bind values are used. This information can be determined based on specific traces (e.g. the ST05 SQL trace in ABAP environments).
On SAP HANA side the expensive statement trace captures bind values, which can e.g. be evaluated via SQL: “HANA_SQL_ExpensiveStatements_BindValues” (SAP Note 1969700).
Additionally, as of SPS 08 SAP HANA captures the bind values of critical SQL statements in the SQL plan cache by default. This capturing is controlled by the following parameters:
Parameter | Details |
indexserver.ini -> [sql] -> plan_cache_parameter_enabled | true: Activate capturing of bind values (for non-LOB columns) for long running SQL statements (default). false: Deactivate capturing of bind values. |
indexserver.ini -> [sql] -> plan_cache_parameter_sum_threshold | Minimum threshold for the total execution time of a SQL statement before first set of bind values is captured (in ms, default: 100000) |
indexserver.ini -> [sql] -> plan_cache_parameter_threshold | After having captured the first set of bind values for a certain SQL statement, it will capture further sets of bind values if the single execution time exceeds the parameter value (in ms, default: 100) and is higher than the single execution time of the previously captured bind values. |
indexserver.ini -> [sql] -> plan_cache_parameter_for_lob_enabled | true: Activate capturing of bind values for LOB columns for long running SQL statements; can result in significant data volume. false: Deactivate capturing of bind values for LOB columns (default). |
The captured values are stored in view M_SQL_PLAN_CACHE_PARAMETERS and can be evaluated via SQL: “HANA_SQL_StatementHash_BindValues” (SAP Note 1969700).
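A minimal sketch of adjusting these parameters online (system-wide; the threshold value is an example, not a recommendation):
-- Enable bind value capturing and lower the sum threshold to 10 s
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('sql', 'plan_cache_parameter_enabled') = 'true',
    ('sql', 'plan_cache_parameter_sum_threshold') = '10000'
WITH RECONFIGURE;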
18. How can the performance of INSERTs and data loads be tuned?
INSERTs and data loads write new data, while other operations like SELECT, UPDATE or DELETE have to work on existing data. Therefore typical performance factors are partially different. If you want to improve the performance of INSERTs and data loads, you can consider the following areas:
Area | Details |
Lock waits | See SAP Note 1999998 and optimize lock wait situations if required, e.g. situations where an INSERT has to wait for locks held by concurrent transactions. |
Columns | During an INSERT every column has to be maintained individually, so the INSERT time significantly depends on the number of table columns. |
Indexes | Every existing index slows down an INSERT operation. Check if you can reduce the number of indexes during mass INSERTs and data loads. SAP BW provides possibilities to automatically drop and recreate indexes during data loads. The primary index normally mustn’t be dropped. |
Bulk load | If a high number of records is loaded, you shouldn’t perform an INSERT for every individual record. Instead you should take advantage of bulk loading options (i.e. inserting multiple records with a single INSERT operation) whenever possible. |
Parallelism | If a high number of records is loaded, you should consider parallelism on client side, so that multiple connections to SAP HANA are used to load the data. |
Commits | Make sure that a COMMIT is executed on a regular basis when mass INSERTs are done (e.g. after each bulk of a bulk load). |
Delta merge | Usually it is advantageous to disable auto merges when a mass load is performed. See SAP Note 2057046 for more details. An extremely large delta storage can reduce the load performance and increase the memory footprint, so it is usually good to trigger individual delta merges when really large loads are done, e.g. after every 100 million records (see the sketch below this table). Avoid repeated merges of small delta storages or merges with a high amount of uncommitted data in order to avoid unnecessary overhead. |
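A minimal sketch for the delta merge recommendation above, assuming a large load into "<table_name>" (placeholder):
-- Disable auto merge for the duration of the mass load
ALTER TABLE "<table_name>" DISABLE AUTOMERGE;

-- ... bulk load in batches with regular COMMITs, triggering manual
-- merges e.g. after every ~100 million records:
MERGE DELTA OF "<table_name>";

-- Restore the default behavior after the load
ALTER TABLE "<table_name>" ENABLE AUTOMERGE;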
Typical INSERT throughputs are:
Constellation | Typical throughput |
Problem situations like long critical savepoint phases or other locks | < 500 records / second |
Normal, sequential single-row INSERTs | 1,000 – 10,000 records / second |
Highly parallelized bulk loads | 1,000,000 records / second |