Channel: SAP Notes – Sap Hana Wiki

SAP HANA Database Maintenance Revision Notes[All]

2085811 SAP HANA SPS 09 Database Revision 90
2085810 SAP HANA SPS 08 Database Revision 85
2075266 SAP HANA Platform SPS 09 Release Note
2004651 SAP HANA Platform SPS 08 Release Note
1958910 EarlyWatch Alert For HANA Database
1948334 SAP HANA Database Update Paths for Maintenance Revisions
1935871 SAP HANA SPS 06 Database Maintenance Revisions
1921675 SAP HANA Platform SPS 07 Release Note
1892593 Preparing Support Services for SAP HANA Scenarios
1329189 GoingLive check for solutions

2123714 SAP HANA SPS 08 Database Maintenance Revision 85.02
1600929 SAP BW powered by SAP HANA DB: Information
2029271 SAP HANA SPS 07 Database Maintenance Revision 74.02
2138515 SAP HANA SPS 09 Database Revision 94
2036266 SAP HANA SPS 07 Database Maintenance Revision 74.03
2036554 SAP HANA SPS 08 Database Revision 81
2175754 SAP HANA SPS 10 Database Revision 100
2043465 SAP HANA SPS 07 Database Maintenance Revision 74.04
2043509 SAP HANA and SAP NetWeaver Java on a Single Host
1898497 Versioning and delivery strategy of application function libraries
2047977 SAP HANA SPS 08 Database Revision 82
2050229 Release and Upgrade Information for Unified Demand Forecast Application Function Library (UDF AFL)
2186862 SAP HANA SPS 10 Database Revision 101
2188592 SAP HANA SPS 08 Database Maintenance Revision 85.05
2056102 Release and Upgrade Information for On-Shelf Availability Application Function Library (POS AFL)
2189594 SAP HANA SPS 09 Database Maintenance Revision 97.01
2193110 SAP HANA SPS 10 Database Revision 102
2103585 Product Component Matrix for SAP Business Planning & Consolidation 10.1, version for Net Weaver
2105127 SAP HANA SPS 09 Database Revision 91
2107518 SAP HANA SPS 08 Database Maintenance Revision 85.01
2200738 SAP HANA SPS 09 Database Maintenance Revision 97.02
2121080 SAP HANA SPS 09 Database Revision 93
2144290 SAP HANA SPS 08 Database Maintenance Revision 85.03
2148769 SAP HANA SPS 09 Database Revision 95
2004651 SAP HANA Platform SPS 08 Release Note
2004952 Migration of UDF AFL and POS AFL from SAP HANA AFL as of SAP HANA Platform SPS 08
2159166 SAP HANA SPS 09 Database Revision 96
2016740 SAP HANA SPS 07 Database Maintenance Revision 74.01
2020199 SAP HANA SPS 08 Database Revision 80
2163217 SAP HANA SPS 08 Database Maintenance Revision 85.04
2062555 SAP HANA SPS 08 Database Revision 83
2170126 SAP HANA SPS 09 Database Revision 97
2075266 SAP HANA Platform SPS 09 Release Note
2076890 SAP HANA SPS 08 Database Revision 84
2085810 SAP HANA SPS 08 Database Revision 85
2085811 SAP HANA SPS 09 Database Revision 90
2220126 SAP HANA Revision Recommendations for SAP Business Suite on HANA and SAP S/4HANA
1948334 SAP HANA Database Update Paths for Maintenance Revisions
2095862 SAP Business Planning & Consolidation 10.1 NW SP06 Central Note
2115517 SAP HANA SPS 09 Database Revision 92

The post SAP HANA Database Maintenance Revision Notes[All] appeared first on Sap Hana Wiki.


SAP HANA DB: Known issues detected in SPS09 and SPS10


SAP HANA DB: Known issues detected in SPS09

 

2194396 After upgrade or SSFS key change wrong SSFS key on System Replication secondary site
2186299 HANA Statistics Server – high memory consumption of the collectors Host_Sql_Plan_Cache and Host_Sql_Plan_Cache_Overview
2169750 SP9 script-based calculation view activation fails due to hierarchy view name change
2150265 Recovery of a tenant database failed due to clashing volume IDs
2149649 Data Corruption or Indexserver Crash after enabling Data Aging via transaction DAGPTM in SAP HANA SPS09
2143455 Indexserver Crashes During Landscape Redistribution
2140344 SAP HANA DB: Backup information is incorrect for system DB and tenant DB after converting system to a Multitenant Database Container.
2140297 SAP HANA DB: nameserver cannot start after converting system to Multitenant Database Container
2137597 SAP HANA: NULL values in LOB column after executing DDL or DML mass operation
2137522 Incorrect uniqueness check in Multi-Container Rowstore Table
2137038 SAP HANA Database: Columnstore history table cannot be accessed due to an exception during deltalog replay
2136944 SAP HANA Database: Indexserver crashes during recovery or startup
2136496 Stopping multi-tenant database container runs into timeout / service crashs
2135985 SAP HANA: Can’t unload index exception when unloading Column Store Tables
2135729 High memory consumption, extreme long startup times and system standstill due to table fragment leak
2135718 SAP HANA Database: Columnstore table cannot be loaded – Mismatch between RowID dict-size 2 and inverted index-size 3
2135596 Column Store table corrupted and cannot be accessed due to orphaned timestamp
2135446 SAP HANA database: Column Store Table with schema flexibility option cannot be accessed
2135443 SAP HANA Database: Corrupt redo log prevents Indexserver startup
2135158 Unable to access certain values of in-memory lob columns after upgrade to HANA Database SPS09
2135097 SAP HANA Database: Inconsistent MVCC information prevents Indexserver startup
2134881 Wrong results or query execution failures in MDX prior to SAP HANA Database Revision 93
2134844 SAP HANA DB: Database crashes and cannot be started after rollback of ADD COLUMN
2133638 HANA indexserver crashes due to an OOM in UndoHandler during the startup
2132353 503 Service not available for XS Engine after conversion to multi-tenant database container system
2131662 Transparent Huge Pages (THP) on SAP HANA Servers
2130083 Successive DML operations on Global Temporary Column Table may cause wrong results or crashes
2129651 Indexserver crash caused by inconsistent log position when startup
2128188 SAP HANA DB: hdbrename fails with a configured storage connector on SPS 09
2127582 SAML SSO between HANA SP09 and BI fails with error: Assertion is not intended for this service provider
2126469 table move fails with ContainerName … – $container$ not found.
2125399 SAP HANA: table consistency check returns error “Maximum rowid in table is larger than max rowid in runtime data”
2122908 Indexserver crash during database startup due to duplicate entries in deltadictionary
2118842 Failing MDX statements on HANA revision 90 and 91.
2118527 HANA Database crashes at ptime::PageHeader::isValidSlot on Revision 90 or lower
2115978 SAP HANA: MDX limitation with non-aggregatable measures
2112732 Pool/RowEngine/MonitorView allocates large amount of memory
2106836 Potential Data Loss During Table Reload
2105764 Data Inconsistency after Upgrade to SAP HANA SPS09 Revision 90
2104798 concat attributes with float / double data types to be recreated in SPS9
2101737 Recovery of a Multitenant Database Container fails
2099486 Repository migration might fail when updating from SPS7 revisions to Revision 90
2089847 Indexserver tracefile shows “expect non-empty top-k info” error

 

SAP HANA DB: Known issues detected in SPS10

2184218 SAP HANA system replication & SAP HANA multitenant database containers – no SQL connect of tenant databases possible after takeover
2187769 Rowstore index can be inconsistent after upgrade to SPS10
2193235 SAP HANA system replication is not working after a change of the master key
2198150 SAP HANA DB: SELECT on column table hangs
2205345 SAP HANA DB Client: “SQL code: -9300” occurred while accessing table
2206359 SAP HANA DB: SELECT on tables with data aging enabled returns incorrect result
2206354 SAP HANA DB: High System CPU Consumption Caused by Plan Trace

 

2194396 After upgrade or SSFS key change wrong SSFS key on System Replication secondary site

 


SAP HANA – Some Useful Replication Notes


SAP Note 2057595 FAQ: SAP HANA High Availability

SAP Note 1999880 FAQ: SAP HANA System Replication

SAP Note 2012564 HANA Support for VLAN Trunking Protocol (VTP) based on IEEE 802.1Q

SAP Note 2063657 HANA System Replication takeover decision guideline

SAP Note 2033624 System replication: Secondary system hangs during takeover

SAP Note 2053504 System replication: Hanging client processes after a takeover

SAP Note 2105185 System Replication Stopped On Statistics Server Due To OOM On Primary System

SAP Note 2053629 HANA System Replication stops after restart

SAP Note 2050830 Registering a secondary system via HANA Studio fails with error ‘remoteHost doe

SAP Note 1984882 Using HANA System Replication for Hardware Exchange with minimum/zero Downtime

SAP Note 1876398 Network configuration for System Replication in HANA SP6

SAP Note 1995412 Secondary site of System Replication runs out of disk space due to closed data

SAP Note 1945676 Correct usage of hdbnsutil -sr_unregister

SAP Note 2081563 secondary system’s replication mode and replication status changed to “UNKNOWN”

SAP Note 1834153 HANA high availability disaster tolerance config

SAP Note 2165547 FAQ: SAP HANA Database Backup & Recovery in an SAP HANA System Replication Landscape

SAP Note 1917506 HANA system replication backward compatibility

SAP Note 1913302 Connectivity suspend of Appserver during takeover

SAP Note 2219497 Landscape ID mismatch between nameserver.ini and topology.ini in HANA


2220948 – SAP HANA SPS 10 Database Maintenance Revision 102.01


Symptom

This is the SAP Release Note for SAP HANA Database Maintenance Revision 102.01 of the SAP HANA platform software Support Package Stack (SPS 10).

The DSP (Datacenter Service Point) phase of SAP HANA SPS 10 started with Revision 102, which can be used by customers running SAP HANA in production environments.
Past this Datacenter Service Point, SAP strongly recommends always using the latest SAP HANA Maintenance Revision of the same Support Package Stack, available on SAP Service Marketplace.

Please refer to the SAP HANA Revision and Maintenance Strategy document on SAP Service Marketplace for an overview regarding the SAP HANA revision and maintenance strategy.

For details about the SAP HANA revision and maintenance strategy and recommended upgrade paths between SAP HANA Revisions, Support Package Stacks, Maintenance Revisions and Datacenter Service Points, see SAP Note 2021789: “SAP HANA Revision and Maintenance Strategy”.

Please refer also to the following SAP Notes:

  • 2165826: SAP HANA Platform SPS 10 Release Note
    This is the central entry point for all information around SAP HANA Platform Support Package Stack (SPS 10), the comprised SAP HANA Revisions and subsequent SAP HANA Maintenance Revisions.
  • If you upgrade to this Revision and you’re using SAP BW on HANA, please check the ABAP-related notes which can be found in the attachments of this note.

Installation/Upgrade

  • Before you start the upgrade from Revision 64, 65, or 66, please check out SAP Hot News Note 1918267.
  • As part of the installation or the upgrade to Revision 101 or higher, the master key of the secure store in the file system (SSFS) is changed from the initial default key to an installation-specific individual key. Please check out SAP Note 2183624. To change the master key of SSFS of the hdbuserstore, please see also SAP Note 2210637.
  • Please check “SAP Note 1948334 : SAP HANA Database Update Paths for Maintenance Revisions” for the possible update paths from Maintenance Revisions to SPS Revisions.
  • If you upgrade from a Revision prior to SPS 06, first upgrade your system to the latest SPS 06 Revision (Revision 69.07) before upgrading to this Revision.
  • If you encounter a problem during the upgrade phase “Importing Delivery Units”, please check out SAP Note 1795885.
  • If you are using SAP HANA live, please check Release Note 1778607 before you start upgrading to SPS 10.
  • Before you start the upgrade to this Revision, please check out SAP Note 2159899 (Release Notes for SAP HANA Application Lifecycle Management for SAP HANA SPS 10).
  • In order to run the SAP HANA Database Revision 80 (or higher) on SLES 11, additional operating system software packages are required.
    Before you start the upgrade or the installation of Revision 80 or higher, please check out SAP Note 2001528.
  • If the SAP HANA installation or one of its AFL plugins fails, please check out SAP Note 2183500.
  • If you encounter the problem “HALM Process Engine: Data cannot be updated alert message”, please check out SAP Note 2184376.
  • If the SAP HANA studio upgrade via the update site fails on Windows 8, please check out SAP Note 2182779.
  • For a troubleshooting guide for the SAP HANA platform lifecycle management tool hdblcm, please check out SAP Note 2078425.

 

General

  • With SAP HANA SPS 09, several inconsistencies in handling the data types double / float have been fixed. This leads to a change in the string representation of float / double data types. Please check out SAP Note 2092868.
  • With SAP HANA SPS 09, several inconsistencies in handling the data types double / float have been fixed. This leads to a change in the string representation of float / double data types and may cause inconsistencies in concat attributes (such as primary keys, unique constraints) that contain float / double data type columns. Please check out SAP Note 2104798.
  • As of Revision 93, the Embedded Statistics Server is the default setting. If you are still running the old statistics server, the upgrade triggers the migration to the Embedded Statistics Server automatically. Please check out SAP Note 2091313: HANA Statistics Server – changed standard setting of the statistics server as of Revision 94.
  • Please note that when the Embedded Statistics Server is used, some program changes are required in the ABAP monitoring tools. See SAP Note 1925684 for details.
  • If the recovery of data without using the backup catalog fails with incorrect syntax near “USING”, please check out SAP Note 2157184.
  • If you are using SAP Web Dispatcher as the HTTP load balancer, additional configuration is required. Please see SAP Note 2146931 for details.
  • If you have problems creating or modifying a user in the SAP HANA web-based development workbench, please check out SAP Note 2183760.
  • If you are using the SAP HANA cockpit delivery unit “HANA_ADMIN Version 1.3.9”, the start, killing, and stopping of services do not work properly, even if successfully reported. Please check out SAP Note 2182831.
  • Please note that the virtual table function is currently not supported on multitenant databases. Please check out SAP Note 2182597.
  • The version of libnuma1 (library for Non-Uniform Memory Access) delivered with SLES 11 SP1 up to version libnuma1-2.0.3-0.4.3 contains a bug which causes the name server to crash. Please check out SAP Note 2132504.
  • You can find information about SHINE (SAP HANA Interactive Education) demo content in SAP Note 1934114.
  • You can find documentation corrections information for the SAP HANA option Advanced Data Processing in SAP Note 2183605.


How to use SAP HANA HDBSQL to execute SQL commands at OS level

HDBSQL is a command line tool for executing commands on SAP HANA databases. SAP HANA HDBSQL is also used to automate the HANA Database backups using cron scripts.

Requirement: You want to access SQL prompt using HDBSQL at OS level.

Prerequisites: You need the password of the <SID>adm OS user and a user with SAP HANA database access; in this example we connect as SYSTEM.
Steps :
  • Logon to HANA host with <SID>adm user.
  • Once you are logged in as <SID>adm, you can execute the hdbsql command directly, or you can change to the following path and execute it there:
    • cd /hana/shared/<SID>/hdbclient
  • Now execute the command
    • hdbsql  -n localhost -i 00 -u SYSTEM -p Ina123
(Screenshot: SAP HANA HDBSQL example)
  • Once you are at the hdbsql prompt, enter \s to display information about the system you are connected to.
  • For example, execute the SQL statement “SELECT * FROM SCHEMAS”.
  • Exit HDBSQL by entering the command: exit or quit or \q
You can also log on with user credentials for the secure user store (hdbuserstore) with -U <user_key>.
HDBSQL Examples:
hdbsql -n localhost -i 00 -u SYSTEM -p Ina123
hdbsql -S DEV -n localhost:30015 -u SYSTEM -p In123
hdbsql -n localhost -i 00 -u myuser -p myuserpassword “select * from sys.users”
hdbsql -U myUserSecureStore “Select table_name, schema_name from m_tables”
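The cron-driven backup automation mentioned at the start of this section can be sketched as follows. This is a minimal sketch, not a definitive script: the key name BACKUPKEY, the port 30015, and the backup prefix format are assumptions for illustration. The key is created once with hdbuserstore so the script needs no interactive password.

```shell
# Sketch of a scheduled backup via hdbsql and the secure user store.
# BACKUPKEY, port 30015 and the prefix format are assumptions.
#
# One-time setup (interactive, as <sid>adm):
#   hdbuserstore SET BACKUPKEY localhost:30015 SYSTEM <password>

PREFIX="SCHEDULED_$(date +%Y%m%d_%H%M)"
SQL="BACKUP DATA USING FILE ('$PREFIX')"

# Guarded so the sketch is harmless on hosts without an SAP HANA client.
if command -v hdbsql >/dev/null 2>&1; then
    hdbsql -U BACKUPKEY "$SQL"
fi

echo "$SQL"
```

A crontab entry such as `0 1 * * * /home/<sid>adm/backup_hana.sh` (path is an assumption) would then run the backup nightly at 01:00.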

2000002 – FAQ: SAP HANA SQL Optimization

Symptom
SQL statements run for a long time or consume a high amount of resources in terms of memory and CPU.

1. Where do I find information about SQL statement tuning provided by SAP?

The SAP HANA Troubleshooting and Performance Analysis Guide contains detailed information about tuning SQL statements.


2. Which indications exist for critical SQL statements?

The following SAP HANA alerts indicate problems in the SQL area:

Alert  Name                     SAP Note  Description
39     Long-running statements  1977262   Identifies long-running SQL statements.

SQL: “HANA_Configuration_MiniChecks” (SAP Notes 1969700, 1999993) returns a potentially critical issue (C = ‘X’) for one of the following individual checks:

Check ID Details
1110 SQL using in average > 1 connection (last day)
1112 SQL using in average > 1 thread (last hour)
1113 SQL using in average > 1 thread (last day)
1115 Longest running current SQL statement (s)
1120 Exp. stmt. trace: SQL running > 1 h (last day)

3. Which prerequisites are helpful for SQL statement tuning?

See SAP Note 2000000 for more information about an optimal SAP HANA performance configuration and useful prerequisites for performance and SQL statement optimization.

4. How can I identify a specific SQL statement?

Quite obviously, a specific SQL statement can be identified by its SQL text. Because the text can be quite long and there can be different similar SQL texts, it is useful to identify an SQL statement based on a hash value that is derived from the SQL text. This hash value is called the statement hash and can be found in column STATEMENT_HASH of SAP HANA performance views like M_SQL_PLAN_CACHE, M_EXPENSIVE_STATEMENTS or M_SERVICE_THREAD_SAMPLES.
The statement ID (column STATEMENT_ID) can’t be used for that purpose as it is based on a connection ID and not on a SQL statement text.
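As a quick illustration, the STATEMENT_HASH for a statement whose text you roughly know can be looked up in the SQL cache. This is a sketch: the secure-store key SYSKEY and the LIKE pattern '%MARA%' are assumptions; STATEMENT_HASH and STATEMENT_STRING are columns of M_SQL_PLAN_CACHE.

```shell
# Find STATEMENT_HASH values for cached statements matching a text fragment.
# Key name SYSKEY and the pattern are assumptions for illustration.
SQL="SELECT STATEMENT_HASH, LEFT(STATEMENT_STRING, 80) AS SQL_TEXT
     FROM M_SQL_PLAN_CACHE
     WHERE STATEMENT_STRING LIKE '%MARA%'"

# Guarded: only runs where an SAP HANA client is installed.
if command -v hdbsql >/dev/null 2>&1; then
    hdbsql -U SYSKEY "$SQL"
fi
```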

5. What is an expensive SQL statement?

An expensive SQL statement is a database access that shows high values in areas like:

  • High execution time: Usually the most important SQL statements in performance analysis are those with a particularly high overall runtime. The overall runtime is what counts: it does not matter whether a statement is executed once with a duration of 1000 seconds or 1 million times with an average duration of 1 ms, because in both cases the statement is responsible for a total duration of 1000 seconds.
  • High memory utilization
  • High CPU consumption
  • High lock wait time

6. How can time information in the SQL cache (M_SQL_PLAN_CACHE) be interpreted?

Tables like M_SQL_PLAN_CACHE, M_SQL_PLAN_CACHE_RESET and HOST_SQL_PLAN_CACHE contain several time related columns and it is not always clear how to interpret them:

Time column type Description
CURSOR Contains the overall cursor time including SAP HANA server time and client time
If the client performs other tasks between fetches of data, the cursor time can be much higher than the SAP HANA server time.
This can result in MVCC issues because old versions of data need to be kept until the execution is finished.
EXECUTION Contains the execution time (open + fetch + lock wait + close) on SAP HANA server side, does not include table load and preparation time
EXECUTION_OPEN Contains the open time on SAP HANA server side
Includes the actual retrieval of data in case of column store accesses with early materialization
EXECUTION_FETCH Contains the fetch time on SAP HANA server side
Includes the actual retrieval of data in case of row store accesses or late materialization
EXECUTION_CLOSE Contains the close time on SAP HANA server side
TABLE_LOAD Contains the table load time during preparation, is part of the preparation time
PREPARATION Contains the preparation time
LOCK_WAIT Contains the transaction lock wait time, internal locks are not included

Usually long EXECUTION_OPEN or EXECUTION_FETCH times are caused by retrieving the data.
From a SQL tuning perspective the most important information is the total elapsed time of the SQL statement which is the sum of preparation time and execution time.
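The rule "total elapsed time = preparation time + execution time" can be turned into a simple ranking query. This is a sketch under assumptions: the key SYSKEY is hypothetical, and the TOTAL_PREPARATION_TIME / TOTAL_EXECUTION_TIME columns of M_SQL_PLAN_CACHE are reported in microseconds (verify against the documentation of your revision).

```shell
# Rank cached statements by total elapsed time (preparation + execution).
# Key name SYSKEY is an assumption; times are assumed to be in microseconds.
SQL="SELECT TOP 10
       STATEMENT_HASH,
       EXECUTION_COUNT,
       (TOTAL_PREPARATION_TIME + TOTAL_EXECUTION_TIME) / 1000000.0 AS ELAPSED_S
     FROM M_SQL_PLAN_CACHE
     ORDER BY TOTAL_PREPARATION_TIME + TOTAL_EXECUTION_TIME DESC"

# Guarded: only runs where an SAP HANA client is installed.
if command -v hdbsql >/dev/null 2>&1; then
    hdbsql -U SYSKEY "$SQL"
fi
```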

7. How can I determine the most critical SQL statements?

For a pure technical SQL optimization it is useful to determine the most expensive SQL statements in terms of elapsed time and check the result list top down. The most expensive SQL statements in the SQL cache can be determined in the following ways:

  • M_SQL_PLAN_CACHE / HOST_SQL_PLAN_CACHE
  • SAP HANA Studio -> Administration -> Performance -> SQL Plan Cache
  • SQL: “HANA_SQL_SQLCache” (SAP Note 1969700)
  • Solution Manager self service “SQL Statement Tuning” (SAP Note 1601951)

Additionally expensive SQL statements can be captured by trace features:

Trace Environment  SAP Note Detail
ST05 ABAP SQL trace
ST12 ABAP Single transaction analysis
SQLM ABAP 1885926 SQL monitor
SQL trace SAP HANA Studio, SQL 2031647 SQL trace
Expensive statement trace SAP HANA Studio 2180165 Expensive statement trace

If you are interested in the top SQL statements in terms of memory consumption, you can activate both the expensive statement trace and statement memory tracking (SPS 08 or higher, see SAP Note 1999997 -> “Is it possible to limit the memory that can be allocated by a single SQL statement?”) and later on run SQL: “HANA_SQL_ExpensiveStatements” (SAP Note 1969700) with ORDER_BY = ‘MEMORY’.
The currently running SQL statements can be determined via SQL: “HANA_SQL_ActiveStatements” (SAP Note 1969700).
In all cases you get a list of SQL statements / STATEMENT_HASH values that can subsequently be analyzed in more detail.
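Enabling the expensive statement trace can also be scripted. The sketch below assumes the parameters `enabled` and `threshold_duration` (in microseconds) in the `[expensive_statement]` section of global.ini; verify both names against the documentation of your revision before use. The key SYSKEY is hypothetical.

```shell
# Sketch: switch on the expensive statement trace with a 1-second threshold.
# Section/parameter names and the microsecond unit are assumptions to verify.
SQL="ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
     SET ('expensive_statement', 'enabled') = 'true',
         ('expensive_statement', 'threshold_duration') = '1000000'
     WITH RECONFIGURE"

# Guarded: only runs where an SAP HANA client is installed.
if command -v hdbsql >/dev/null 2>&1; then
    hdbsql -U SYSKEY "$SQL"
fi
```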

8. How can I determine and interpret basic performance information for a particular SQL statement?

The following SQL statements available via SAP Note 1969700 can be used to collect further details for a given STATEMENT_HASH (to be specified in “Modification section” of the statements):

SQL command Details
SQL: “HANA_SQL_StatementHash_BindValues” Captured bind values from SQL cache in case of long running prepared SQL statements (SPS 08 and higher)
SQL: “HANA_SQL_StatementHash_KeyFigures” Important key figures from SQL cache, for examples see below
SQL: “HANA_SQL_StatementHash_SQLCache” SQL cache information (current and / or historic) like executions, records, execution time or preparation time
SQL: “HANA_SQL_StatementHash_SQLText” SQL statement text
SQL: “HANA_SQL_ExpensiveStatements” Important key figures from expensive statement trace (SAP Note 2180165)
SQL: “HANA_SQL_ExpensiveStatements_BindValues” Captured bind values from expensive statement trace (SAP Note 2180165)

Below you can find several typical output scenarios for SQL: “HANA_SQL_StatementHash_KeyFigures”.

Scenario 1:  Transactional lock waits

We can see that nearly the whole execution time is caused by lock wait time, so transactional locks (i.e. record or object locks) are responsible for the long runtime. Further transactional lock analysis can now be performed based on SAP Note 1999998.

Scenario 2: High number of executions

An elapsed time of 0.55 ms for 1 record is not particularly bad and for most of the SQL statements no further action is required. In this particular case the number of executions is very high, so that overall the execution time of this SQL statement is significant. Optimally the number of executions can be reduced from an application perspective. If this is not possible, further technical analysis should be performed. Really quick single row accesses can be below 0.10 ms, so there might be options for further performance improvements (e.g. index design or table store changes).

Scenario 3: High elapsed time

An execution time of 284 ms for retrieving one row from a single table is definitely longer than expected and it is very likely that an improvement can be achieved. Further analysis is required to understand the root cause of the increased runtime.

Scenario 4: High elapsed time for DML operations, no lock wait time

An elapsed time of 10 ms for inserting a single record is quite high. If DML operations have an increased elapsed time it can be caused by internal locks that are e.g. linked to the critical phase of savepoints. Further internal lock analysis can now be performed based on SAP Note 1999998.

Scenario 5: Many records

Reading about 200,000 records in less than 2 seconds is not a bad value. In the first place you should check from an application perspective if it is possible to reduce the result set or the number of executions. Apart from this there are typically also technical optimizations available to further reduce the elapsed time (e.g. delta storage or table store optimizations).

Scenario 6: High Preparation Time

Preparation time is linked to the initial parsing and compilation. You can reduce this overhead by using bind variables rather than literals so that several SQL statements with the same structure need to be parsed only once. Furthermore you can perform a more detailed analysis to understand why the preparation step is taking longer than expected.

9. Which options exist to understand the execution of a SQL statement in detail?

In order to understand how a SQL statement can be tuned, it is essential to know how it is processed. For this purpose you can use the following tools:

Tool Details
Explain High-level information for SQL execution (e.g. joins, used indexes)
  • DBACOCKPIT: Diagnostics -> Explain
  • SAP HANA Studio: SQL console -> Explain plan (right mouse click)
  • SQL: “EXPLAIN PLAN FOR …” and subsequent evaluation of explain plan table via SQL: “HANA_SQL_ExplainPlan” (SAP Note 1969700)
PlanViz Detailed graphical insight in SQL execution
  • DBACOCKPIT: Diagnostics -> Explain -> Execution trace
  • SAP HANA Studio: SQL console -> Visualize plan (right mouse click) -> Execute
Thread sample analysis High-level thread state and lock type information (e.g. useful in case of waits for internal locks which are not reflected in the “lock wait time”)
  • SQL: “HANA_Threads_ThreadSamples_FilterAndAggregation” (SAP Note 1969700)
Performance trace Detailed insight in SQL execution including SQL plan and function profiling.
See SAP Note 1787489 for more details.
User-specific trace Granular trace information for configurable components of the SQL statement execution
  • SAP HANA Studio: Administration -> Trace configuration -> User-Specific Trace

Trace components depend on individual scenario, usually only one component is activated, most common components are:

  • trex_qo: Query optimizer trace (see the SAP HANA Troubleshooting and Performance Analysis Guide)
  • join_eval: Join trace (see the SAP HANA Troubleshooting and Performance Analysis Guide)
  • udivmgr: Trace of MVCC related activities
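The EXPLAIN option listed above can be sketched from the command line: generate a plan for one statement, then read it back from the explain plan table. The statement name EXAMPLE1, the example query, and the key SYSKEY are assumptions; the OPERATOR_NAME / OPERATOR_DETAILS columns follow the SAP HANA EXPLAIN PLAN documentation.

```shell
# Sketch: explain one statement, then fetch the plan rows for it.
# Statement name, example query and key name are assumptions.
PLAN_SQL="EXPLAIN PLAN SET STATEMENT_NAME = 'EXAMPLE1' FOR
          SELECT * FROM SYS.TABLES WHERE SCHEMA_NAME = 'SYS'"
READ_SQL="SELECT OPERATOR_NAME, OPERATOR_DETAILS
          FROM SYS.EXPLAIN_PLAN_TABLE
          WHERE STATEMENT_NAME = 'EXAMPLE1'"

# Guarded: only runs where an SAP HANA client is installed.
if command -v hdbsql >/dev/null 2>&1; then
    hdbsql -U SYSKEY "$PLAN_SQL"
    hdbsql -U SYSKEY "$READ_SQL"
fi
```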

10. What are typical approaches to tune expensive SQL statements?

Depending on the results of the analysis of an SQL statement the following optimizations can be considered.

Possible symptoms Optimization Optimization Details
High number of executions Reduce high number of executions Check from application perspective if the number of executions can be reduced, e.g. by avoiding identical SELECTs or adjusting the application logic.
High number of selected records Reduce number of selected records Check from application perspective if you can restrict the number of selected records by adding further selection conditions or modifying the application logic.

Check if you can reduce the amount of relevant data by archiving or cleanup of basis tables (SAP Note 706478).

High lock wait time due to record locks Reduce record lock contention Check from application perspective if you can reduce concurrent changes of same records.

Check from application perspective if you can reduce the critical time frame between change operation and next COMMIT.
See SAP Note 1999998 for more information.

High lock wait time due to object locks Reduce object lock contention Check if you can schedule critical offline DDL operations less often or at times of lower workload.

Check if you can use online instead of offline DDL operations.
See SAP Note 1999998 for more information.

High total execution time, significant amount of thread samples pointing to internal lock waits Reduce internal lock contention Check if you can reduce the internal lock wait time (SAP Note 1999998).
Execution time higher than expected, optimal index doesn’t exist
Table and column scan related thread methods and details like searchDocumentsIterateDocidsParallel, IndirectScanBvOutJob<range>, IndirectScanVecOutJob<range>, sparseSearch, SparseScanRangeVecJob
Synchronize index and application design Check from an application perspective if you can adjust the database request so that existing indexes can be used efficiently.

Create a new index or adjust an existing index so that the SQL statement can be processed efficiently:

  • Make sure that selective fields are contained in the index
  • Use indexes that don’t contain more fields than specified in the WHERE clause
Execution time higher than expected, negative impact by existing partitioning Synchronize partitioning and application design Make sure that existing partitioning optimally supports the most important SQL statements (e.g. via partition pruning, load distribution and minimization of inter-host communication). See SAP Note 2044468 for more information.
If it is technically possible to disable partitioning, you may check if the performance is better without it.
Long runtime with OR condition having selection conditions on both tables Avoid OR in this context If an OR concatenation is used and the terms reference columns of more than one table, a significant overhead can be caused by the fact that a cartesian product of both individual result sets is required in some cases.
If you face performance issues in this context, you can check if the problem disappears with a simplified SQL statement accessing only one table. If yes, you should check if you can avoid joins with OR concatenated selection conditions on both tables.
Execution time higher than expected, significant portion for accessing delta storage Optimize delta storage Make sure that the auto merge mechanism is properly configured. See SAP Note 2057046 for more details.
Consider smart merges controlled by the application scenario to make sure that the delta storage is minimized before critical processing starts.
Execution time slightly higher than expected Change to row store In general the number of tables in the row store should be kept on a small level, but under the following circumstances it is an option to check if the performance can be optimized by moving a table to the row store:
  • Involved table located in column store and not too large (<= 2 GB)
  • Many records with many columns selected or a very high number of quick accesses with small result sets performed
Execution time sporadically increased Avoid resource bottlenecks Check if peaks correlate to resource bottlenecks (CPU, memory, paging) and eliminate bottleneck situations.
Execution time higher than expected, significant portion for sorting (trex_qo trace: doSort) Optimize sorting Sort operations (e.g. related to ORDER BY) are particularly expensive if all of the following conditions are fulfilled:
  • Sorting of a high number of records
  • Sorting of more than one column
  • Leading sort column has rather few (but more than 1) distinct values

In order to optimize the sort performance you can check from an application side if you can reduce the number of records to be sorted (e.g. by adding further selection conditions) or if you can put a column with a high number of distinct values at the beginning of the ORDER BY.
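A hypothetical illustration: if column "STATUS" has only a handful of distinct values while "DOCNR" has many, the second variant typically sorts considerably faster, provided the application can tolerate the changed sort order:

SELECT * FROM "DOCS" ORDER BY "STATUS", "DOCNR"
SELECT * FROM "DOCS" ORDER BY "DOCNR", "STATUS"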

Long BW query runtime
Long execution time of TREXviaDBSL calls
Long execution time of TREXviaDBSLWithParameter calls
Optimize BW queries If you are not yet aware of the actual query, get in touch with the application team to understand which application operations result in long running TREXviaDBSL calls and start the analysis from that perspective. Once you are able to reproduce the issue, more detailed traces can be activated as required.
BW allows executing queries in different execution modes which control the utilization of SAP HANA internal engines. Furthermore it is possible and usually recommended to convert infocubes to the HANA optimized type. See BW on HANA and the Query Execution Mode and SAP Notes 1682992 and 1734002 for further information and check if modifying the execution mode and / or the infocube type improves the performance of BW queries. See SAP Note 2016832 that describes how to adjust the query execution mode.
Long BW DTP and transformation runtime using SAP HANA execution mode Optimize BW DTP and transformation setup See SAP Notes 2033679 (BW 7.40 SP 05 – 07) and 2067912 (BW 7.40 SP 08 – 10) in order to make sure that all recommended fixes are implemented. For example, SAP Note 2133987 provides a coding correction for SAPKW74010 in order to speed up delta extractions from a DSO.
See SAP Note 2057542 and consider SAP HANA based transformations available as of BW 7.40 SP 05.
Long runtime of FOR ALL ENTRIES query Adjust FOR ALL ENTRIES transformation If a FOR ALL ENTRIES selection in SAP ABAP environments takes long and consumes a lot of resources, you can consider adjusting the way the database requests are generated.
  • If multiple columns in the WHERE condition reference the FOR ALL ENTRIES list and a SQL statement based on OR concatenations is generated, you can use the DBSL hint dbsl_equi_join as described in SAP Notes 1622681 and 1662726. Be aware that this option will only work if references to the FOR ALL ENTRIES list are on consecutive columns in the WHERE clause and that only “=” references are allowed. If these conditions are not fulfilled, a termination with a short dump will happen.
  • In order to take optimal advantage of the dbsl_equi_join hint in BW, you have to make sure that SAP Notes 2007363 (7.40 SPS 09) and 2020193 (7.40 SPS 08) are implemented.

See SAP Note 2142945 for more information regarding SAP HANA hints.

Long runtime of query on SAP HANA DDIC objects Assign CATALOG READ If the CATALOG READ privilege is not assigned to a user, queries on SAP HANA DDIC objects like TABLES, INDEXES or TABLE_COLUMNS can take much longer, because SAP HANA needs to filter the relevant (own) data and suppress the display of information from other schemas. Make sure that CATALOG READ is assigned (either directly or indirectly via roles) to users having to access SAP HANA DDIC objects. You can use SQL: “HANA_Security_GrantedRolesAndPrivileges” (SAP Note 1969700) to check if this privilege is already assigned.
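The privilege can be granted directly with the following command (user name is a placeholder):

GRANT CATALOG READ TO <user_name>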
Long runtime of queries on monitoring views Use fewer and larger selections

Select information from other sources

For technical reasons accesses to monitoring views like M_TABLE_LOCATIONS or M_TABLE_PERSISTENCE_LOCATIONS often scan the complete underlying structures regardless of the WHERE clause. Thus, you should avoid frequent selections of small amounts of data (e.g. one access per table) and use fewer selections reading larger amounts of data (e.g. for all tables of a schema at once).
Alternatively check if you can select the required information from other sources, e.g. monitoring view M_CS_TABLES.
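For example, instead of selecting from M_CS_TABLES once per table, you can read all tables of a schema in a single statement (schema name chosen for illustration only):

SELECT TABLE_NAME, RECORD_COUNT, MEMORY_SIZE_IN_TOTAL
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SAPSR3'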
Wrong join order Update join statistics An obviously wrong join order can be caused by problems with join statistics. Join statistics are created on the fly when two columns are joined the first time. Up to SPS 08 the initially created join statistics are kept until SAP HANA is restarted. This can cause trouble if join statistics were created at a time when the involved tables had a different filling level, e.g. when they were empty. In this case you can restart the indexserver in order to make sure that new join statistics are created. Starting with SPS 09 SAP HANA will automatically invalidate join statistics (and SQL plans) when the size of an involved table changed significantly.
If you suspect problems with join statistics, you can create a join_eval trace (SAP Note 2119087) and check for lines like:
JoinEvaluator.cpp(01987) : getJoinStatistics: SAPSR3:CRMM_BUAG (-1)/BUAG_GUID<->SAPP25:CRMM_BUAG_H (-1)/BUAG_GUID
JoinEvaluator.cpp(01999) : jStats   clSizesL:0 clCntAL:0 cntSJL:0 TTL:0 clSizesR:0 clCntAR:0 cntSJR:0 TTR:0

Zero values in the second line can indicate that join statistics were created at a time when one table was empty.
Join statistics are a SAP HANA internal concept and so they can’t be displayed or adjusted.

High runtime, not reproducible in SAP HANA Studio / DBACOCKPIT Recompile execution plan If the runtime of a query is longer than expected and you can’t reproduce the long runtime with SAP HANA Studio or DBACOCKPIT (if bind variables are used: using a prepared statement with proper bind values), the issue can be caused by an inadequate execution plan (e.g. generated based on the bind values of the first execution or based on statistical information collected during the first execution). In this case you can check if an invalidation of the related SQL cache entry can resolve the issue:
ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY '<plan_id>'

You can identify the plan ID related to a statement hash by executing SQL: “HANA_SQL_SQLCache” (STATEMENT_HASH = ‘<statement_hash>’, AGGREGATE_BY = ‘NONE’, DATA_SOURCE = ‘CURRENT’) available via SAP Note 1969700.
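Alternatively you can read the plan ID directly from the SQL cache, assuming the STATEMENT_HASH column is available in your revision:

SELECT PLAN_ID FROM M_SQL_PLAN_CACHE WHERE STATEMENT_HASH = '<statement_hash>'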
Depending on the factors considered during the next parsing (e.g. set of bind values, dynamic statistics information) a better execution plan may be generated. It can happen that you have to repeat the RECOMPILE command until a good set of bind values is parsed.
Even if the problem is resolved after the RECOMPILE, there is no guarantee that it is permanently fixed, because after every eviction or restart a new parsing happens from scratch. If the problem is supposed to be linked to bind values, you can consider adjusting the application so that different classes of bind values are executed with slightly different SQL statements, allowing SAP HANA to parse each statement individually.
As of SPS 09 more frequent reparses happen and so the situation can improve.
As of SPS 09 you can also think about the IGNORE_PLAN_CACHE hint as a last resort (see SAP Note 2142945). Be aware that this will make the performance more predictable, but due to the permanent parsing requirements the quick executions can significantly slow down. So it should only be used in exceptional cases.

Long preparation times Optimize parsing See SAP Note 2124112 and check if the amount and duration of preparations can be reduced (e.g. by increasing the SQL cache or using bind variables).
High runtime with range condition on multi column index Avoid selective ranges on multi column indexes Range conditions like BETWEEN, “<“, “>”, “>=”, “<=” or LIKE can’t be used to restrict the search in a multi column index. As a consequence the performance can be significantly worse than expected. Possible solutions are:
  • Use a single column index rather than a multi column index if possible.
  • Use “=” or “IN” instead of a range condition.
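For example, if the range condition on a single column is the selective part of the WHERE clause, a dedicated single column index may be the better choice (table, index and column names hypothetical):

CREATE INDEX "TAB1~RNG" ON "TAB1" ("RANGE_COL")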
High runtime with multiple OR concatenated ranges on indexed column Decompress column Due to a design limitation in revisions <= 100, SAP HANA doesn’t use an index on an advanced compressed column if multiple OR concatenated range conditions exist on the same column. As a consequence statements like
SELECT ... FROM COSP
WHERE ... AND ( OBJNR BETWEEN ? AND ? OR OBJNR BETWEEN ? AND ? )

can have a long runtime and high resource consumption. As a workaround you can only use DEFAULT compression for the table. See SAP Note 2112604 for more information.

High runtime although good single column index exists Make sure that index column isn’t compressed with SPARSE or PREFIXED If a column is compressed with type SPARSE or PREFIXED, an index on this column doesn’t take effect. In order to use a different compression for the column, you can run the following command:
UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE', 'COLUMNS' = ('<column_name>') )

See SAP Note 2112604 for more information related to SAP HANA compression.

High runtime with join and TOP <n> Avoid TOP selections on unselective joins If a TOP <n> restriction is combined with an unselective join, SAP HANA can perform a significant amount of join activities on the overall data set before finally returning the TOP <n> records. Therefore you should consider the following options:
  • Provide selective conditions in the WHERE clause so that the amount of joined data is limited.
  • Avoid TOP <n> selections in unselective joins
  • Optimize the join processing, e.g. by defining optimal indexes on the join columns
High runtime of MIN and MAX searches Avoid frequent MIN and MAX searches on large data volumes Indexes in SAP HANA can’t be used to identify the maximum or minimum value of a column directly. Instead the whole column has to be scanned. Therefore you should avoid frequent MAX or MIN searches on large data volumes (e.g. by using additional selective conditions or by maintaining the MIN and MAX values independently in a separate table).
High runtime of COUNT DISTINCT Use SPS 09 or OLAP engine If SAP HANA <= SPS 08 is used and a COUNT DISTINCT is executed on a column with a high number of distinct values, a rather large internal data structure is created regardless of the actual number of records that have to be processed. This can significantly increase the processing time for COUNT DISTINCT. As a workaround you can check if the SQL statement is processed more efficiently using the OLAP engine by using the USE_OLAP_PLAN hint (see SAP Note 2142945). As a permanent solution you have to upgrade to SPS 09 or higher, so that the size of the internal data structure takes the amount of processed records into account.
High mass UPDATE runtime Use primary key for updating Updating a high amount of records in a single command (e.g. “UPDATE … FROM TABLE” in ABAP systems) is more efficient than performing individual UPDATEs for each record, but it still can consume significant time. A special UPDATE performance optimization is available when the UPDATE is based on a primary key. So if you suffer from long mass UPDATE runtimes you can check if you can implement an appropriate primary key. A unique index is not sufficient for this purpose, a real primary key constraint is needed.
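Provided that the involved columns are unique and defined as NOT NULL, a primary key constraint can be added like this (table, constraint and column names hypothetical):

ALTER TABLE "TAB1" ADD CONSTRAINT "TAB1_PK" PRIMARY KEY ("MANDT", "DOCNR")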
High runtime of TOP 1 requests Check TOP 1 optimization If a query with a TOP 1 restriction (e.g. “SELECT TOP 1 …”) runs much longer than expected and call stacks (e.g. via SQL: “HANA_Threads_Callstacks”, SAP Note 1969700) indicate that most of the time is spent in UnifiedTable::MVCCObject::generateOLAPBitmapMVCC, you can check if disabling the TOP 1 optimization feature can be used as a workaround:
indexserver.ini -> [search] -> qo_top_1_optimization = false
Sporadically increased runtimes of calculation scenario accesses Check calculation engine cache size If accesses to calculation scenarios are sometimes slow, cache displacements can be responsible. You can use SQL: “HANA_Configuration_MiniChecks” (SAP Note 1999993, check ID 460) and SQL: “HANA_CalculationEngine_CalculationScenarios” (SAP Note 1969700) to find out more. If the cache is undersized, you can increase it using the following parameter:
indexserver.ini -> [calcengine] -> max_cache_size_bytes = <byte>
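Parameters like the one above can be changed online via ALTER SYSTEM ALTER CONFIGURATION, e.g. (the 1 GB value is only an illustration, not a recommendation):

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('calcengine', 'max_cache_size_bytes') = '1073741824' WITH RECONFIGURE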
Unexplainable long runtime Optimize internal processing If you experience a runtime of a SQL statement that is much higher than expected, but none of the above scenarios apply, a few more general options are left.
See SAP Note 2142945 and test if the performance of SQL statements improves with certain hints (e.g. USE_OLAP_PLAN, NO_USE_OLAP_PLAN), because this can provide you with more ideas for workarounds and underlying root causes.
Check if SAP Notes exist that describe a performance bug for similar scenarios.
Check if the problem remains after having implemented a recent SAP HANA patch level.
Open a SAP incident on component HAN-DB in order to get assistance from SAP support.

11. Are secondary indexes required to provide optimal performance?

SAP HANA is able to process data efficiently so that often a good performance is possible even without the use of indexes. In case of frequent, selective accesses to large tables it is nevertheless useful to create additional secondary indexes. As a rule of thumb column store indexes should be created on single columns whenever possible, because single column indexes require much less memory compared to multi column indexes.
See SAP Note 2160391 for more details.

12. Which advanced features exist for tuning SQL statements?

The following advanced features exist to optimize SQL statements:

Feature SAP Note Details
Hints 2142945 If the long runtime is caused by inadequate execution plans, you can influence them by using SAP HANA hints.
Result cache 2014148 If complex queries on rather static tables have to be executed repeatedly for the same result, you can consider using the query result cache.

13. Are there standard recommendations for specific SQL statements available?

In general it is hard to provide standard recommendations because optimizations often depend on the situation on the individual system. Nevertheless there are some exceptions where the underlying root cause is of a general nature. These SQL statements are collected in the following table:

Statement hash Statement description Optimization
eb82038136e28e802bd6913b38a7848c CALL of BW_CONVERT_CLASSIC_TO_IMO_CUBE This procedure is executed when a classic infocube is converted to an in-memory optimized infocube using transaction RSMIGRHANADB. Increased load is normal when large infocubes are converted. After the conversion of the existing infocubes is finished, executing this procedure is no longer required, so it is only a temporary activity.
651d11661829c37bf3aa668f83da0950
42bf4a47fbf7eabb5e5b887dffd53c7c
CALL of COLLECTOR_HOST_CS_UNLOADS
INSERT into HOST_CS_UNLOADS_BASE
Disable the data collection for HOST_CS_UNLOADS as described in SAP Note 2084747. This action is also recommended in SAP Note 2147247.
2c7032c1db3d02465b5b025642d609e0
b8b6f286b1ed1ef2e003a26c3e8e3c73
5ec9ba31bee68e09adb2eca981c03d43
5f42d3d4c911e0b34a7c60e3de8a70d2
SELECT on M_BACKUP_CATALOG_FILES and M_BACKUP_CATALOG These SELECTs are regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently.
If you repeatedly need to open the backup console, you can increase the refresh interval (default: 3 seconds).

Check if the backup catalog is unnecessarily large and delete old catalog entries if possible using:

BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID ...
68f35c58ff746e0fe131a22792ccc1b5 CALL of BW_F_FACT_TABLE_COMPRESSION It is normal to see a significant cumulated runtime, because it is a central procedure call for all F fact table compressions.
This procedure performs BW compression on F fact tables (i.e. elimination of requests and transition of data into E fact table or dedicated F fact table partition). If F fact tables with a significant amount of records are processed, a significant runtime is expected. Otherwise you can use SQL: “HANA_Threads_ThreadSamples_FilterAndAggregation” (SAP Note 1969700) in order to check for the THREAD_DETAIL information related to this statement hash, which contains the involved table names. Based on this information you can check if you can optimize BW F fact table compression for the top tables. Be aware that the THREAD_DETAIL information is only available when the service_thread_sampling_monitor_thread_detail_enabled parameter is set to true (see SAP Note 2114710).
91d7aff2b6fb6c513293d9359bf559a6 CALL of DSO_ACTIVATE_PERSISTED It is normal to see a significant cumulated runtime, because it is a central procedure call for all DSO activations.
This procedure is the SAP HANA server side implementation of DSO activation in BW environments. During this activation data is moved between three involved tables (/BIC/A*40, /BIC/A*00 and /BIC/B000*). It is much more efficient than the previous SAP application server based approach, because moving a high amount of data from the database to the application and back is no longer required.
If the DSO activation in distributed environments takes longer than expected, you should check if the partitioning of the /BIC/A*40 and /BIC/A*00 tables is consistent, i.e. same partitioning and same hosts for the related partitions. You can check this DSO consistency via SQL: “HANA_BW_InconsistentDSOTables” (SAP Note 1969700).
For SAP HANA optimized DSOs you can check SAP Note 1646723 for DSO activation parameters.
Apart from this you can also apply optimizations on BW side, e.g. smaller requests or smaller keys.
bb45c65f7cc79e50dbdf6bbb3c1bf67e
b0f95b51bcece71350ba40ce18832179
SELECT on M_CONNECTIONS This SELECT is executed when a SAP ABAP work process connects to SAP HANA. Implement a sufficiently new DBSL patch level according to SAP Note 2207349 in order to replace this query with a more efficient approach based on session variables.
fcdf8f8383886aaf02c3a45e27fbd5f2 SELECT on M_CONNECTIONS and M_ACTIVE_STATEMENTS This SELECT is executed if performance monitoring is scheduled via transaction /SDF/MON and the checkbox “SQL statements” is flagged. Disable this checkbox if you want to eliminate the load introduced by this statement hash.
09214fa9f6a8b5322d329b156c86313b
28996bd0243d3b4649fcb713f43a45d7
7439e3c53e84b9f849c61d672da8cf79
84e1f3afbcb91b8e02b80546ee57c537
9ce26ae22be3174ceaffcfeee3fdd9b7f32de4a673809ad96393ac60dfc41347
SELECT on M_STATISTICS_LASTVALUES

UPDATE on STATISTICS_LASTVALUES

This SELECT is executed by the Solution Manager in order to read history data from the statistics server. It is particularly critical if the standalone statistics server is used. The UPDATE is executed by the standalone statistics server in order to update statistical information.
Implement SAP Note 2147247 (“How can the memory requirements of the statistics server be minimized?”), so that the collection and processing of statistical data is optimized. A switch to the embedded statistics server is typically already sufficient to resolve this issue.
0ad1a4c1c1a5844d01595f5b3cdc2977 SELECT on M_TRACEFILE_CONTENTS This SELECT is regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently.
6b1d10732401fe82992d93fa91f4ae86 SELECT FOR UPDATE on NRIV
Check if you can reduce the critical time frame between SELECT FOR UPDATE on NRIV and the next COMMIT. Identify the involved number range objects from application perspective and optimize their number range buffering.
See SAP Note 1398444 for buffering RF_BELEG and SAP Note 1524325 for buffering RV_BELEG.
Make sure that the table NRIV is located in row store, because the frequent UPDATE operations can be handled more efficiently in row store compared to column store.
Be aware that bad performance for the SELECT FOR UPDATE on NRIV can also be a consequence of some underlying general issues, so if it happens occasionally and in combination with some other issues, it can be a symptom for another problem and not an issue itself.
a77c6081dd733bd4641d4f42205f6c84 SELECT FOR UPDATE on QIWKTAB This SELECT FOR UPDATE can suffer from row store garbage collection activities if the QRFC scheduler issues a very high amount of updates on a few records of this table. This typically happens if there are STOP entries in transaction SMQ2 that never can be processed, but result in a looping scheduler check. Go to transaction SMQ2 and check if there are STOP entries in any client which can be removed. After cleanup the update frequency should significantly reduce and the row store garbage collection is able to keep up without problems. See SAP Note 2169283 for more information related to garbage collection.
SAP Note 2125972 provides an application correction to reduce the amount of updates on QIWKTAB.
6662b470dc055adfdd4c70952de9d239
93db9a151f16022e99d6c1c7694a83b0
SELECT on RSDDSTATEVDATA
Consider reducing the execution frequency of the related Solution Manager extraction “WORKLOAD ANALYSIS (BI DATA)”.
See SAP Note 706478 and delete / archive old records from RSDDSTATEVDATA. If the minimum STARTTIME in table RSDDSTATINFO is from 30 days ago, the actual retention time is 30 days. You can reduce it via parameter TCT_KEEP_OLAP_DM_DATA_N_DAYS (SAP Note 891740).
0e30f2cd9cda54594a8c72afbb69d8fd
a115fd6e8a4da78f0813cfe35ffc6d4200218f32b03b44529da647b530bf4e6d8a8ec2baa7873e835da66583aa0caba2b51046ad3685e1d45034c4ba78293fd8
ec91f8ecc5030996f1e15374f16749a8
f0a4785a6ada81a04e600dc2b06f0e494fe60bbfbfa2979b89e8da75f5b2aac7

ca8758b296bd2321306f4fee7335cce5
2aeb4f7ffd47da8917a03f15a57f411a

SELECT on RSICCONT

SELECT on RSSELDONE

SELECT on RSMONICDP

SELECT on RSSTATMANPART

SELECT on RSSTATMANSTATUS

SELECT on TESTDATRNRPART0

If these queries return a significant number of records (> 100), this might be caused by a high number of existing requests. In this case you can check the following:
  • Run SQL: “HANA_BW_DataTargets” (MIN_REQUESTS = 10000) available via SAP Note 1969700 or check table RSMDATASTATE_EXT manually for data targets with REQUESTS_ALL > 10000.
  • See SAP Note 2037093 and reduce the request lists of data targets with more than 10,000 requests using programs like RSSM_AUTODEL_REQU_MASTER_TEXT and RSSM_REDUCE_REQUESTLIST.
8437943036a2a9fd82b290b2c6eafce7 SELECT on RSR_CACHE_FFB and RSR_CACHE_VARSH See SAP Note 706478 and check if you can clean up old entries in RSR_CACHE tables.
e5332b10f3a1a4215728857efc0f8eda SELECT on SOURCE_ALERT_65_BACKUP_CATALOG This SELECT is linked to the statistics server action Alert_Backup_Long_Log_Backup. See SAP Note 2147247 (“How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?” -> “Alert_Backup_Long_Log_Backup”) for details about optimizations.
5ef81b3843efbe961b19fdd79c2bd86b

a3bbe3e5447bc42eb3f0ece820585294

CALL of STATISTICS_PREPARE_CALL_TIMER

UPDATE of STATISTICS_SCHEDULE

The UPDATE on STATISTICS_SCHEDULE is executed from within the procedure STATISTICS_PREPARE_CALL_TIMER. High runtimes in combination with record locks (ConditionalVariable Wait) on table STATISTICS_SCHEDULE can be caused by inefficient COMMIT handling. This problem is resolved as of Rev. 101.
ac08e80aa8512f9669eaf56dedc02202
c6c59150977220ea4fdea412cac7d1ce
0800928edd9d43764935a4e02abbfa15
16c226e204b54efc280091df267152fd
2b7fee2d2b95e792d3890051cbb97ec9
55bb62c4b9ff3f0b32de3a4df248e18c
SELECT on STATISTICS_CURRENT_ALERTS

SELECT on STATISTICS_LAST_CHECKS

See SAP Notes 2147247 (“How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?” -> STATISTICS_ALERTS_BASE) and make sure that the size of the underlying table STATISTICS_ALERTS_BASE remains on a reasonable level.
dc571bf5eb7cad9c313c20de904ab709
d6fd6678833f9a2e25e7b53239c50e9a
CALL of STATISTICS_SCHEDULABLEWRAPPER This procedure is a wrapper for all embedded statistics server actions like history collections and alert checks. See SAP Note 2147247 (“How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?”) for more information how to analyze and optimize the actions.
ba8161b6133633e2984828ce38aa669a
7f55e72c22694541b531e83d8bab8557
2b459e0d42037fe4d6880b238018a6f7
70efe3b4e470438543e6ecec418e4f02
905dbaa93a672b087c6f226bc283431d
CALL of ALERT_DEC_EXTRACTOR_STATUS

CALL of COLLECTOR_GLOBAL_DEC_EXTRACTOR_STATUS
SELECT on GLOBAL_DEC_EXTRACTOR_STATUS_BASE
SELECT on TABLES

These CALLs and SELECTs are executed by the statistics server under the _SYS_STATISTICS user. They are linked to DXC (see M_EXTRACTORS) and look for tables with names like ‘/BIC/A%AO’. Tables following this naming convention are technical tables to control the DXC DSO activation process. If the CATALOG READ privilege is not granted to _SYS_STATISTICS, the query can take quite long and return limited results. You can massively improve the runtime by granting CATALOG READ to _SYS_STATISTICS.
371d616387247f2101fde31ed537dec0
aaff34b83d10188d08109cb52a0069ae
3f96e3762412e8d4cf4c670f2993aa8a
51bffaafaddd0ec825ecb9f5bf1b5615
5d0b4a21b0c17c08a536bec04f2825cd
a3f19325031503e9e4088a8e57515cd3
1cbbe08ef2bf075daecb9e383ae74deb
a5539c73611c1d0ba9e4a5df719329b8
a382ffaeafac914c422752ab9c2f9eab
81d83c08828b792684d4fd4d2580038e
1ce978ccb4cfed10e1ef12dc554c2273
54c8e15427f41b9074b4f324fdb07ee9
da7ef0fee69db516f1e048217bca39e7
c197df197e4530b3eb0dcc1367e5ba4b
DELETE on TESTDATRNRPART0

DELETE on TESTDATRNRPART1

DELETE on TESTDATRNRPART2

DELETE on TESTDATRNRPART3

DELETE on TESTDATRNRPART4
DELETE on TESTDATRNRPART5
DELETE on TESTDATRNRPART6
INSERT into TESTDATRNRPART0
INSERT into TESTDATRNRPART1
INSERT into TESTDATRNRPART2

SAP Note 1964024 provides a correction that reduces the DML operations on the TESTDATRNRPART<id> tables for all types of databases. The optimization is available as of BW support packages SAPKW73012, SAPKW73112 and SAPKW74007.
Also make sure that no data targets exist with a high number of requests. See SAP Note 2037093 for more information.
e1cdd703df87fc61ce8163fa107162a9 CALL  SYS.TREXviaDBSL TREXviaDBSL is used to execute queries that are not possible with plain SQL (e.g. BW queries with execution modes > 0). In the SAP HANA SQL cache both the content and the bind values are hidden, so a direct analysis and optimization is not possible.
SAP Note 2125337 provides a coding correction that allows tracing bind values via the expensive statements trace.
See “What are typical approaches to tune expensive SQL statements?” -> “Long runtime of TREXviaDBSL calls” for more information.

14. Is it required to create optimizer statistics in order to support optimal execution plans?

No, it is not necessary to create optimizer statistics. SAP HANA determines optimal execution plans by certain heuristics (e.g. based on unique indexes and constraints) and by ad-hoc sampling of data.

15. Are all database operations recorded in the SQL cache (M_SQL_PLAN_CACHE)?

Standard operations like SELECT, INSERT, UPDATE or DELETE are recorded, but some more specific DML (e.g. TRUNCATE) and DDL operations (e.g. ALTER) are not contained.
Starting with Revision 74 also accesses from stored procedures and SQLScript to temporary tables are no longer stored in the SQL cache (see SAP Note 2003736, “Changed the implementation of SQL queries…”).

16. Can sorting be supported by an appropriate index?

No, due to the technical structure it is not possible that a sort operation is supported by an index.

17. Is it possible to capture bind values of prepared SQL statements?

Often SQL statements are prepared with bind variables in order to avoid frequent reparses. Rather than literals you will then see question marks in the WHERE clause, e.g.:

SELECT
  *
FROM
  "A004"
WHERE
  "MANDT" = ? AND
  "KNUMH" = ? AND
  "KAPPL" = ?

In order to understand selectivities and correlation and to be able to reproduce a problem with a SQL statement it is important to know which actual bind values are used. This information can be determined based on specific traces (e.g. the ST05 SQL trace in ABAP environments).
On SAP HANA side the expensive statements trace captures bind values, which can be evaluated e.g. via SQL: “HANA_SQL_ExpensiveStatements_BindValues” (SAP Note 1969700).
Additionally, as of SPS 08, SAP HANA captures the bind values of critical SQL statements in the SQL plan cache by default. This capturing is controlled by the following parameters:

Parameter Details
indexserver.ini -> [sql] -> plan_cache_parameter_enabled true: Activate capturing of bind values (for non-LOB columns) for long running SQL statements (default)
false: Deactivate capturing of bind values
indexserver.ini -> [sql] -> plan_cache_parameter_sum_threshold Minimum threshold for the total execution time of a SQL statement before first set of bind values is captured (in ms, default: 100000)
indexserver.ini -> [sql] -> plan_cache_parameter_threshold After having captured the first set of bind values for a certain SQL statement, it will capture further sets of bind values if the single execution time exceeds the parameter value (in ms, default: 100) and is higher than the single execution time of the previously captured bind values.
indexserver.ini -> [sql] -> plan_cache_parameter_for_lob_enabled true: Activate capturing of bind values for LOB columns for long running SQL statements; can result in a significant data volume
false: Deactivate capturing of bind values for LOB columns (default)

The captured values are stored in view M_SQL_PLAN_CACHE_PARAMETERS and can be evaluated via SQL: “HANA_SQL_StatementHash_BindValues” (SAP Note 1969700).
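For a quick direct check you can also query the monitoring view itself (a minimal sketch; the exact column set depends on the revision):

SELECT * FROM M_SQL_PLAN_CACHE_PARAMETERS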

18. How can the performance of INSERTs and data loads be tuned?

INSERTs and data loads write new data, while other operations like SELECT, UPDATE or DELETE have to work on existing data. Therefore typical performance factors are partially different. If you want to improve the performance of INSERTs and data loads, you can consider the following areas:

Area Details
Lock waits See SAP Note 1999998 and optimize lock wait situations if required. Typical situations when an INSERT has to wait for a lock are:
  • Critical savepoint phase
  • Concurrent INSERT of same primary key
  • SAP HANA internal locks
  • DDL operation on same table active
Columns During an INSERT every column has to be maintained individually, so the INSERT time significantly depends on the number of table columns.
Indexes Every existing index slows down an INSERT operation. Check if you can reduce the number of indexes during mass INSERTs and data loads. SAP BW provides options to automatically drop and recreate indexes during data loads. The primary index normally must not be dropped.
Bulk load If a high number of records is loaded, you shouldn’t perform an INSERT for every individual record. Instead you should take advantage of bulk loading options (i.e. inserting multiple records with a single INSERT operation) whenever possible.
Parallelism If a high number of records is loaded, you should consider parallelism on client side, so that multiple connections to SAP HANA are used to load the data.
Commits Make sure that a COMMIT is executed on a regular basis when mass INSERTs are done (e.g after each bulk of a bulk load).
Delta merge Usually it is of advantage to disable auto merges when a mass load is performed. See SAP Note 2057046 for more details.
An extremely large delta storage can reduce the load performance and increase the memory footprint, so it is usually good to trigger individual delta merges when really large loads are done, e.g. after every 100 million records.
Avoid repeated merges of small delta storages or with a high amount of uncommitted data in order to avoid unnecessary overhead.
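A mass load following these recommendations can be sketched as follows (table name hypothetical):

ALTER TABLE "TAB1" DISABLE AUTOMERGE
-- bulk INSERTs with regular COMMITs; trigger a manual merge e.g. after every 100 million records:
MERGE DELTA OF "TAB1"
ALTER TABLE "TAB1" ENABLE AUTOMERGE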

Typical INSERT throughputs are:

Constellation Typical throughput
Problem situations like long critical savepoint phases or other locks < 500 records / second
Normal, sequential single-row INSERTs 1,000 – 10,000 records / second
Highly parallelized bulk loads 1,000,000 records / second

 

SAP HANA Security


1. Where do I find information about security topics in SAP HANA environments?

See the SAP HANA Security Guide for more information.
SAP Note 2089797 provides information about delivered SAP HANA content and related security aspects.
The whitepaper SAP HANA Security – An Overview provides general information about security aspects in SAP HANA environments.

2. Which indications exist for SAP HANA security issues?

The following SAP HANA alerts indicate problems in the security area:

Alert Name SAP Note  Description
57 Secure store file system (SSFS) consistency 1977221 Determines if the secure storage file system (SSFS) is consistent regarding the database.
62 Expiration of database user passwords 2082406 Identifies database users whose password is due to expire in line with the configured password policy. If the password expires, the user will be locked. If the user in question is a technical user, this may impact application availability. It is recommended that you disable the password lifetime check of technical users so that their password never expires (ALTER USER <username> DISABLE PASSWORD LIFETIME).
63 Granting of SAP_INTERNAL_HANA_SUPPORT role 2081857 Determines if the internal support role (SAP_INTERNAL_HANA_SUPPORT) is currently granted to any database users.
64 Total memory usage of table-based audit log 2081869 Determines what percentage of the effective allocation limit is being consumed by the database table used for table-based audit logging.
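The recommendation from alert 62 can be sketched as follows; the user name is a placeholder:

```sql
-- List users for which the password lifetime check is currently enabled
SELECT USER_NAME
  FROM SYS.USERS
 WHERE IS_PASSWORD_LIFETIME_CHECK_ENABLED = 'TRUE';

-- Disable the password lifetime check for a technical user
-- (SAPABAP1 is a hypothetical user name)
ALTER USER SAPABAP1 DISABLE PASSWORD LIFETIME;
```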

SQL: “HANA_Configuration_MiniChecks” (SAP Notes 1969700, 1999993) returns a potentially critical issue (C = ‘X’) for one of the following individual checks:

Check ID Details
1310 Secure store (SSFS) status
1330 Number of users with expiration date
1335 Number of SAP users with password expiration
1340 CATALOG READ privilege granted to current user
1342 Users with SAP_INTERNAL_HANA_SUPPORT role
1345 DATA ADMIN privilege granted to users or roles
1350 SQL trace including results configured
1360 Size of audit log table (GB)

3. Which tools exist to analyze security topics?

The following analysis commands are available in SAP Note 1969700:

SQL statement Details
SQL: “HANA_Security_CopyPrivilegesAndRoles_CommandGenerator” Generates GRANT commands to copy privileges and roles from one grantee to another
SQL: “HANA_Security_GrantedRolesAndPrivileges” Displays roles and privileges granted to roles and users (either directly or indirectly via roles)
SQL: “HANA_Security_Roles” Overview of defined SAP HANA roles
SQL: “HANA_Security_Users” Overview of SAP HANA users and schemas

The following monitoring views and dictionary tables provide information about security related topics:

  • EFFECTIVE_APPLICATION_PRIVILEGES
  • EFFECTIVE_PRIVILEGES
  • EFFECTIVE_ROLES
  • EFFECTIVE_STRUCTURED_PRIVILEGES
  • GRANTED_PRIVILEGES
  • GRANTED_ROLES
  • PRIVILEGES
  • ROLES
  • STRUCTURED_PRIVILEGES
  • USERS
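As a sketch, the privileges a user effectively holds (granted directly or indirectly via roles) can be queried like this; the user name is a placeholder:

```sql
-- EFFECTIVE_PRIVILEGES resolves both direct grants and grants via roles
SELECT PRIVILEGE, OBJECT_TYPE, OBJECT_NAME, IS_GRANTABLE
  FROM SYS.EFFECTIVE_PRIVILEGES
 WHERE USER_NAME = 'MYUSER';
```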

4. Which SAP HANA privileges are required for the SAP ABAP database user?

SAP Note 2101316 lists the required SAP HANA privileges for the SAP ABAP database user.
Normally the required privileges are automatically granted.

5. How can I make sure that only administrative users can work on SAP HANA?

SAP Note 1986645 provides a tool set that can be used to prevent business users from connecting to the SAP HANA database. This can be useful for certain maintenance activities.

6. What is the effect of the CATALOG READ privilege?

CATALOG READ controls to what extent a user can access data in SAP HANA dictionary tables (e.g. TABLE_COLUMNS or INDEXES). If CATALOG READ is granted, all information is visible. If CATALOG READ isn’t granted, only information about the user’s own objects is shown, and these dictionary queries can perform worse due to the required security checks.
Unlike on other databases, a missing CATALOG READ privilege doesn’t result in an error; it just restricts the result set of dictionary queries.
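A minimal sketch of granting the privilege; the user name is a placeholder:

```sql
-- Without CATALOG READ, dictionary views like TABLE_COLUMNS show own objects only
GRANT CATALOG READ TO MONITORING_USER;
```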

7. Which security checks are performed by standard SAP services?

SAP Note 863362 describes the security related checks that are executed by SAP services like Early Watch (EW), Early Watch Alert (EWA) or Going Live sessions (GL).

8. Which SAP component addresses SAP HANA security topics?

SAP HANA security topics are addressed by component HAN-DB-SEC. So you can check for SAP Notes or open SAP incidents on this component when you have security related issues.

9. Where do I find a reference for SQL statements related to SAP HANA security?

Security related SQL statements can be found in the SAP HANA SQL reference at “SQL statements” -> “Access control statements”.

10. Which configuration is required for the SAP HANA database user of transaction DBACOCKPIT?

See SAP Note 1640741 for more information. Among others it suggests to define a role called DBA_COCKPIT with the appropriate privileges for DBACOCKPIT operations.

11. How can tracing be activated for security topics like authentication and login?

SAP Note 2083682 describes which database trace options can be activated in order to collect information about authentication and login procedures.

12. Which errors indicate authorization issues?

Among others, the following errors are symptoms for authorization issues:

transaction rolled back by an internal error: insufficient privilege: Not authorized
search table error: [2950] user is not authorized

13. How can authorization errors be analyzed in SAP HANA environments?

SAP Note 1809199 describes how to perform a root cause analysis in case of authorization issues.

14. What kind of privileges are required for SAP consultants when processing SAP incidents or delivering SAP services?

SAP Note 1747042 provides recommendations about the roles and privileges required for SAP support consultants.

15. Are there best practices to define standard roles for SAP HANA?

The document How to Define Standard Roles for SAP HANA Systems provides best practices for defining standard roles for SAP HANA.

16. How can single sign-on based on Kerberos be implemented?

SAP Note 1837331 provides a how-to guide for Kerberos, SPNEGO and Active Directory. SAP Note 1813724 provides tools for configuring and checking Kerberos in SAP HANA environments.

17. What is the performance impact of enabling data volume encryption?

Data volume encryption incurs an overhead only when data is decrypted while being read from disk and encrypted while being written to disk. Data in memory is always kept unencrypted, so there is no performance penalty for access to in-memory data.
Scenarios that involve access to data volumes and therefore have a performance impact are:

Area SAP Note
Column loads 2127458
Savepoints and database snapshots 2100009
Data backups 1642148
Hybrid LOBs 1994962

These scenarios are dominated by I/O and the encryption related CPU overhead is minor. Usually the overall performance impact isn’t higher than a medium single-digit percentage.

18. What can I do if I have forgotten the password of the SYSTEM user?

SAP Note 1925267 and the section “Reset the SYSTEM User’s Password” in the SAP HANA Administration Guide describe the necessary steps to define a new password for the SYSTEM user.

19. How can I determine the authentication types used by connections to SAP HANA?

The authentication method used by a connection can be determined via column AUTHENTICATION_METHOD of monitoring view M_CONNECTIONS.
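For example, the distribution of authentication methods across the current connections can be checked like this:

```sql
-- Count current connections per authentication method
SELECT AUTHENTICATION_METHOD, COUNT(*) AS NUM_CONNECTIONS
  FROM M_CONNECTIONS
 GROUP BY AUTHENTICATION_METHOD;
```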

20. How can I check for security related SAP Notes for SAP HANA?

At http://service.sap.com/securitynotes you can find SAP Notes relevant for security. In order to find SAP HANA related security SAP Notes, you can filter by ‘HAN*’.

21. How can a user be copied including roles and privileges?

It is not easily possible to copy a user with the related catalog roles. The procedure to copy a user including repository roles is described in the section “Copy a User Based on SAP HANA Repository Roles” of the SAP HANA Administration Guide.

22. How can the SAP HANA internal network be configured in a secure manner?

SAP Note 2183363 provides recommendations for a secure configuration of the SAP HANA internal network.

23. Is there something specific to consider related to GRANT and REVOKE of privileges and roles in SAP HANA environments?

In SAP HANA, privileges and roles can be granted to a user (grantee) by different users (grantors). Each grant, if successful, is persisted in the database catalog and uniquely identified by grantor, grantee, and the role or the privilege. This leads to following behavior during the revoke of the role or privilege:

  • When a role or privilege is revoked from a user, this user can still have the same role or privilege if granted by other users.
  • A REVOKE statement executes successfully, even if the executing user (revoker) did not grant any role or privilege to the user, from whom the statement tries to revoke the role or privilege. See SAP Note 2210758 for more details.
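The behaviour described above can be illustrated with a sketch; the user names are placeholders (MONITORING is a standard SAP HANA role):

```sql
-- Executed by grantor USER_A:
GRANT MONITORING TO WORKER_USER;
-- Executed by grantor USER_B:
GRANT MONITORING TO WORKER_USER;
-- Executed by USER_A:
REVOKE MONITORING FROM WORKER_USER;
-- WORKER_USER still holds MONITORING, because USER_B's grant persists.
```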

24. What is the purpose of the RESOURCE ADMIN privilege?

RESOURCE ADMIN is required for administration tasks like resetting SAP HANA monitoring views (ALTER SYSTEM RESET MONITORING VIEW) or creating a runtime dump (SAP Note 1813020). Originally this privilege wasn’t granted to the DBA_COCKPIT role but as of 2016 it will be included. If required you can manually grant this privilege to the DBA_COCKPIT role:

GRANT RESOURCE ADMIN TO DBA_COCKPIT

2112732 – Pool/RowEngine/MonitorView allocates large amount of memory


Symptom

A large amount of memory is allocated by Pool/RowEngine/MonitorView and is never released until a system restart when the M_EXPENSIVE_STATEMENTS monitoring view (populated via the use_in_memory_tracing feature) is queried.

Reason and Prerequisites

Since Revision 70, a new feature of the M_EXPENSIVE_STATEMENTS monitoring view is available. It is a performance enhancement that keeps the information about expensive statements in memory, which reduces the overhead of tracing and querying expensive statements. The feature can be turned on/off with the ‘use_in_memory_tracing’ parameter in section [expensive_statement] of global.ini. Its default value is ‘true’.
  • global.ini
    [expensive_statement]
    use_in_memory_tracing = true

When the information about expensive statements is stored in memory and the M_EXPENSIVE_STATEMENTS monitoring view is queried with filter conditions (WHERE …) on one of the following columns:

  • CONNECTION_ID
  • TRANSACTION_ID
  • STATEMENT_ID
  • DB_USER
  • APP_USER
  • START_TIME
  • OBJECT_NAME

Then the memory allocated by “Pool/RowEngine/MonitorView” for the M_EXPENSIVE_STATEMENTS monitoring view won’t be released, due to a programming error.
You can check the memory size allocated by Pool/RowEngine/MonitorView with the following SQL:

  • select exclusive_size_in_use from m_heap_memory where category = 'Pool/RowEngine/MonitorView'

Solution

This programming error is fixed as of Revision 85.01 and higher.
In Revisions 70 to 85, you can avoid this memory leak by turning off use_in_memory_tracing:
  • alter system alter configuration ('global.ini', 'system') set ('expensive_statement', 'use_in_memory_tracing') = 'false';
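To verify the effective setting, the parameter can be read back from the configuration:

```sql
-- Check the current value of use_in_memory_tracing across configuration layers
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
  FROM M_INIFILE_CONTENTS
 WHERE SECTION = 'expensive_statement'
   AND KEY = 'use_in_memory_tracing';
```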

2222185 – HANA Troubleshooting Tree – System Hang


 

Sap Reference Note: 2222185

 

2193110 – SAP HANA SPS 10 Database Revision 102


Sap Note : 2193110

Please refer to the SAP HANA Revision and Maintenance Strategy document on SAP Service Marketplace for an overview regarding the SAP HANA revision and maintenance strategy.
For details about the SAP HANA revision and maintenance strategy and recommended upgrade paths between SAP HANA Revisions, Support Package Stacks, Maintenance Revisions and Datacenter Service Points, see “SAP Note 2021789: SAP HANA Revision and Maintenance Strategy”.

2233148 – SAP HANA SPS 11 Database Revision 110

2265765 – SAP HANA SPS 11 Database Revision 111

2257500 – SAP HANA SPS 10 Database Maintenance Revision 102.04

2242726 – SAP HANA SPS 10 Database Maintenance Revision 102.03

2220948 – SAP HANA SPS 10 Database Maintenance Revision 102.01

2220929 – SAP HANA SPS 09 Database Maintenance

Please refer also to the following SAP Notes:

Please note the following:

  • If you upgrade to this Revision and you’re using SAP BW on HANA, please check the ABAP-related notes which can be found in the attachments of this note.

Installation/Upgrade

  • Before you start the upgrade from Revision 64, 65, or 66, please check out SAP Hot News Note 1918267.
  • As part of the installation or the upgrade to Revision 101 or higher, the master key of the secure store in the file system (SSFS) is changed from the initial default key to an installation-specific individual key. Please check out SAP Note 2183624. To change the master key of the hdbuserstore SSFS, please additionally check out SAP Note 2210637.
  • Please check “SAP Note 1948334 : SAP HANA Database Update Paths for Maintenance Revisions” for the possible update paths from Maintenance Revisions to SPS Revisions.
  • If you upgrade from a Revision prior to SPS 06, first upgrade your system to the latest SPS 06 Revision (Revision 69.07) before upgrading to this Revision.
  • If you encounter a problem during the upgrade phase “Importing Delivery Units”, please check out SAP Note 1795885.
  • If you are using SAP HANA live, please check Release Note 1778607 before you start upgrading to SPS 10.
  • Before you start the upgrade to this Revision, please check out SAP Note 2159899 (Release Notes for SAP HANA Application Lifecycle Management for SAP HANA SPS 10).
  • In order to run the SAP HANA Database Revision 80 (or higher) on SLES 11, additional operating system software packages are required.
    Before you start the upgrade or the installation of Revision 80 or higher, please check out SAP Note 2001528.
  • If the SAP HANA installation or one of its AFL plugins fails, please check out SAP Note 2183500
  • If you encounter the problem: HALM Process Engine: Data cannot be updated alert message, please check out SAP Note 2184376.
  • If you get the issue that the Win8 HANA studio upgrade via update site failed, please check out SAP Note 2182779.
  • For a troubleshooting guide for the SAP HANA platform lifecycle management tool hdblcm, please check out SAP Note 2078425.

Most frequently Used Commands For SAP HANA Administrators –Link

 

General

  • With SAP HANA SPS 09, several inconsistencies in handling the data types double / float have been fixed. This leads to a change in the string representation of float / double data types. Please check out SAP Note 2092868.
  • With SAP HANA SPS 09, several inconsistencies in handling the data types double / float have been fixed. This leads to a change in the string representation of float / double data types and may cause inconsistencies in concat attributes (such as primary keys, unique constraints) that contain float / double data type columns. Please check out SAP Note 2104798.
  • As of Revision 93 the Embedded Statistics Server is the default setting. If you are still running the old statistics server, the upgrade triggers the migration to the Embedded Statistics Server automatically. Please check out SAP Note 2091313: HANA Statistics Server – changed standard setting of the statistics server as of Revision 94.
  • Please note that when the Embedded Statistics Server is used, some program changes are required in the ABAP monitoring tools. See SAP Note 1925684 for details.
  • If the recovery of data without using the backup catalog fails with incorrect syntax near “USING”, please check out SAP Note 2157184.
  • If you are using SAP Web Dispatcher as the HTTP load balancer, additional configuration is required. Please see SAP Note 2146931 for details.
  • If you have problems creating or modifying a user in the SAP HANA web-based development workbench, please check out SAP Note 2183760.
  • If you are using the SAP HANA cockpit delivery unit “HANA_ADMIN Version 1.3.9”, the starting, killing, and stopping of services do not work properly, even if reported as successful. Please check out SAP Note 2182831.
  • Please note that the virtual table function is currently not supported on multitenant databases. Please check out SAP Note 2182597.
  • The version of libnuma1 (library for Non-Uniform Memory Access) delivered with SLES 11 SP1 up to version libnuma1-2.0.3-0.4.3 contains a bug which causes the name server to crash. Please check out SAP Note 2132504.
  • You can find information about SHINE (SAP HANA Interactive Education) demo content in SAP Note 1934114.
  • You can find documentation corrections information for the SAP HANA option Advanced Data Processing in SAP Note 2183605.
  • Introduced a new security feature to configure if the SYSTEM user is locked due to too many failed connect attempts.
    The new parameter “password_lock_for_system_user” in section [password_policy] of file indexserver.ini (or nameserver.ini for the system database of a multiple-container system) has value “true”, which means the SYSTEM user will be locked for <password_lock_time> after <maximum_invalid_connect_attempts> failed attempts. To keep the old, unsafe behaviour, set parameter “password_lock_for_system_user” to “false” (not recommended).
  • Introduced a new feature to configure error messages for invalid connect attempts.
    The new parameter “detailed_error_on_connect” in section [password_policy] of file indexserver.ini (or nameserver.ini for the system database of a multiple-container system) has value “false”, which means only the information “authentication failed” will be returned in such a case. If set to “true” (not recommended), the specific reason for failed logon is returned.
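As a sketch, the recommended default values of the two parameters above can be set explicitly (shown here for indexserver.ini):

```sql
-- Keep the SYSTEM user lockable after too many failed connect attempts (default)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('password_policy', 'password_lock_for_system_user') = 'true'
  WITH RECONFIGURE;

-- Return only a generic "authentication failed" message on invalid logons (default)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('password_policy', 'detailed_error_on_connect') = 'false'
  WITH RECONFIGURE;
```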

 

Other Terms

NewDB, in-memory database, Hybrid database, update, installation, SPS 10

 

Reason and Prerequisites

  • Running the installation and the update is only supported on a validated SAP HANA appliance and SAP HANA Tailored Datacenter Integration setup.

 

Solution

If you upgrade from a lower SAP HANA SPS, it is recommended to update ALL other components (Studio, Modeler, DB Clients, DBSL, SLT, DS, …) to at least the minimal version of SAP HANA SPS 10.

Documentation

Installation and Update

  • For the installation of the SAP HANA client on SUSE Linux Enterprise Server 9 and 10, use the package “Linux SUSE 9 on x86_64 64bit”.
  • Fixed a problem with the Hardware Check Tool (HCCT) that could have caused VMware-virtualized systems to hang or report inconsistent data (hyperthreading was deactivated even if it had been activated).
    For more information, see SAP Note 2161344.

Issues solved with this revision

Backup & Recovery

  • Fixed an issue that caused tenants to be recovered on an arbitrary host instead of on the host they were created on.

BW/OLAP

  • Fixed an issue in Epsilon Reaction in order to adapt the BW disaggregation behaviour.
  • Fixed a bug which could lead to an indexserver crash during out of memory situations.
    The indexserver crash dump contained the following call stack:
    “[CRASH_STACK] Stacktrace of crash: (0000-00-00 00:00:00 000 Local)
    —-> Symbolic stack backtrace <—-
    0: Evaluator::StackVector<Evaluator::ExpressionTree::McpStackEntry, 64ul>::enlarge(unsigned long) + 0x5c
    1: Evaluator::ExpressionTree::mergeCommonParts() + 0xa05
    2: AttributeEngine::Evaluator::compile() + 0x349”.

Database Client

  • Fixed an issue in the SQLDBC client which caused ABAP application servers to connect to the wrong host after a HANA master node restart or indexserver crash, leading to the error: Connect failed (no reachable host left).
  • Fixed an issue on the HANA client when DEFERRED_LOB_WRITING was used, which could lead to a short dump on application side with the error message: “SQL code: -9300” occurred while accessing table <table_name>. Please refer to SAP Note 2205345 for details.
  • Fixed an issue in the HANA ODBC driver: when the ODBC call “SQLGetData” was used with a 0-length buffer to determine the length, the driver did not return a truncation warning because there was no room in the buffer for the NULLCHAR.
  • Fixed an issue in the HANA ADO.net driver where IListSource.ContainsListCollection did not match the MSDN documentation. After the fix the property always returns false, since HanaDataReader does not return a collection of IList objects.
  • Fixed an issue where connections failed to connect when using Kerberos SSO to authenticate users through the SAP HANA MDX driver.

Data Provisioning

  • Fixed the table type of extended storage tables which could return CREATE ROW TABLE even if CREATE COLUMN TABLE <table_name> USING EXTENDED STORAGE was used.

Database Client

  • Corrected a client bug that prevented a receive timeout after the configured CONNECTIONTIMEOUT threshold.

High Availability

  • Fixed an issue where the takeover flag was not removed from the topology (/topology/datacenters/takeover/*) when a takeover was performed on a secondary system where not all tenants could be taken over (e.g. because they were not initialized yet).
  • Fixed a crash that could occur in a secondary site of system replication in combination with Live Cache.
    The crash stack was as follows:
    “1: TransactionManager::TransactionControlBlockSPI::precommit
    2: DataAccess::ReplicationProtocolSecondaryHandler::createTCBforRestartSessions
    3: DataAccess::ReplicationProtocolSecondaryHandler::buildOpenSessionList
    4: DataAccess::ReplicationProtocolSecondaryHandler::preloadTables”.
  • Fixed an issue in a MultiDB scenario where, when a tenant catalog master was not located on the master node and a non-master node failed, takeover to the standby node did not happen and the landscape remained broken until the failed host was working again.

Languages (SQL Script, R, River)

  • Fixed an issue that prevented a table variable within a cursor to be passed directly.
  • Fixed a regression bug in the SQLScript parser that could occur when an array index position was referenced with a variable value in the WHERE clause of a SQL statement, like:
    “lArray bigint array;
    v_index int;
    lArray[1]:=1;
    v_index :=1;
    select * from dummy where :lArray [:v_index]=1;”
    This caused the following error: “incorrect syntax near “)” “.

MDX

  • Fixed an issue in SAP HANA MDX where the members could have unexpected results when they were not requested according to their hierarchy order.
  • Fixed an issue in HANA MDX when filters on parent-child hierarchy for non-leaf members were used.
  • Optimized the performance and memory consumption of HANA MDX for property lookups that could involve expensive join operations.
  • Fixed a bug that caused an MDX query on an analytical view with hierarchies to fail with the following error:
    “SQL: feature not supported: not allowed over OLAP VIEW : search without aggregation, grouping by expression other than view column or user defined function with string type”.

Security

  • Fixed a role performance issue for GRANT.

Supportability

  • Fixed a bug where SAP HANA studio could not correctly display the thread overview and session overview monitor.
    This was caused by invalid characters in the monitoring view M_CONNECTIONS.
    Selecting this view resulted in the error message “invalid character encoding”.

Text search

  • Fixed an issue where preprocessor threads would hang and never complete with the following call stack:
    “1: SAP::TextAnalysis::DocumentAnalysis::StructureAnalysis::NormalizationBuffer::addChar
    2: SAP::TextAnalysis::DocumentAnalysis::StructureAnalysis::XMLTextParser::insertBlankLine
    3: SAP::TextAnalysis::DocumentAnalysis::StructureAnalysis::XMLTextParser::startElement”.

XS Engine

  • Fixed an issue where the XS engine caused continuous memory growth in Pool/SQLScript/Execution allocator.
  • Fixed an issue, where it was not possible to add an Identity Provider via Internet Explorer or Firefox.
    Additional information is provided in SAP Note 2169923.
  • Fixed an issue with the password recovery functionality using a profile to see the security questions.

General

  • Fixed an issue that could cause an indexserver crash with null string handling, showing the following start to the callstack:
    “1: expr::(anonymous namespace)::LCodeArgumentWrapper::~LCodeArgumentWrapper()
    2: expr::Evaluator::Dispatch<46>()”.
  • Fixed an indexserver crash. The error stack was: ptime::NBaseString:_clone.
  • Fixed an issue that could cause a corruption leading to a crash with the following crashstack during table access:
    “exception 1: no.1000000
    Orphaned timestamp TS[tcbidx=2147483647,ssn=any]; $condition$=tcb
    1: TransactionManager::TransactionControlBlockFactory::getTimestampState
    2: TransactionManager::ConsistentView::isTransactionCommitted
    3: TransactionManager::ConsistentView::isTimestampRangeVisible
    4: DataContainer::FileIDMapping::isVisible”.
  • Fixed a bug where the Table Consistency Check could consume too much memory and result in a OOM situation and a performance problem.
  • Improved performance for inserts into global temporary column tables.
  • Fixed an indexserver crash with the following call stack:
    “joinEvaluator::SearchAlgorithm::expand()”.
  • Fixed an issue in NUMA awareness. The issue was visible with the following trace message, but did not harm the system:
    “libnuma: Error: mbind: Invalid argument”.
  • Fixed an issue with delta merge handling which could have resulted in a high amount of no longer needed UDIV/MVCC data in the main memory of column store attributes not being released, even if the number of rows in the main storage was not high.
  • Fixed a crash by terminating the corresponding SQL statement instead of crashing the indexserver. The corresponding call stack was the following:
    “1:MemoryManager::BigBlockAllocator::addFreeBlock()”.
  • Reduced compilation time for SQL statements with large/complex expression lists.
    Most of the time was spent in the check for duplicate aggregation using trex expression.
  • Fixed insufficient performance for SELECT queries on SYS.GRANTED_PRIVILEGES.
  • Fixed an out of memory situation that could occur when a primary key or a secondary index was created on existing, large column store tables. A typical scenario where this could happen is data migration.
  • Fixed an issue where an UPDATE statement would result in the following runtime error:
    “SAP DBTech JDBC: [3]: fatal error: ColDicVal (1000001,4) not found”.
  • Fixed a crash that could occur when performing an UPDATE FROM. The crashdump contained:
    “1: ptime::Qc2QoConverter::make_rid_field()
    2: ptime::Qc2QoConverter::convert_relation_project_record()”.
  • Fixed a bug which could lead to obsolete memory and CPU resource consumption caused by a query on the calculation model because a stacked qoPop was generated.
  • Fixed the inconsistent status of the result view of procedures after an upgrade.
  • Fixed an issue where snapshots failed and inconsistencies were reported when the snapshot directory was cleared incorrectly after a snapshot was opened for the secondary.
  • Reduced CPU usage on function calls like “JoinEvaluator jeReadIndex” or “SQL optimizer estimations (prefetchSampleData)”. Kernel Profiler traces revealed that a high amount of CPU time was spent on “generateOLAPBitmapMVCC”.
  • Fixed an issue in the XS Job Dashboard that did not allow to add parameter values including a “:” to a schedule.
  • Fixed an issue in the XS Job Dashboard that prevented schedules for jobs referencing .hdbprocedure objects from being shown.
  • Fixed a bug that allowed invalid date values to be inserted into HANA tables. These values could lead to the error
    “AttributeStoreFile error ‘TABLEattribute_204.bin’: attribute value is not a date or wrong syntax; failed with rc 6931”
    when loading the table subsequently.
  • Fixed a crash that could occur when a query was run against a calcview.
    The crash stack was as follows:
    “—-> Symbolic stack backtrace <—-
    0: OlapEngine::BwDocids::prepareMdDocIdsForMain()
    1: OlapEngine::BwDocids::getDocids()
    2: OlapEngine::BwPopJoin12::executePop”.
  • Fixed a crash with the following call stack:
    “[CRASH_STACK] stacktrace of crash: (0000-00-00 11:11:11 111 Local)
    —-> Pending exceptions (possible root cause) <—-
    exception 1: no.1000000 (Basis/Container/SafePointer.hpp:91)
    Assertion failed: (oldValue & RESET_BIT) == 0
    exception throw location:
    1: 0x00007fbb97499c38 in Container::SafePointerHolder<ptime::Statement>::reset()
    2: 0x00007fbb974fb53b in ptime::EntryStatementInfo::set
    3: 0x00007fbb974fb592 in ptime::Connection::setEntryStatement_
    4: 0x00007fbb97521bfa in ptime::EntryStatementScope::reset()
    5: 0x00007fbb97537dc1 in ptime::Statement::compile_”.
  • Fixed an issue that caused an error during query execution when analytic views with join engine deployment were involved.
    The error was as follows:
    “column store error: search table error: [34051] Error during plan execution;Cannot find column DISCOUNT_AMOUNT$sum$ in input itab”.
  • Fixed a performance issue caused by slow filtering on DDIC_NUMC fields due to a non-optimized search method being used.
  • Fixed a bug which caused an indexserver crash when SQLScript procedures with DDL statements were called in parallel.
    The following assertion could be found in the crashdump:
    “Pure virtual function called, block=MemoryBlock “block”: Deallocated block …
    1:  __cxa_pure_virtual
    2: TRexCommonObjects::InternalTableBase::size() const
    3: ptime::Query::Plan::clear_temp_indexes()”.
  • Fixed an indexserver crash that was triggered by a query on top of a calculation view. The crash stack contained the following entries:
    “ptime::qo_Exp::clone_subtree
    ptime::qo_Normalizer::getExpForPlaceholders
    TrexCalculationEngine::QueryEntryToQoTranslator::getExpFromQueryEntryConstVal
    TrexCalculationEngine::QueryEntryToQoTranslator::translateTerm
    TrexCalculationEngine::QueryEntryToQoTranslator::translateQueryEntry
    TrexCalculationEngine::QueryEntryToQoTranslator::translate
    TrexCalculationEngine::ceQOBuilder::buildPostFilter
    TrexCalculationEngine::ceQOBuilder::buildNode
    TrexCalculationEngine::ceQOBuilder::build
    TrexCalculationEngine::ceQOExecutor::execute
    TrexCalculationEngine::ceQoPop::executePopInternal
    TrexCalculationEngine::ceQoPop::executePop”.
  • Fixed the crash caused by the query that used GROUPING SETS over UNION ALL on OLAP views.
  • Fixed an out of memory issue when a prepared “like” filter was used.
  • Fixed a bug that prevented query execution with an error such as:
    “qo_col_dic.cc(00550) : Error in the generated plan. DicVal(1000020,3) not found”.
  • Fixed a bug in the modeler that caused the properties to be generated only for the first hierarchy defined.
  • Fixed a crash with the following call stack:
    “1: 0x00007fddaa81e6e9 in AttributeEngine::BTreeAttribute<TrexTypes::IntAttributeValue>::bwGetValueIdsFromValues()”.
  • Fixed an issue in the area of data aging which could lead to incorrect results returned from a SELECT query with WHERE conditions on key attributes.
    Please refer to SAP Note 2206359 for details.
  • Fixed a bug in the OLAP engine that could have resulted in the column store error:
    “search table error: [2620] executor: plan operation failed;GroupByColumnSetter skip,BwPopJoin1Inwards”.
  • Improved tracing to analyze the error 2434:  “join index inconsistent; Invalid schema: Invalid table for index”.
  • Introduced a proper error message when the default schema specified in the JDBC connection string does not exist.
  • Fixed an indexserver crash with the following call stack:
    “[CRASH_STACK]  Stacktrace of crash: ()
    —-> Symbolic stack backtrace <—-
    0: qo3::OptimizerImpl::initializeInListNode()
    1: bool qo3::OptimizerImpl::initialTree<TRexCommonObjects::QueryEntry>()
    2: qo3::OptimizerImpl::prepare()
    3: qo3::Optimizer::prepare()”.
  • Tables containing columns with string-like types (VARCHAR, NVARCHAR, VARBINARY), where all the data in these columns has the same length, could take long to merge due to suboptimal coding. The coding has been improved and merges are now fast.
  • Removed incorrect/ignorable trace entries related to metadata handling. These could have resulted in index server trace file entries during startup like:
    “catalog.cc(04301) : [OID] wrong oid is set from objectinfo – oid :72057594044083712”.
  • Fixed the “fatal error: ColDicVal (0,0) not found” error, which was caused by wrong column information being generated for column aggregation during query plan generation.
  • Improved the startup time of the index server service when many in-memory sequences needed to be initialized with new startup values.
    Runtime dumps showed that a lot of time was spent in calls like:
    “ptime::SequenceManager::getResetValue()
    ptime::SequenceManager::restart()
    ptime::SequenceManager::resetSequences()”.
  • Fixed a bug which could have resulted in JobWorker threads hanging endlessly while executing the thread method “HashDict worker” and waiting on a conditional variable. The parent SQL Executor thread / connection could not be canceled. The call stack of the hanging JobWorker threads looked like:
    “1: syscall+0x15 (libc.so.6)
    2: Synchronization::CondVariable::wait()
    3: TRexUtils::Parallel::__indexHash<IndexHash<TrexTypes::StringAttributeValue>…
    4: TRexUtils::Parallel::__indexHash<IndexHash<TrexTypes::StringAttributeValue>…
    5: TRexUtils::Parallel::JobBase::run()
    6: Execution::JobObjectImpl::run()
    This hang only happened when the corresponding SQL statement had previously run into an OOM, caused either by the global allocation limit (GAL) or by a user-defined statement memory limit”.
  • Fixed a crash in Execution::JobWorker::enterWait. The fix avoids false detection of sysWaiting worker threads when there is lock contention between setting this flag and actually calling the run method, for example during transaction handling.
  • TrexViaDbsl now rethrows caught exceptions. Previously, an OLAP query could lead to a metadata inconsistency: if TrexViaDbsl does not ignore the caught exception, the commit is not issued and the metadata is not corrupted. TrexViaDbsl continues to catch exceptions in order to write error messages to the trace file, but now throws them again.
  • Fixed a sporadic crash that occurred when Smart Data Access attempted to create an attribute definition for a rowid column. The crash stack contains “ptime::FederationQuery::convertCIStoITab”.

 

References

This document refers to:

SAP Notes

2210637 Change the encryption key of hdbuserstore
2184635 SAP HANA SPS 10 – Download procedure for the SAP HANA database component
2184536 Selected wizard could not be started error while Paste special and move function
2184376 HALM Process Engine: Data cannot be updated alert message
2183648 Recovery Leads to Unexpected Distribution of Tenants in a Multitenant Database Container Scale-Out Landscape
2183624 Potential information leakage using default SSFS master key in HANA
2183605 SAP HANA Option Advanced Data Processing: Documentation Corrections
2183500 SAP HANA Installation Fails with: Resource temporarily unavailable
2183363 Configuration of SAP HANA internal network
2183246 Column Table or Row Table can be corrupted after the database restart
2182831 HANA Cockpit known issues in SP10
2182779 On Win8 Hana Studio upgrade failed via update site
2182597 HANA Virtual table function does not work on multitenant databases
2159899 Release Notes for SAP HANA Application Lifecycle Management for SAP HANA SPS 10
2157184 recover data without catalog fails with: incorrect syntax near “USING”
2135596 Column Store table corrupted and cannot be accessed due to orphaned timestamp
2135443 SAP HANA Database: Corrupt redo log prevents Indexserver startup
2135097 SAP HANA Database: Inconsistent MVCC information prevents Indexserver startup
2132504 HANA nameserver crashes caused by memory corruption (SLES 11 SP1 only)
2130083 Successive DML operations on Global Temporary Column Table may cause wrong results or crashes
2119442 Revision 100 Enhancements for Planning Functions
2106836 Potential Data Loss During Table Reload
2104798 concat attributes with float / double data types to be recreated in SPS9
2101737 Recovery of a Multitenant Database Container fails
2099820 MOPZ missing product instances::MOPZ does not find HANA Addon Products
2099661 While configuring LMDB, I do not get an option to select SDA (SAP HANA SPS06 update to SAP HANA SPS09)
2099489 Adding additional server like scriptserver for AFL usage and the usage of SYS_DATABASES views can crash the additional server
2099478 Backup fails with error message: crypto provider ‘commoncrypto’ not available
2078425 Troubleshooting note for SAP HANA platform lifecycle management tool hdblcm
2072211 Upgrade of the Embedded Statistics Service (ESS) from Revision 74.02 to Revision 81 or 82 fails with “unknown catalog object”
2068807 SAP HANA Dynamic Tiering SPS 09 Release Note
2066903 SAP HANA DB: STRING_AGG() with subselect does not return expected result
2066313 SAP HANA DB: Possible columnstore table corruption after point-in-time recovery
2056079 Planning : new check ‘ALL_IN_FILTER’
2054883 Enabling Data Volume Encryption in a Running System
2053116 UDF in view definition causes HANA indexserver crash
2052914 ODBC driver issue after upgrade to Revision >= 82
2045050 SAP HANA DB: Enterprise search query may return too few results
2039810 Table does not get merged automatically
2039085 upgrade fails with Retcode 1: SQL-error -2048-column store error PARCONV_UPG
2037509 SAP HANA DB: Hanging Executor::X2::calculate threads after Query cancellation or OOM
2035443 SAP HANA DB: Disconnect of connections after reaching the idle_connection_timeout in distributed landscapes
2032600 Upgrade of SAP HANA studio from Revision 80 to a later revision
2025702 SAP HANA Live Authorization Assistant will not work in HANA Rev 80
2023669 SAP HANA SPS 08 offline and online help are out of synch
2023163 Downloading multispanning TAR archives
2022779 Technical Change of SAP HANA Studio Version String
2022747 HANA Studio does not start after update from SPS5 to SPS8
2021789 SAP HANA Revision and Maintenance Strategy
2014334 Migration from SAP HANA AFL (SPS 07 or earlier) to Product-Specific AFLs (SPS 08)
2001528 Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11
1998966 Release Note SAP HANA Application Lifecycle Management SPS 08
1963779 Reaching the 768 GB limit of rowstore can cause data loss
1962287 Planning Functions: set default connection name
1957136 Revision 71 Enhancements for Planning Functions
1952701 DBSL supports new SAP HANA SP9 version number
1948334 SAP HANA Database Update Paths for Maintenance Revisions
1934114 SAP HANA DEMO MODEL – SHINE Release & Information Note
1925684 ABAP adjustments for the new Embedded Statistic Server
1918267 SAP HANA DB: corrupt deltalog of column store table
1898497 Versioning and delivery strategy of application function libraries
1795885 HANA Upgrade: problems with “Importing delivery units”
1778607 SAP HANA Live for SAP Business Suite
1710832 HANA BW: I_RESULT_INDEX_NAME with TREX_EXT_AGGREGATE
1666976 uniqueChecker usage description
1523337 SAP HANA Database 1.00 – Central Note
1514967 SAP HANA: Central Note

1914560: Time difference between user <sid>adm and root in SAP HANA


Note : 1914560

Symptom

You find that the times shown for user “root” and user “<SID>adm” on the HANA server are different.

Environment

HANA database 1.0 all releases

Cause

You have set the TZ in the environment variables for the user <sid>adm. To check this you can use the following command:

env | grep TZ
==> TZ = CST

 

Resolution

Go to /usr/sap/<SID>/home, open .sapenv.sh in an editor (for example “vi .sapenv.sh”), find the statement “export TZ=***”, and comment it out.
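The same change can be made non-interactively. The sketch below is demonstrated on a scratch copy of the file; on a real system you would point PROFILE at /usr/sap/<SID>/home/.sapenv.sh and keep a backup before editing.

```shell
# Sketch: comment out "export TZ=..." in .sapenv.sh non-interactively.
# Demonstrated on a scratch file; on a real system set
#   PROFILE=/usr/sap/<SID>/home/.sapenv.sh
# and back it up first (cp "$PROFILE" "$PROFILE.bak").
PROFILE=$(mktemp)
printf 'export TZ=CST\nexport PATH=$PATH\n' > "$PROFILE"   # sample content
sed -i 's/^export TZ=/# export TZ=/' "$PROFILE"            # prefix the line with '#'
grep TZ "$PROFILE"                                         # the export is now commented out
```

After logging out and back in as <sid>adm, `env | grep TZ` should no longer report the variable.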

 

 

 

1779221 -Recovery of SAP HANA database fails


Note : 1779221

Symptom

A recovery of SAP HANA database fails. Entries of the following type appear in the file backup.log:
… ERROR   RECOVERY RECOVER DATA finished with error: [449] recovery strategy could not be determined, volume 1, reached log position 0, [111004] Ambiguous sequence of log backups, overlapping log backups: Log backup log_backup_1_0_381184_384512 found starting with redo log position 381184, but expected it to start with redo log position 381248

Other terms

 

newdb, hdb, SAP HANA database,
HANA,
backup, recovery, overlapping log backups

Reason and Prerequisites

 

The failed recovery should restore the database to the most recent state or to a specific point in time.

Furthermore, in the history of the database, at least one recovery had already been performed using a selected data backup without log backups (recover … to a specific data backup). After this recovery, the database started a new log sequence whose log positions overlapped with the existing log.

As of revision 46, this problem should be solved by selecting the required log backup sequence in the SAP HANA Studio. Otherwise, you must proceed as described below.

The log backups of Revision 25 and older do not have properties that allow the selection of the required log backup sequence in the SAP HANA Studio. You must proceed as described below in this case also.

Solution

 

The selection of the required log backup sequence must be made manually. For this, unwanted log backup files must be moved in the file system, the backup catalog must be regenerated, and the recovery must be repeated.

1. Moving the unwanted log backups

 

You can use only a log sequence that was generated either entirely before the first recovery described above or entirely after that recovery. Every log backup file that does not belong to the chosen sequence must be moved.

Refer to the file backup.log for the point in time of the recovery. There the recovery command “RECOVER DATA USING FILE (‘<backupprefix>’) CLEAR LOG” is logged with a time stamp.

The point in time of the recovery helps you to locate the interruption of the log backup sequence.

If several recoveries interrupted the log, there are correspondingly more incompatible sequences: for two recoveries, there is one sequence from before the first recovery, one between the first and the second recovery, and one after the second recovery.

Alternatively, the switch of log sequences can be determined in the file system. The log backup files contain the log items in the file name and the sequence of the log items was interrupted by the recovery. The log backup files are named according to the model log_backup_<volume>_0_<pos1>_<pos2>.  The files of a volume produce a sequence in which <pos2> of a file occurs as <pos1> of a successor file. If the log sequence was interrupted, this sequence is also interrupted.

The log backups are in a directory that is configured by the parameter basepath_logbackup.

Example (only log backups of the name server):
> ls -ltr log_backup_1*
-rw------- 1 r25adm sapsys   20480 2012-10-23 15:35 log_backup_1_0_381056_381248
-rw------- 1 r25adm sapsys  221184 2012-10-23 15:36 log_backup_1_0_381184_384512
-rw------- 1 r25adm sapsys   12288 2012-10-23 15:36 log_backup_1_0_384512_384576
-rw------- 1 r25adm sapsys   45056 2012-10-23 15:41 log_backup_1_0_384576_385152
-rw------- 1 r25adm sapsys 1744896 2012-10-23 16:00 log_backup_1_0_385152_412288
-rw------- 1 r25adm sapsys  376832 2012-10-23 16:17 log_backup_1_0_412288_418048

In the example, the log was interrupted at 15:36. The file log_backup_1_0_381056_381248 does not have a successor. The example shows only the log backups of the name server or of volume 1. The log backups of all services or volumes must be taken into account. The example shows only one interruption of the log.
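The continuity check on the file names can also be scripted. The helper below is a plain shell sketch (not part of any SAP tool): it flags every file whose start position does not match the previous file's end position, which catches both gaps and overlaps. Pass it the log backup names of one volume.

```shell
# Sketch: flag breaks in the log backup sequence of one volume.
# File names follow log_backup_<volume>_0_<pos1>_<pos2>; a file whose start
# position (5th "_"-separated field) differs from the previous file's end
# position (6th field) marks an interruption.
# Usage: check_log_sequence log_backup_1_0_*
check_log_sequence() {
  prev_end=""
  for f in $(printf '%s\n' "$@" | sort -t_ -k5,5n); do   # order by start position
    start=$(echo "$f" | cut -d_ -f5)
    end=$(echo "$f" | cut -d_ -f6)
    if [ -n "$prev_end" ] && [ "$start" != "$prev_end" ]; then
      echo "sequence break before $f (expected start $prev_end)"
    fi
    prev_end=$end
  done
}
```

Applied to the example listing above, it reports the break before log_backup_1_0_381184_384512. Run it once per volume, since each volume forms its own sequence.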

Decide which log backup sequence you want to restore and move the unwanted files to another directory. This directory must not be a subdirectory to a directory that contains the required log backup files.

2. Generating the backup catalog

 

As of Version SP 5, SAP HANA database uses an existing backup catalog for the recovery. If you use Version SP 5 of SAP HANA database or higher, you must regenerate the backup catalog on the basis of the required backups.

Skip this step if you use SP 4 or lower.

Use the program hdbbackupdiag to regenerate the backup catalog. Use the following options:

 

  • -d : Value of the parameter basepath_logbackup as the directory to which the backup catalog is to be saved
  • -c : log_backup_0_0_0_0.00 as the name for the backup catalog
  • --dataDir : Directory in which the required data backup is located
  • --logDirs : List of directories (separated by commas or spaces) in which the required log backups are located

Example:
hdbbackupdiag -d $DIR_INSTANCE/backup/log -c log_backup_0_0_0_0.00 --generate --dataDir $DIR_INSTANCE/backup/data --logDirs $DIR_INSTANCE/backup/log

3. Repeating the recovery

Now repeat the recovery. If you chose the newest log backup sequence in step 1, selecting the option “initialize log area” is optional. Otherwise, the option must be selected, because the log area of the database is not compatible with the selected log backup sequence.

 

[Solved] Topology Mismatch Leads To Recovery Failure in HANA scale-out landscape


Symptom

On a scale-out HANA system, recovery fails, and you see the following errors in backup.log:

RECOVERY RECOVER DATA finished with error: [110092] Recovery failed in
nameserver startup: convert topology failed, mismatch: 1
statisticsservers configured in source topology, 0 standby in source
topology, 2 in destination system, mismatch: 1 xsengines configured in
source topology, 0 standby in source topology, 2 in destination system

Meanwhile, if you check the daemon.ini files of the different nodes on the HANA server, you will see that two nodes have the same content in their daemon.ini, for example:

cat hananode01/daemon.ini
[sapwebdisp]
instances = 1
[statisticsserver]
instances = 1
[xsengine]
instances = 1
cat hananode02/daemon.ini
[sapwebdisp]
instances = 1
[statisticsserver]
instances = 1
[xsengine]
instances = 1

 

Environment

HANA database 1.0 all releases

 

Reproducing the Issue

On a HANA scale-out landscape, with the master node and all slave nodes stopped, you tried to restart the HDB server on the master node.

 

Cause

The recovery always starts on the master host, which is the first node in the configuration (nameserver.ini/[landscape]/master).

However, on your system the last active master node was not the first node (hananode01) but the second node (hananode02). This means a failover had taken place earlier and had moved the three services sapwebdisp, statisticsserver, and xsengine from hananode01 to hananode02 by changing the daemon.ini files, leaving duplicated daemon.ini content on the two nodes.
During a recovery with master node hananode01, the HANA database writes the entries for the three services into its daemon.ini, but the services are not removed from hananode02. This results in the “topology mismatch” error.

 

Resolution

Remove the daemon.ini file on all SAP HANA nodes other than the master node; then you can recover the HANA database from backup.

Additionally, please make sure that no daemon.ini file exists in /usr/sap/<SID>/SYS/global/hdb/custom/config. Otherwise, it will be applied to all SAP HANA nodes.

 

See Also

SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery

 

2018947 – Crash during hana database start up


Symptom

The indexserver or other database services crash during database startup with the following call stack:
0: ptime::Transaction::Transaction
1: ptime::TraceUtil::getOidsFromObjects
2: ptime::ExpensiveStatementTraceFilter::load
3: ptime::ExpensiveStatementTracerImpl::loadConfig
4: ptime::Config::startup
5: TRexAPI::TREXIndexServer::startup
6: nlsui_main

Other Terms

SAP HANA, Crash, start up, expensive statement, trace

Reason and Prerequisites

The expensive statement trace is turned on and there is a filter on a specific object defined. During start up the tracer tries to use a transaction but the object has not been initialized yet.

Solution

Solution: the crash is fixed with SAP HANA Revision 81.
Workaround: to be able to start the database, the filter on the objects needs to be removed. To do so, remove the related entry in global.ini. The complete section about the expensive statement trace in global.ini looks similar to this:
[expensive_statement]
threshold_duration = 1
user = system
object = sys.m_connections
To solve the situation, the entry “object = …” needs to be removed.
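Since the service will not start, the entry has to be removed from global.ini directly in the file system. The sketch below is demonstrated on a scratch copy; on a real system the file usually lives under /usr/sap/<SID>/SYS/global/hdb/custom/config, and you should keep a backup before editing. It deletes only the object line within the [expensive_statement] section, leaving the other trace settings intact.

```shell
# Sketch: delete the "object = ..." line inside [expensive_statement] only.
# Demonstrated on a scratch file; on a real system point INI at the actual
# global.ini and back it up first.
INI=$(mktemp)
cat > "$INI" <<'EOF'
[expensive_statement]
threshold_duration = 1
user = system
object = sys.m_connections
EOF
# Within the [expensive_statement] section (up to the next section header),
# drop any line starting with "object".
sed -i '/^\[expensive_statement\]/,/^\[/{/^object[[:space:]]*=/d;}' "$INI"
cat "$INI"
```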

2194396 – After Hana upgrade or SSFS key change wrong SSFS key on System Replication secondary site


Symptom

After upgrading the HSR secondary site to SAP HANA Revision 101 or Revision 97.01, or after an SSFS key change on the primary site, the system may not start, and the following message is written to the trace file:
RootKeyStore.cpp(00272) : RSecSSFs: SSFS-3600: Record “HDB_SERVER/PERSISTENCE/ROOTKEY” was inserted with an encryption key that was different from the current one; when you still know the old one, you can try the “migrate” operation of the “rsecssfx” utility [/sapmnt/ld7272/a/HDB/jenkins_prod/workspace/FA_CO_LIN64GCC47_rel_fa~newdb100_maint_rel/sys/src/spine/src/krn/rsec/rsecssfs.c 3757]
[66679]{-1}[-1/-1] 2015-07-15 11:04:22.160394 w Crypto           RootKeyStore.cpp(00412) : Error during reading of SSFS: exception  1: no.301103  (Crypto/SecureStore/RootKeyStore.cpp:676)
RootKeyStoreReader::read(): SSFS-3600: Record “HDB_SERVER/PERSISTENCE/ROOTKEY” was inserted with an encryption key that was different from the current one; when you still know the old one, you can try the “migrate” operation of the “rsecssfx” utility

 

Reason and Prerequisites

Due to a programming error, only the SSFS data file, but not the corresponding key file, gets updated during restart and topology reload on the secondary site.
If this happens, the SSFS cannot be correctly decrypted, and the message above is written to the trace file.
The problem shows up sporadically:
1.) after an upgrade to SAP HANA Revision 101 or Revision 97.01, or
2.) after changing the SSFS key on the primary site.

Solution

As a short-term solution, compare the SSFS key and data files of the secondary site with the files on the primary site. They can be found in the following directory: $(DIR_GLOBAL)/hdb/security/ssfs.
The SSFS data file is expected to be the same on both sites, but in this case the key files will differ.

1.) If the key file exists on both sites and differs, copy the SSFS key from the primary site to the secondary site.
2.) If no SSFS key is available on the primary site, delete the key file on the secondary site.
Once the problem has been resolved, it can reappear only after the SSFS key has been changed again on the primary site.
The programming error will be corrected in one of the upcoming revisions.
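The comparison of the two sites can be scripted with a byte-compare. The helper below is a generic sketch, not an SAP tool; the directory paths and the SSFS_<SID>.DAT / SSFS_<SID>.KEY file names are assumptions to be checked against your own $(DIR_GLOBAL)/hdb/security/ssfs directory. Per the note, the data files should report as identical, while the key files may legitimately differ.

```shell
# Sketch: byte-compare one SSFS file between the two sites.  The directory
# arguments would typically point at copies of the ssfs directory fetched
# from the primary and secondary site.
compare_site_file() {   # usage: compare_site_file <primary_dir> <secondary_dir> <file>
  if cmp -s "$1/$3" "$2/$3"; then
    echo "$3: identical"
  else
    echo "$3: different"
  fi
}
# hypothetical SID "HDB" and staging paths:
# compare_site_file /backup/primary_ssfs /backup/secondary_ssfs SSFS_HDB.DAT
# compare_site_file /backup/primary_ssfs /backup/secondary_ssfs SSFS_HDB.KEY
```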


2202893 – HANA database Fail to start up during upgrade to Rev100 or Rev101


Symptom

If the HANA database was first installed before Revision 60 and was upgraded to Revision 100 or 101, startup of any HANA service can fail with the following error:
[00000]{0}[0/0] 0000-00-00 00:00:00.000000 e Service_Startup  md_conv_util.cc(…) ……
[00000]{0}[0/0] 0000-00-00 00:00:00.000000 e assign TREXIndexServer.cpp(…) : assign failed with ltt exception. stopping service… :
……
[00000]{0}[0/0] 0000-00-00 00:00:00.000000 i Service_Shutdown TrexService.cpp(…) : Preparing for shutting service down
Or
[00000]{0}[0/0] 0000-00-00 00:00:00.000000 e Row_Engine msglog.cc(…) :
10638[thr=109420]: Assign at

5: 0x00007f8a08ab2d31 in ptime::Catalog::runAutomatedMigration(bool)+0xb00 at catalog.cc:2197 (libhdbrskernel.so)
6: 0x00007f8a08ac589a in ptime::Catalog::convertDBImageAfterRestart()+0x6b6 at catalog.cc:596 (libhdbrskernel.so)
7: 0x00007f8a0a2b250f in ptime::PTimeFactory::checkAndCompensateMetadata()+0x1db at ptime_factory.cc:692 (libhdbrskernel.so)
8: 0x00007f8a2960813d in TRexAPI::TREXIndexServer::assign(NameServer::ServiceStartInfo&, bool, TREX_ERROR::TRexError&)+0x4b9 at TREXIndexServer.cpp:1256 (hdbindexserver)
9: 0x00007f8a29640ca5 in TRexAPI::AssignThread::run(void*)+0x31 at TREXIndexServer.cpp:537 (hdbindexserver)
10: 0x00007f8a1e5090eb in TrexThreads::PoolThread::run()+0x9b7 at PoolThread.cpp:284 (libhdbbasement.so)
11: 0x00007f8a1e50b1a0 in TrexThreads::PoolThread::run(void*&)+0x10 at PoolThread.cpp:134 (libhdbbasement.so)
12: 0x00007f8a05c2cc39 in Execution::Thread::staticMainImp(void**)+0x7e5 at Thread.cpp:492 (libhdbbasis.so)
13: 0x00007f8a05c2dafd in Execution::Thread::staticMain(void*)+0x39 at ThreadMain.cpp:26 (libhdbbasis.so)
[00000]{0}[0/0] 0000-00-00 00:00:00.000000 e Row_Engine msglog.cc(00082) : error during execution of Auto migration. (ecode: 16) detail : statement retry (at ptime/query/catalog/systables.cc:…)
……
[00000]{0}[0/0] 0000-00-00 00:00:00.000000 e assign TREXIndexServer.cpp(01724) : assign failed with ltt exception. stopping service… :
……
[00000]{0}[0/0] 0000-00-00 00:00:00.000000 i Service_Shutdown TrexService.cpp(00784) : Preparing for shutting service down

Important Terms:
Platform components: SAP HANA database
Technical terms: start up failure, upgrade failure

Reason and Prerequisites

Reason:
Due to a programming error, the result of the Rev60 metadata migration can lead to a failure of the Rev100 metadata migration.
Prerequisites:
The SAP HANA database was first installed before Revision 60 and was upgraded to Revision 100 or 101. If it was first installed on Revision 60 or later, this issue does not occur.
For example, a database upgraded through RevXX -> Rev6x -> Rev69.07 (-> RevXX) -> Rev100 / Rev101 can hit this issue.
This issue does not occur on the upgrade path Rev60 -> Rev69.07 (-> RevXX) -> Rev100 / Rev101, because the Rev60 metadata migration was not executed.

Solution

This issue is fixed in Revision 102. If your HANA database was first installed before Revision 60 and you plan to upgrade to SPS 10, please use Revision 102 or later.
Once this issue has happened, recover the system from a backup.

[Solved] Statistics server on the secondary system is not starting after Rev97 Upgrade


Symptom

After upgrading a HANA system to Revision 97 or higher in an HA environment, the statistics server is migrated to the embedded statistics service (ESS). However, you still see the statistics server on the secondary system with RED status. You also see the following setting in the daemon.ini file on the secondary system:

[statisticsserver]
instances = 1

Even if you change it to 0, it is set back to 1 automatically after a restart.

Root Cause

The root cause is that the statisticsserver node in the topology was not removed correctly during the ESS migration.
You can see an empty statistics server entry in the topology.txt file.

Solution

Solution Note: 2207797
Remove the SAP HANA statisticsserver service entry from the topology on the PRIMARY system.
You can use the following command for each <hostname> that had a statisticsserver service before the upgrade:
ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') UNSET ('/host/<hostname>', 'statisticsserver') WITH RECONFIGURE

Note: 2164506 – Registering SAP HANA system replication in the context of the migration to the Embedded Statistics Server could lead to crashes

How to Install HANA Multitenant Database Container System


What is SAP HANA Multitenant Database?
A multitenant database container SAP HANA system is a system that contains one system database and
multiple tenant databases.

Steps to Installing a Multitenant Database Container SAP HANA System

1. Change to the following directory on the installation medium:
<installation medium>/DATA_UNITS/HDB_LCM_LINUX_X86_64
2. Start the SAP HANA database lifecycle manager interactively in the graphical user interface:
./hdblcmgui
The SAP HANA database lifecycle manager graphical user interface appears.
3. Select a detected software component or add a software component location by selecting Add
Component Location. Then select Next.
4. Select Install New System, then select Next.
5. Select the components you would like to install, then select Next.
6. Choose whether your SAP HANA system is a single-host or a multiple-host system, then select Next.
7. Specify the SAP HANA system properties. Select the multiple_containers value for the Database Mode property to configure the system to support multitenant database containers.
8. After specifying all system properties, review the summary, and select Install.
