The Basics of an Effective Test Environment


Karen Morton (karen.morton@hotsos.com)
Hotsos Enterprises, Ltd.
Hotsos Symposium 2005, 6-10 March 2005

www.hotsos.com

Copyright © 2005 by Hotsos Enterprises, Ltd.


Agenda

• The ideal test environment
• Case study



The ideal test environment


Testing should occur in a standalone environment first.

• Allows the tester to see how different design and coding methods affect resource usage
• Test without competing for
  – Resources
    • CPU
    • I/O
    • Latches
  – Schema or instance modification effects
    • New/modified indexes or constraints
    • Changes to instance parameters



A simple, yet “all-inclusive”, test environment increases your ability to effectively and efficiently optimize SQL.

• The test environment should
  – Collect and store key instance and schema information in effect at the time of the test (ensures repeatability)
  – Collect and store key performance metrics
    • Including V$ view snapshots, extended SQL trace data, optimizer trace data, etc.
  – Eliminate the need for DBA intervention to retrieve key information, particularly trace data files
  – Allow testers access only to trace data they create
  – Allow comparison of multiple tests over time


To ensure repeatability, it is important to capture the current state of the environment before a test.

• Parameter settings (V$PARAMETER) – settings of all parameters that affect the optimizer
• Indexes (ALL_INDEXES, ALL_IND_COLUMNS) – indexes on columns used in the query
• Constraints (ALL_CONSTRAINTS, ALL_CONS_COLUMNS) – constraints on columns used in the query
• Statistics (ALL_TABLES, ALL_INDEXES, ALL_TAB_COLUMNS) – system, table, column, and index statistics
• Data skew (ALL_TAB_HISTOGRAMS) – distribution of values in columns used in the query
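As a minimal sketch of the parameter capture, a query like the one below can store the optimizer-related rows of V$PARAMETER with each test. The test_param_snap table, the :test_id bind, and the particular parameter list are illustrative assumptions, not part of any specific harness.

-- Hypothetical snapshot of optimizer-related parameters for one test.
-- test_param_snap and :test_id are assumed names for illustration only.
INSERT INTO test_param_snap (test_id, name, value)
SELECT :test_id, name, value
  FROM v$parameter
 WHERE name IN ('optimizer_mode', 'optimizer_index_cost_adj',
                'optimizer_index_caching', 'db_file_multiblock_read_count',
                'sort_area_size', 'hash_area_size');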



Each test should capture snapshots of several key metrics.

• Session resource statistics (V$SESSTAT)
• Latch acquisitions (V$LATCH)
• Timing (V$TIMER, dbms_utility.get_time)
• Execution plan (V$SQL_PLAN, V$SQL_PLAN_STATISTICS)
• Extended SQL trace data (10046 event)
• Optimizer trace data (10053 event)
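A minimal sketch of one such snapshot, using V$MYSTAT (the current session's slice of V$SESSTAT); the test_stat_snap table and :test_id bind are assumptions for illustration.

-- Hypothetical before-test snapshot of the session's statistics.
INSERT INTO test_stat_snap (test_id, snap_type, stat_name, value)
SELECT :test_id, 'BEFORE', n.name, m.value
  FROM v$mystat m, v$statname n
 WHERE m.statistic# = n.statistic#;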


Session statistics show how many times a resource is used, or how much of a resource is used, by your code.

• The V$SESSTAT view contains statistics at the session level
• Snapshot values taken immediately before and after a test can be compared to determine overall resource usage

STATISTIC              BEFORE   AFTER   USAGE
consistent gets           100     200     100
db block gets               5       5       0
physical reads            120     300     180
physical writes             0       0       0
session logical reads     105     205     100
sorts (disk)                0       2       2
sorts (memory)              0       4       4
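A sketch of the before/after comparison, assuming the same hypothetical test_stat_snap table as in the earlier snapshot sketch.

-- Compute usage as AFTER minus BEFORE for one test.
SELECT b.stat_name,
       b.value           AS before_value,
       a.value           AS after_value,
       a.value - b.value AS usage
  FROM test_stat_snap b, test_stat_snap a
 WHERE b.test_id = :test_id AND b.snap_type = 'BEFORE'
   AND a.test_id = :test_id AND a.snap_type = 'AFTER'
   AND a.stat_name = b.stat_name
   AND a.value - b.value > 0
 ORDER BY b.stat_name;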


Latch statistics can help assess the scalability of a SQL statement.

• The V$LATCH view contains instance-wide latch acquisition statistics
• A latch is an internal Oracle mechanism used to protect data structures in the SGA from simultaneous access
  – Atomic hardware instructions like TEST-AND-SET are used to implement latches
  – Latches are more restrictive than locks in that they are exclusive (version 9 introduced some shared latches)
  – Latches are not queued; a process will spin or sleep until it obtains the latch, or time out
  – Fewer latch acquisitions translate to better scalability
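A sketch of the latch snapshot; because V$LATCH is instance-wide, before/after values are meaningful only on the quiet, standalone system argued for earlier.

-- Snapshot the latches of interest before and after the test.
SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE name IN ('cache buffers chains', 'library cache',
                'shared pool', 'row cache objects')
 ORDER BY name;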


The execution plan shows the detailed step-by-step path by which the result set was retrieved.

• The V$SQL_PLAN view contains the actual execution plan of previously executed SQL statements
• The ADDRESS, HASH_VALUE and CHILD_NUMBER columns join with V$SQL, V$SQLAREA or V$SQLTEXT to get the specific SQL
• V$SQL_PLAN_STATISTICS contains actual statistics for each plan step (STATISTICS_LEVEL must be set to ALL)

---------------------------------------------------------------------------------------
| Id | Operation                   | Name      | Rows | Bytes | Cost | CR | PR | TIME |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |           |    1 |    33 |    2 |    |    |      |
|  1 |  TABLE ACCESS BY INDEX ROWID| EMP       |    1 |    33 |    2 |  3 |  0 |   75 |
|* 2 |   INDEX RANGE SCAN          | ENAME_IDX |    1 |       |    1 |  2 |  0 |   46 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EMP"."ENAME"='KING')
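A sketch of retrieving the plan rows for a just-executed statement; the LIKE pattern is simply one convenient way to locate the cursor.

-- Join V$SQL to V$SQL_PLAN on address, hash value, and child number.
SELECT p.id, p.operation, p.options, p.object_name,
       p.cardinality, p.cost
  FROM v$sql s, v$sql_plan p
 WHERE s.sql_text LIKE 'UPDATE t SET flag%'
   AND p.address      = s.address
   AND p.hash_value   = s.hash_value
   AND p.child_number = s.child_number
 ORDER BY p.id;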



Extended SQL (10046) trace data provides the detail of everything that occurred during query execution.

• Trace file contents
  – Preamble
  – Session id and timestamp
  – First action shown is an EXEC (the one that activated tracing)
  – Then the PARSE, EXEC, FETCH for the SELECT
  – A few WAIT lines
  – Then a STAT line showing the execution plan for the SELECT
• Details what actually did happen (not just what might happen)
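Enabling and disabling the trace for the current session is straightforward; level 12 includes both bind and wait data, and the tracefile_identifier setting simply tags the file name so the tester can find it.

-- Tag the trace file name, then enable extended SQL trace.
ALTER SESSION SET tracefile_identifier = 'update_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... execute the statement under test ...

ALTER SESSION SET EVENTS '10046 trace name context off';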


The 10053 event details the choices made by the CBO in evaluating the execution path for a query.

• The trace consists of multiple sections:
  – Query
  – Parameters used by the optimizer
  – Base Statistical Information
  – Base Table Access Cost
  – General Plans
  – Recosting for special features
• Details the calculations and choices evaluated by the CBO
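The 10053 trace is written only when the statement is hard parsed, so force a parse; EXPLAIN PLAN is one convenient way to do that. The statement here is the case-study UPDATE.

ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- EXPLAIN PLAN forces the CBO to evaluate the statement.
EXPLAIN PLAN FOR UPDATE t SET flag = 'B' WHERE status = 2;

ALTER SESSION SET EVENTS '10053 trace name context off';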



Occasionally, additional information may be needed to pinpoint specific problems.

• Events 10032 and 10033 – sort operations
• Event 10104 – hash join operations
• Events 10393 and 10390 – PX operations
• Events 10241 and 10079 – remote SQL execution
• . . . and others . . .


Test information should be stored for later review.

• Create a set of tables in the test schema to hold snapshots of collected information (see the sketch below)
  – Each test should be uniquely identifiable
  – Insertions into these tables should not intrude upon the statistics for the test itself
• Execution of a test should automatically collect and store the snapshot data
  – Access to trace data can be automated with procedures to copy information from USER_DUMP_DEST directly to the tester's schema
• Reviewing collected test data then simply requires selecting from a table
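A minimal sketch of one such snapshot table, with assumed names; a real harness would add companion tables for latches, plans, and parameters.

-- Hypothetical storage for session-statistic snapshots; names are
-- illustrative only.
CREATE TABLE test_stat_snap (
  test_id    NUMBER,                   -- uniquely identifies each test
  snap_type  VARCHAR2(6),              -- 'BEFORE' or 'AFTER'
  stat_name  VARCHAR2(64),
  value      NUMBER,
  snap_time  DATE DEFAULT SYSDATE
);

CREATE SEQUENCE test_id_seq;           -- supplies the unique test id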



The main idea is to have everything needed to review and compare performance available in one place.

• The schema where testing occurs should own the objects where test results are stored
• The DBA should not need to intervene to provide access to trace data generated during testing
• Simple SQL*Plus scripts can be used to simplify executing tests and retrieving stored test results (a sketch follows)
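As an illustration only, such a driver script might look like the following; the script name, the snap_stats helper procedure, and the sequence are all assumptions, not part of any published harness.

-- runtest.sql: hypothetical SQL*Plus driver. Usage: @runtest stmt.sql
-- snap_stats is an assumed helper that snapshots V$MYSTAT and V$LATCH
-- into the storage tables, keyed by test id.
VARIABLE tid NUMBER
EXEC SELECT test_id_seq.NEXTVAL INTO :tid FROM dual
EXEC snap_stats(:tid, 'BEFORE')

@&1

EXEC snap_stats(:tid, 'AFTER')
PRINT tid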


Look for approaches that require lower resource consumption and have plans that scale well.

• Execute different ways of writing the statement that satisfy the functional requirement
• Create, modify or remove indexes and constraints
• Create, modify or remove histograms
• Compare approaches (a comparison sketch follows) and review
  – logical IO
  – latch acquisitions
  – sorting (both in memory and disk)
  – redo generation
  – execution plans
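A sketch of a side-by-side comparison, again assuming the hypothetical test_stat_snap table and illustrative bind names.

-- Pivot the usage of two stored tests into adjacent columns.
SELECT a.stat_name,
       MAX(DECODE(a.test_id, :baseline_id, a.value - b.value)) AS baseline_usage,
       MAX(DECODE(a.test_id, :rewrite_id,  a.value - b.value)) AS rewrite_usage
  FROM test_stat_snap b, test_stat_snap a
 WHERE b.snap_type = 'BEFORE'
   AND a.snap_type = 'AFTER'
   AND a.test_id   = b.test_id
   AND a.stat_name = b.stat_name
   AND a.test_id IN (:baseline_id, :rewrite_id)
 GROUP BY a.stat_name
 ORDER BY a.stat_name;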



Focus on a few primary metrics and use others to fill in the gaps as needed.

• There are over 250 different statistics captured in V$SESSTAT and over 240 different latches captured in V$LATCH (v9)
• Don’t spend time trying to understand every single one
• Focus on key metrics
  – Statistics: consistent gets, db block gets, physical reads, physical writes, sorts (disk), sorts (memory), buffer is pinned count, redo size
  – Latches: cache buffers chains, enqueues, library cache, row cache objects, shared pool


Case study



Case study: Testing a poorly performing UPDATE.

• Statement executes frequently
• There is an index on the status column
• Table being updated increases in size by 10-15% weekly
• Performance is growing worse over time


Step 1: Setting up the test.

• Review
  – statistics for all objects used in the statement
  – indexes and constraints
  – distribution of values for columns in the predicate (see the histogram query below)
• Create individual files containing each statement to test
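A sketch of the distribution review, using the standard dictionary view; the table and column names come from the case study.

-- Review stored histogram data for the predicate columns of table T.
SELECT column_name, endpoint_number, endpoint_value
  FROM all_tab_histograms
 WHERE table_name = 'T'
   AND column_name IN ('STATUS', 'FLAG')
 ORDER BY column_name, endpoint_number;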



The problem statement...

UPDATE t SET flag = 'B' WHERE status = 2;


Review the table statistics...

=======================================================
                   Table Statistics
=======================================================
TABLE_NAME    : T
LAST_ANALYZED : 07-MAR-2005 00:36:02
DEGREE        : 1
PARTITIONED   : NO
NUM_ROWS      : 100000
CHAIN_CNT     : 0
BLOCKS        : 244
EMPTY_BLOCKS  : 12
AVG_SPACE     : 2010
AVG_ROW_LEN   : 13
MONITORING    : NO
SAMPLE_SIZE   : 100000



Review the column statistics...

===================================================
                 Column Statistics
===================================================
Name     Null?       NDV      Density   # Nulls
===================================================
PKEY     NOT NULL    100000   .000010   0
STATUS               3        .333333   0
FLAG                 2        .500000   0
===================================================


Review the index statistics...

===========================================================
                    Index Information
===========================================================
INDEX_NAME                SYS_C007306   T_N1   T_N2   T_N3
BLEVEL                              1      1      1      1
LEAF_BLOCKS                       187    315    291    346
DISTINCT_KEYS                  100000      3      2      6
AVG_LEAF_BLOCKS_PER_KEY             1    105    145     57
AVG_DATA_BLOCKS_PER_KEY             1     68    204     68
CLUSTERING_FACTOR                 204    206    408    412

Index           Flags   Height   Column Name
--------------- ------- -------- --------------
SYS_C007306     U       2        PKEY
T_N1                    2        STATUS
T_N2                    2        FLAG
T_N3                    2        STATUS, FLAG



Review the distribution of data...

              Block         Block   Row           Row
FLAG  STATUS  selectivity   count   selectivity   count
              (pb = b/B)    (b)     (pr = r/R)    (r)
----  ------  ------------  ------  ------------  -------
A     2       64.84%        166     1.60%         1,600
B     2       64.84%        166     78.40%        78,400

SELECT count(*) FROM t WHERE status = '2' AND flag = 'A';
SELECT count(*) FROM t WHERE status = '2' AND flag = 'B';


Step 2: Execute a baseline test for the “bad” statement.

• Test execution will run the query and gather
  – Snapshots of V$SESSTAT, V$LATCH, V$TIMER
  – Extended SQL trace data (10046)
  – Optimizer trace data (10053)
  – Execution plan (V$SQL_PLAN)
• Review collected information



Step 3: Execute a test for a rewritten version of the statement.

UPDATE t
   SET flag = 'B'
 WHERE status = 2
   AND flag != 'B';


Step 4: Compare the results of the baseline test and the rewritten statement test.

• Check differences in
  – Overall resource usage (statistics and latching)
  – Execution plans
• Does the rewritten statement show improvement?



Continue to test different approaches until results meet performance requirements.

• Each test may
  – attempt different approaches to solving the query
  – add/change/delete indexes and constraints
  – change parameters to check how they affect optimizer plan choice

• Take the winner and test it under “stress”


Recap

• An efficient test environment can be created with simple scripts and objects
• Testers gain the ability to execute and compare multiple tests and maintain their results for comparison over time
• Test statements requiring the fewest overall resources are typically the most scalable and best performing in the long run
• “Winning” test statements can then be moved to a load-testing environment to further examine their efficiency



Thanks!


Hotsos: Come see us…

• Thought leadership
  – Optimizing Oracle Performance
  – Oracle Insights
  – Method R
• Products
  – Hotsos Profiler
  – Laredo
  – Interceptor technologies
• Services
  – 1-week performance assessment
  – On-site consulting and education
  – Remote consulting
• Education
  – Oracle performance curriculum
  – Hotsos Symposium



References

Hotsos Enterprises, Ltd. 2004-2005. Optimizing Oracle SQL, Part I. www.hotsos.com/courses/OP101.php. This course uses a script-based test environment to teach SQL optimization concepts and techniques.

Kyte, T. 2003. Effective Oracle by Design. Emeryville, CA: McGraw-Hill/Osborne. This book provides an excellent example of how to create a simple test environment.

Still, J. 2003. A Novel Use for External Tables. www.dbazine.com/still1.shtml. This article provides an overview of how external files may be used to locate and view udump and bdump trace files.


