Wednesday, February 25, 2009

Safari - Performance

Apple came out with its new browser, Safari 4 Beta, and as a performance guy, I found something really interesting. Apple claims "Safari’s Nitro Engine executes JavaScript up to 30 times faster than Internet Explorer 7 and more than 3 times faster than Firefox 3."


As I said earlier here, most performance testing tools don't have the ability to report end-user response time, one reason being the different JavaScript engines used by different browser vendors.

For more information about Safari and its performance, check here and here

As usual, comments are always welcome.


Friday, February 20, 2009

Oracle 11g - Memory Management Made Simple

As and when Oracle releases a new version of the database, DBAs get a new bunch of features.
Before Oracle 9i, the DBA had to maintain a whole bunch of memory parameters such as DB_BLOCK_BUFFERS, SHARED_POOL_SIZE, the set of PGA *_AREA_SIZE parameters,
LOG_BUFFER and so on. The real complexity before 9i was assigning correct values to the *_AREA_SIZE
(PGA) parameters, as these values are not for the whole instance; they apply to EACH Oracle user process (and the Oracle user process count can grow up to the PROCESSES setting).

In 9i, Oracle introduced two new parameters, PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY, which let users assign a single value for the whole instance's PGA; the Oracle instance itself then self-manages PGA memory. In earlier versions, the DBA had to assign values to the *_AREA_SIZE parameters, which allot the same size to every session.

Oracle 10g went to the next level from 9i by automating memory management of the whole SGA. Instead of the DB_CACHE_SIZE, SHARED_POOL_SIZE, JAVA_POOL_SIZE and
LOG_BUFFER parameters, the user has to manage just SGA_TARGET and SGA_MAX_SIZE (see diagram).

Oracle 11g went to the next level from 10g of automatic memory management, where the whole instance can be controlled by the initialization parameter MEMORY_TARGET and a maximum memory size initialization parameter, MEMORY_MAX_TARGET. Oracle Database then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and the instance program global area (instance PGA).

Oracle doc says

MEMORY_TARGET specifies the Oracle system-wide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.

MEMORY_MAX_TARGET specifies the maximum value to which a DBA can set the MEMORY_TARGET initialization parameter.

Here is how internally the memory management parameters control the memory structure if MEMORY_TARGET is set to a non-zero value:

  • If SGA_TARGET and PGA_AGGREGATE_TARGET are not set, it will distribute the total server memory in a fixed ratio as 60% and 40% and assign it to SGA_TARGET and PGA_AGGREGATE_TARGET respectively.
  • If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they will be considered the minimum values for the sizes of SGA and the PGA respectively (But sum of SGA_TARGET and PGA_AGGREGATE_TARGET should be less than or equal to MEMORY_TARGET).
  • If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, we will still auto-tune both parameters. PGA_AGGREGATE_TARGET will be initialized to a value of (MEMORY_TARGET-SGA_TARGET).
  • If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, we will still auto-tune both parameters. SGA_TARGET will be initialized to a value of min(MEMORY_TARGET-PGA_AGGREGATE_TARGET, SGA_MAX_SIZE (if set by the user)).
Note: MEMORY_TARGET can be dynamically increased up to MEMORY_MAX_TARGET without bouncing the instance. Reducing MEMORY_TARGET, on the other hand, is just a request to the Oracle server, which will be honoured in the course of time.
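The four rules above can be sketched in Python. This is only an illustration of the arithmetic described, not Oracle code; the function name and arguments are made up, and sizes are in bytes:

```python
def derive_targets(memory_target, sga_target=None, pga_target=None,
                   sga_max_size=None):
    """Illustrative sketch of how 11g derives SGA_TARGET and
    PGA_AGGREGATE_TARGET defaults when MEMORY_TARGET is non-zero."""
    if sga_target is None and pga_target is None:
        # Rule 1: fixed 60/40 split of the total server memory
        return int(memory_target * 0.6), int(memory_target * 0.4)
    if sga_target is not None and pga_target is None:
        # Rule 3: PGA_AGGREGATE_TARGET starts at MEMORY_TARGET - SGA_TARGET
        return sga_target, memory_target - sga_target
    if sga_target is None:
        # Rule 4: SGA_TARGET starts at MEMORY_TARGET - PGA_AGGREGATE_TARGET,
        # capped by SGA_MAX_SIZE if the user set one
        sga = memory_target - pga_target
        if sga_max_size is not None:
            sga = min(sga, sga_max_size)
        return sga, pga_target
    # Rule 2: both values act as minimums; their sum must not exceed
    # MEMORY_TARGET
    assert sga_target + pga_target <= memory_target
    return sga_target, pga_target

one_gb = 1024 * 1024 * 1024
print(derive_targets(one_gb))  # 60/40 split when neither target is set
```
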

Automatic Memory Management in Oracle 11g can be monitored using the views V$MEMORY_DYNAMIC_COMPONENTS (current sizes of all memory components) and V$MEMORY_RESIZE_OPS (a circular history of the last 800 memory resize requests).

C:\Documents and Settings\tchettia>sqlplus

SQL*Plus: Release - Production on Mon Jul 13 11:43:05 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

sys@ORCL> set pagesize 50
sys@ORCL> col COMPONENT format a30

sys@ORCL> select component, current_size from v$memory_dynamic_components;

COMPONENT                      CURRENT_SIZE
------------------------------ ------------
shared pool 192937984
large pool 4194304
java pool 12582912
streams pool 0
SGA Target 536870912
DEFAULT buffer cache 318767104
KEEP buffer cache 0
RECYCLE buffer cache 0
DEFAULT 2K buffer cache 0
DEFAULT 4K buffer cache 0
DEFAULT 8K buffer cache 0
DEFAULT 16K buffer cache 0
DEFAULT 32K buffer cache 0
Shared IO Pool 0
PGA Target 318767104
ASM Buffer Cache 0

16 rows selected.

sys@ORCL> col PARAMETER format a30

sys@ORCL> select parameter, oper_type, initial_size, target_size, final_size, status from v$memory_resize_ops;

PARAMETER                      OPER_TYPE     INITIAL_SIZE TARGET_SIZE FINAL_SIZE STATUS
------------------------------ ------------- ------------ ----------- ---------- ---------
shared_pool_size STATIC 0 192937984 192937984 COMPLETE
db_cache_size INITIALIZING 318767104 318767104 318767104 COMPLETE
java_pool_size STATIC 0 12582912 12582912 COMPLETE
streams_pool_size STATIC 0 0 0 COMPLETE
sga_target STATIC 0 536870912 536870912 COMPLETE
db_cache_size STATIC 0 318767104 318767104 COMPLETE
db_keep_cache_size STATIC 0 0 0 COMPLETE
db_recycle_cache_size STATIC 0 0 0 COMPLETE
db_2k_cache_size STATIC 0 0 0 COMPLETE
db_4k_cache_size STATIC 0 0 0 COMPLETE
db_8k_cache_size STATIC 0 0 0 COMPLETE
db_16k_cache_size STATIC 0 0 0 COMPLETE
db_32k_cache_size STATIC 0 0 0 COMPLETE
pga_aggregate_target STATIC 0 318767104 318767104 COMPLETE
db_cache_size STATIC 0 0 0 COMPLETE
large_pool_size STATIC 0 4194304 4194304 COMPLETE

16 rows selected.

For more information on the Oracle database, refer to the documentation and support sites here and here.

Quite obviously, memory tuning becomes much easier.

As usual, comments are always welcome.


Added by Thiru: Almost all parameters available in 9i and 10g are still available in 11g.

Tuesday, February 17, 2009

vmstat - Virtual Memory Statistics

If someone asks me to check how a Linux/UNIX system is performing right now, the first thing I do is run vmstat. A lot of people check only the CPU and memory utilization statistics in vmstat. But in reality, it gives more information than just CPU and memory. In this post, I am going to explain vmstat in detail.

vmstat stands for virtual memory statistics; it collects and displays summary information about memory, processes, interrupts, paging and block I/O information. By specifying the interval, it can be used to observe system activity interactively.

Most commonly, people use two numeric arguments with vmstat: the first is the delay or sleep between updates, and the second is how many updates you want to see before vmstat quits. Please note this is not the full syntax of vmstat, and it can vary between OSs. Refer to your OS man page for more information.

To run vmstat with 7 updates, 10 seconds apart, type

#vmstat 10 7

Please note, on some systems the reported metrics might be slightly different. The headings I describe below are as reported on Oracle Linux (Unbreakable Oracle Linux).

Procs Block: Provides details of processes which are waiting for something (it can be CPU, I/O etc.; potentially any resource)
r  -->  Processes waiting for CPU. The higher the count, the more processes are waiting to run. If we just observe a spike in the count, we shouldn’t treat it as a bottleneck. If the value is constantly high (most people treat 2 * CPU count as high), it hints that CPU is the bottleneck.
b  -->  Uninterruptible sleeping processes, also known as “blocked” processes. These are most likely waiting for I/O, but could be waiting for something else too
w  -->  Number of processes that are runnable but have been swapped out to the swap area. This counter gives a hint about a memory bottleneck. Remember, only some systems report this count

Memory Block: Provides detailed memory statistics
swpd  -->  Amount of virtual (swap) memory used
free  -->  Amount of free physical memory (RAM)
buff  -->  Amount of memory used as buffers. This memory is used to store file metadata such as i-nodes and data from raw block devices
cache  -->  Amount of physical memory used as cache (mostly cached files)

Note: Most systems report memory block values in KB. Remember, I said most, not all. So check the man page.

Swap Block: Provides memory swap information
si  -->  Rate at which the memory is swapped back from the disk to the physical RAM
so  -->  Rate at which the memory is swapped out to the disk from physical RAM

Note: Most systems report swap block values in KB. Check the man page.

I/O Block: I/O related information
bi  -->  Rate at which the system reads data in from block devices (in blocks/sec)
bo  -->  Rate at which the system writes data out to block devices (in blocks/sec)

System Block: System information
in  -->  Number of interrupts received by the system per second
cs  -->  Rate of context switching in the process space (in number/sec)

CPU block: The most used CPU-related information
us  -->  Percentage of CPU time spent in user processes. Most user/application/database processes come under this category
sy  -->  Percentage of CPU time used by system processes, such as all root/kernel processes
id  -->  Percentage of idle CPU
wa  -->  Percentage of CPU time spent waiting for I/O

A lot of people have problems here. Some say us + sy + id + wa = 100 and others say us + sy + id = 100. I stick with the second (I/O wait doesn’t consume CPU).

Interpretation with respect to performance:

The first line of the output is an average of all the metrics since the system was restarted. So, ignore that line since it does not show the current status. The other lines show the metrics in real time.

Ideally, the r/b/w values under the procs block should be 0 or close to 0. If one or more of these counters constantly reports high values, it means the system may not have sufficient CPU, memory or I/O bandwidth.

If the value of swpd under swap is too high and keeps changing, it means the system is running short of memory.

The data under “io” indicates the flow of data to and from the disks. This shows how much disk activity is going on, which does not necessarily indicate a problem (obviously, data has to go to disk in order to be persistent). But if we see large numbers in the “b” column of the “procs” block (blocked processes) together with high I/O, the issue could be I/O contention.
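The checks above can be sketched with a tiny parser. The sample output and the CPU count are made up for illustration; the 2 * CPU-count rule of thumb is the one mentioned earlier:

```python
# Hypothetical `vmstat 10 7`-style output (Linux column layout assumed).
SAMPLE = """\
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 812344 123456 654321    0    0     5    10  100  200  5  2 92  1
 9  1      0 798000 123456 654321    0    0     7    12  150  900 85 10  3  2
"""

FIELDS = ["r", "b", "swpd", "free", "buff", "cache", "si", "so",
          "bi", "bo", "in", "cs", "us", "sy", "id", "wa"]

def parse_vmstat(text):
    """Keep only the purely numeric rows and label each value."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if parts and all(p.isdigit() for p in parts):
            rows.append(dict(zip(FIELDS, map(int, parts))))
    return rows

def cpu_pressure(samples, cpus):
    """Flag samples whose run queue (r) exceeds 2 * CPU count.
    The first sample is skipped: it is an average since boot."""
    return [s for s in samples[1:] if s["r"] > 2 * cpus]

samples = parse_vmstat(SAMPLE)
print([s["r"] for s in cpu_pressure(samples, cpus=4)])  # prints [9]
```
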

Rule of thumb in Performance
Adding more CPU, memory or I/O bandwidth to the system is not always the solution to the problem; it just postpones the problem, and it can blow up anytime. The real solution is to tune the application (every component in the architecture) as far as possible. Adding hardware capacity or buying more powerful hardware should be the last option.

As usual, comments are always welcome.


Tuesday, February 10, 2009

High Performance Oracle Database Export using Data Pump

Oracle offers high performance not only in DDL and DML, but also in data backup and recovery. Oracle 10g offers new export/import utilities called Data Pump. The biggest advantage of using Data Pump is that it can push the system’s hardware capabilities to the roof to achieve performance.

Oracle Data Pump is the replacement for the original Export and Import utilities. Available starting in Oracle Database 10g, Oracle Data Pump enables very high-speed movement of data and metadata from one database to another.

The Data Pump Export and Import utilities have a similar look and feel to the original utilities, but they are much more efficient and give you greater control and management of your import and export jobs.

Unlike the original export/import utilities, Data Pump runs as jobs primarily on the server, using server processes, and stores the data on the database server itself, avoiding network latency (except over a database link).

The Data Pump utilities use direct file manipulation on directory objects, which helps achieve this speed. Since directory objects are managed from within Oracle, security can be controlled by the DBA.

A practical example of export using Data Pump (expdp)
1. Create the directory object
Data Pump works by direct file manipulation on a database directory object. So before invoking the Data Pump utilities, we need to create a directory object and grant the required permissions. See the sample below.

create directory hr_backup_dir as '/u02/dbbackup';
grant read, write on directory hr_backup_dir to hr;

2. Decide on parallelism
If the whole system's resources are dedicated to the database backup, then PARALLEL can be set to the number of processors available in the system. If you want to assign only limited processors for the backup, set the PARALLEL count accordingly. Based on the PARALLEL count, Data Pump will spawn that many child Oracle processes.
Parallelism also depends on the number of DUMPFILEs specified in expdp. Each Data Pump thread needs exclusive access to a DUMPFILE. So if you set PARALLEL to 8 and give only 1 DUMPFILE, then only 1 Data Pump process will do the export and all other processes will wait for the first process to finish and release the dump file. So plan both the PARALLEL and DUMPFILE parameters together.

3. Run expdp
Here we go

expdp hr/hrworks SCHEMAS=HR DIRECTORY= hr_backup_dir DUMPFILE=hr.thiru.10-feb-2009%U.dmp LOGFILE=hr.thiru.10-feb-2009.log PARALLEL=8 JOB_NAME=HRDataPumpByThiru

If you specify %U in DUMPFILE, Data Pump will automatically create the required number of files (most probably equal to PARALLEL).
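As a quick illustration of that substitution (Data Pump expands %U to a fixed-width, two-digit integer starting at 01), here is a Python sketch; the function is hypothetical, not part of any Oracle API:

```python
def expand_dumpfile_template(template, count):
    # %U becomes a fixed-width, two-digit incrementing integer: 01, 02, ...
    return [template.replace("%U", f"{i:02d}") for i in range(1, count + 1)]

# With PARALLEL=3, Data Pump would typically create three files:
print(expand_dumpfile_template("hr.thiru.10-feb-2009%U.dmp", 3))
# ['hr.thiru.10-feb-200901.dmp', 'hr.thiru.10-feb-200902.dmp',
#  'hr.thiru.10-feb-200903.dmp']
```
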

4. Check the log file
Once the export has completed, we can check the details in the hr.thiru.10-feb-2009.log file.

Interactive Mode
While a Data Pump export operation is running, the user can press Ctrl+C; this stops the messages on screen, but it does not stop the export operation. Instead, it displays an interactive Export> prompt. From here the user can query the status of the export operation and also alter Data Pump parameters such as PARALLEL and so on.

Database Monitoring
The DBA, or any other user who has permission, can view the status of the export operation from database views, too. The main view for monitoring jobs is DBA_DATAPUMP_JOBS, which tells how many worker processes (column DEGREE) are working on the job. The other important view is DBA_DATAPUMP_SESSIONS, which, when joined with the previous view and V$SESSION, gives the SID of the session of the main foreground process.

select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;

For more information, refer to the Oracle documentation here.

Post your valuable feedback.

Wednesday, February 4, 2009


I have worked with databases including IBM's DB2 (in fact, I am an IBM certified DB2 DBA for version 8.1), open source MySQL, and Oracle (from version 8). As a performance tester, I used to wonder how Oracle alone can provide such a wide range of monitoring facilities that no other vendor is able to match. From system-wide monitoring using Statspack/AWR to session-level trace, Oracle's monitoring capability is amazing. In this article I am going to cover a couple of basics about Oracle’s very own monitoring utility, Statspack. Statspack is a built-in tool and its installation scripts come along with the database itself (no need to pay even an extra penny from your pocket). All you need to do is install it and start using it.

Quote from Oracle Database Documentation about statspack:
"The Statspack package is a set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. Statspack stores the performance statistics permanently in Oracle tables, which can later be used for reporting and analysis. The data collected can be analyzed using Statspack reports, which includes an instance health and load summary page, high resource SQL statements, and the traditional wait events and initialization parameters."

To invoke the Statspack setup, all you need to do is call the spcreate.sql script, which is available in ORACLE_HOME/rdbms/admin. The PERFSTAT user owns all the PL/SQL code and database objects, including tables, sequences, constraints etc. On Windows, log in with SQL*Plus as a user which has enough privileges and run the following:

@%ORACLE_HOME%\rdbms\admin\spcreate.sql

In Linux/Unix, run the following:

@$ORACLE_HOME/rdbms/admin/spcreate.sql

During installation it will ask for the PERFSTAT schema’s password (usually people use perfstat as the password), the default (permanent) tablespace and the temporary tablespace. The SPCREATE.SQL install script in turn automatically calls the following scripts:
SPCUSR.SQL: Creates the PERFSTAT user and grants the privileges required to collect performance data from the V$ views.
SPCTAB.SQL: Creates the tables which store the performance data.
SPCPKG.SQL: Creates the package required for monitoring, data purging and reporting.

The installation script dumps errors (if any) into the SPCUSR.LIS, SPCTAB.LIS and SPCPKG.LIS output files.

How it works:
A snapshot of the database’s performance is taken, stored in the PERFSTAT tables and assigned a unique SNAP_ID for the instance. Typically we take snapshots at a pre-defined interval, and between two snapshots we can generate a performance report. If the instance is restarted between snapshots, the report will be meaningless. Snapshots can be taken at various levels depending on the amount of monitoring data required; the snapshot level can range from 0 to 10, and the default is 5. The higher the level, the more performance data is collected (and the more resources the snapshot consumes).

Taking a snapshot
Log in as the PERFSTAT user, or any user which has execute privilege on the statspack package, and call the statspack.snap procedure. A few examples:

exec statspack.snap
exec statspack.snap(I_SNAP_LEVEL=>7)

Statspack report
A Statspack report gives instance-wide statistics between two snapshots. Just call SPREPORT.SQL and provide the begin SNAP_ID, the end SNAP_ID and a report file name. (Again, if the instance was restarted between the two snapshots, the report will not be meaningful.)

Oracle 10g and above comes with the Automatic Workload Repository (AWR), an evolution of Statspack whose data feeds features such as ADDM. But Statspack is still supported.

Monday, February 2, 2009

Introducing Measurement Lab

When an Internet application doesn't work as expected or your connection seems flaky, how can you tell whether there is a problem caused by your broadband ISP, the application, your PC, or something else?

So Google and some other folks founded something called Measurement Lab to learn more about broadband networks and ISPs. Measurement Lab (M-Lab) is an open, distributed server platform for researchers to deploy Internet measurement tools. The goal of M-Lab is to advance network research and empower the public with useful information about their broadband connections. By enhancing Internet transparency, M-Lab helps sustain a healthy, innovative Internet.

Researchers are already developing tools that allow users to, among other things, measure the speed of their connection, run diagnostics, and attempt to discern if their ISP is blocking or throttling particular applications. These tools generate and send some data back-and-forth between the user's computer and a server elsewhere on the Internet. Unfortunately, researchers lack widely-distributed servers with ample connectivity. This poses a barrier to the accuracy and scalability of these tools. Researchers also have trouble sharing data with one another.

Over the course of early 2009, Google will provide researchers with 36 servers in 12 locations in the U.S. and Europe. All data collected via M-Lab will be made publicly available for other researchers to build on. M-Lab is intended to be a truly community-based effort, and we welcome the support of other companies, institutions, researchers, and users that want to provide servers, tools, or other resources that can help the platform flourish.

Wanna test? Check out some of the tools listed on Measurement Lab by clicking here.

To know why people and researchers look for these kinds of tools, click here.