Tuesday, June 24, 2008

Performance Testing Process/Phases

Performance testing is testing performed to determine how fast some aspect of a system performs under a particular workload. To become a successful Performance Engineer one should have
  1. A basic understanding of how things work (HTTP, network, LB, web server, app server, DB server, LDAP, SSL accelerator, etc.)
  2. Eagerness to learn new things
  3. Patience
  4. And so on (the list continues based on what you do)
Purpose
  1. Check whether the system meets the predefined SLA
  2. Compare two systems and find out which performs better
  3. Find bottlenecks and tune the system to perform better
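
As an illustration of the first point, here is a minimal, hypothetical sketch of checking a response-time percentile against an SLA. The sample values and the 250 ms threshold are made up for illustration:

```python
# Hypothetical sketch: checking measured response times against a predefined SLA.
# Sample values and the SLA threshold are illustrative, not from a real engagement.

def check_sla(response_times_ms, sla_ms, percentile=90):
    """Return True if the given percentile of response times meets the SLA."""
    ordered = sorted(response_times_ms)
    # index of the sample below which `percentile` percent of values fall
    idx = max(0, int(len(ordered) * percentile / 100) - 1)
    return ordered[idx] <= sla_ms

samples = [120, 180, 150, 300, 210, 170, 190, 160, 140, 220]
print(check_sla(samples, sla_ms=250))  # → True (90th percentile is 220 ms)
```

A percentile is usually a better SLA measure than an average, since one slow outlier can hide behind a healthy mean.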

Performance Testing/Engineering is not a science; it is rather an art.
Phases in Performance Testing/Engineering

  1. Requirement collection
  2. Feasibility study
  3. Test Plan
  4. Scripting
  5. Monitoring Design & Deployment
  6. Smoke Test
  7. Test Execution & Data Collection
  8. Analysis and Reporting
  9. Closure

Requirement Collection

  • Understand what the customer really wants
  • Tell the customer what the performance team does and make sure they understand what it means.
  • Ask the customer what they expect us to do
  • While selecting business flows for simulation, select flow(s) which obey the following rule: “20% of total business flows are carried out by 80% of users”. (Traversal document)
  • Understand the architecture (this has to be done in detail if monitoring and tuning are involved)
  • Stress to the customer that

a. We are not here to do functional testing.

b. We don’t/can’t performance test everything.

c. We don’t/can’t simulate 100% of production transactions.
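
The 80/20 flow-selection rule above can be sketched roughly like this; the flow names and usage shares are invented for illustration, not from any real traversal document:

```python
# Hypothetical sketch of applying the 80/20 rule when picking business flows:
# sort flows by usage share and keep the smallest set covering ~80% of traffic.

def select_core_flows(flow_usage, coverage=0.80):
    """flow_usage: dict of flow name -> fraction of total transactions."""
    selected, covered = [], 0.0
    for name, share in sorted(flow_usage.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        covered += share
        if covered >= coverage:
            break
    return selected

usage = {"login": 0.35, "search": 0.30, "checkout": 0.18,
         "profile_edit": 0.10, "admin": 0.07}
print(select_core_flows(usage))  # → ['login', 'search', 'checkout']
```

The point of the exercise is that simulating the top handful of flows usually covers most of the production load, which is why we can say up front that we won't simulate 100% of transactions.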

Feasibility Study

  • Can we do what the customer wants? If not, ask the customer for help. It is better to say no rather than doing something wrong or giving a false commitment. This will make sure we get repeat customers.
  • How fast can we do this task? (this helps in planning)
  • Does the tool natively support this, or do we need to build a workaround?
    Note: In a service-based company, the marketing team can’t say no most of the time (maybe due to aggressive targets set by management, commission to earn, and much more); in that case, always insist on employee training whenever possible.

Test Plan

  • Form a team. Know their skills
  • Decide when to do what
  • Deadlines. Fix deadlines based on the team’s skills (not on the TL's skills).
  • Get approval from the client

Scripting

  • Simulate as per the traversal document
  • Remember we simulate only the core portion; so don’t make mistakes there.
  • Build the agenda/driver script based on the transaction mix.
  • We are testing something to find issues. Make sure your script doesn’t have issues of its own.
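
Building a driver script based on the mix can be sketched as a weighted choice. The transaction names and weights below are assumptions for illustration; a real tool (LoadRunner, JMeter, etc.) has its own scheduler for this:

```python
# Hypothetical driver-script sketch: pick the next transaction according to the
# agreed transaction mix. Names and weights are illustrative.
import random

MIX = {"browse": 0.50, "search": 0.30, "checkout": 0.20}

def next_transaction(rng=random):
    """Choose a transaction with probability proportional to its mix weight."""
    return rng.choices(list(MIX), weights=MIX.values(), k=1)[0]

# Over many iterations the observed mix should approach the configured one.
random.seed(42)
counts = {name: 0 for name in MIX}
for _ in range(10_000):
    counts[next_transaction()] += 1
print(counts)
```

Checking the observed counts against the configured mix is itself a cheap way to verify the driver script has no issues of its own.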

Monitoring design & Deployment

  • Get the list of servers that need to be monitored (including versions)
  • If you haven’t worked on that specific version of a server, spend some time going through its ‘new features’
  • Design & deploy monitoring for the OS (perfmon, vmstat, netstat, etc.) and then the corresponding server-specific monitors (PerfServlet, server-status, Statspack, etc.).
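
A bare-bones OS-level monitor along these lines might look like the following sketch. The command, interval, and log file name are assumptions; real deployments would typically rely on the monitoring tool's own scheduling:

```python
# Hypothetical sketch: sample a command (e.g. vmstat on Linux) at a fixed
# interval and append timestamped output to a log file.
import subprocess
import time
from datetime import datetime

def sample(command, log_path, samples=3, interval=1):
    """Run `command` repeatedly, writing timestamped output to `log_path`."""
    with open(log_path, "a") as log:
        for _ in range(samples):
            out = subprocess.run(command, capture_output=True, text=True).stdout
            log.write(f"--- {datetime.now().isoformat()} ---\n{out}")
            time.sleep(interval)

# sample(["vmstat"], "os_monitor.log")  # would run on the server under test
```

Timestamping every sample is what later lets the analysis team line monitoring data up with the load tool's own logs.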

Smoke Test

  • A smoke test is a test to make sure the script and monitoring are working as required
  • Make sure the entire team is there during the smoke test (scripting team and monitoring design/deployment team)
  • Have you detected something suspicious? Check and/or correct it. Mistakes here will lead to failure of the whole engagement.
  • Redo the smoke test if any changes were made
  • Make sure enough time is allotted for the smoke test. Don’t do it just one hour before the test. Give at least 1 or 2 days, depending on the complexity of the engagement.
  • Make sure you have a checklist

Test Execution & Data collection

  • Should only be started if everything went fine in the smoke test.
  • Make sure the checklist is correctly followed.
  • After test execution, collect simulation tool logs, monitoring logs, configuration files, log files and whatever else is required by the analysis team.
  • Most importantly, collect the time-sync information of all servers.
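
The time-sync point matters because logs from different servers must be aligned on one timeline before analysis. A tiny sketch of shifting a server-local timestamp by a known clock offset (the offset value is illustrative; in practice you would record each server's drift against the load generator or an NTP source):

```python
# Hypothetical sketch: align a server-local timestamp using its clock offset.
from datetime import datetime, timedelta

def align_timestamp(ts, offset_seconds):
    """Shift a server-local timestamp by its known clock offset."""
    return ts + timedelta(seconds=offset_seconds)

# e.g. the app server's clock was found to run 4 seconds ahead
local = datetime(2008, 6, 24, 10, 0, 4)
print(align_timestamp(local, -4))  # → 2008-06-24 10:00:00
```

Even a few seconds of unrecorded drift can make a CPU spike appear to precede the transaction that caused it.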

Analysis and Reporting

  • There is nothing wrong in a newbie doing analysis. But?? (read the last point)
  • Reporting should focus on and answer every question in the requirement document. There is nothing wrong in providing value adds, and also nothing wrong in missing them. But make sure the customer gets what he wants. There is no point in giving value additions without addressing the customer’s basic requirements.
  • Recommendations should have substantiation, or else be stated as best practice.
  • Review the report. After all, the customer is expecting only this from us.
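
For the analysis step, here is a minimal sketch of summarizing per-transaction response times from parsed tool logs. The sample records are invented; a real report would draw on the simulation tool's raw results:

```python
# Hypothetical sketch: summarize response times per transaction for the report.
from statistics import mean

def summarize(records):
    """records: (transaction, response_ms) pairs -> per-transaction stats."""
    grouped = {}
    for name, ms in records:
        grouped.setdefault(name, []).append(ms)
    return {name: {"count": len(v), "avg_ms": round(mean(v), 1), "max_ms": max(v)}
            for name, v in grouped.items()}

log = [("login", 120), ("login", 180), ("search", 300), ("search", 260)]
print(summarize(log))
```

A table like this, one row per business flow, maps each requirement-document question directly to a measured answer.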

Closure

  • Did anything go wrong? Why? Make sure it does not get repeated in future projects
  • Appreciate whoever deserves it. After all, PE folks don’t work only for money.
  • Appreciation must be given to a fresher who did an excellent job in scripting in spite of it being his/her first project.

Ideally, every phase listed above should happen; but not necessarily always.

Comments are always welcome.

1 comment:

Ajay said...

Very nice and detailed writeup. This helped me as a checklist to evaluate if I am doing most of the ideal things correctly. Thanks for the article.