The best part of our minds is that they always want something new to do. I had an opportunity to do some performance testing, which was something different from my daily routine.
Performance testing is considered part of the non-functional testing group. As the term itself suggests, in this technique testers measure the performance of an application, a server, or a database. The process is similar to any other testing process; the only difference is the requirements and goals.
My approach was to keep it simple: start with a plan. Since I did not have any specific requirements, I had to define baseline requirements based on conversations with the server architect and the database architect.
My first step was to define a baseline requirement, which became my performance goal. A few questions arose in my mind: are these goals for the whole application, or for specific components? Through discussion with all stakeholders, I confirmed that the goals were for the whole application. This helps in moving forward, because you need to be clear about what you are doing.
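Purely as an illustration (these numbers are hypothetical, not the project's actual targets), a baseline requirement for a whole application might look something like this:
-the application must support 200 concurrent users
-average response time for key pages stays under 2 seconds at that load
-error rate stays below 1%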
The next step was to choose a tool for performance testing. Should I use an open-source tool or a commercial one? I started analyzing performance test tools that would fulfill my goals. Since the project did not have the budget for a commercial tool at this stage, the decision was easy: I had to choose an open-source tool. After some analysis, I chose JMeter for several reasons:
-open source & free
-easy to use
-has a lot of features
-most important, it fulfills my goals
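For anyone trying JMeter for the first time, this is roughly what a run looks like from the command line. The file names here are placeholders, not from my actual project:

# run a test plan in non-GUI mode: -n = non-GUI, -t = test plan, -l = results file
jmeter -n -t my_test_plan.jmx -l results.jtl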
Do we have an environment for the performance test? If you run a performance test just anywhere, there will be consequences: the results may not be correct, and other people using the same environment may be affected.
I chose an environment that was under my control, so that it would neither skew the results nor affect anyone else.
Now we enter the execution phase. The environment was set up, and we needed to do several runs to gather results and observe what happens during them.
Not all runs will be successful, but each run gives important information even if it fails.
After every run, some steps were needed in the environment to bring the application back to what we call the initial or clean state.
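What exactly those steps are depends on the application. As a rough sketch (the database name, dump file, paths, and service name below are hypothetical, not from my project), a cleanup script might do something like this:

# restore the database to its baseline snapshot
mysql -u testuser -p myapp_db < baseline_dump.sql
# clear application caches so the next run starts clean
rm -rf /opt/myapp/cache/*
# restart the application server so no state leaks between runs
systemctl restart myapp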
For every run, a report was generated that captured all the details, and it was presented to all stakeholders. The architects did some tuning to make sure we achieved the performance goals.
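If you are using JMeter, one convenient option is to let it build an HTML dashboard from the results file, which is much easier to share with stakeholders than raw samples. The file names are placeholders, and the output folder must be empty or not yet exist:

# generate an HTML report from an existing results file: -g = input results, -o = output folder
jmeter -g results.jtl -o html_report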
Finally, the performance goals for the sprint were achieved. During this experience I learned a lot of things that I would like to share:
1. Baseline requirements are very important; they act as the goal for the performance test.
2. The environment must be set up correctly.
3. Identify the factors that can affect the test results.
4. A load test tool can simulate only a limited number of users from a single client machine; we need to know that number (see the sketch after this list).
5. Tests run within an intranet will give different results, which may not match the real-world scenario.
6. Discuss with the architects, based on the information in the test reports, what tuning they are doing to remove the bottlenecks, so that you can focus more on those areas.
7. JMeter has both GUI and non-GUI interfaces; running the actual load in non-GUI mode makes a huge difference.
8. While running tests, you need to observe the activity on the server and the database; logs are one of the sources (see the sketch after this list).
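To illustrate points 4 and 8: JMeter can spread the load across several machines in its remote (distributed) mode, and a few simple commands on the server help you watch what is happening during a run. This is only a rough sketch; the host names and log path below are made up for the example.

# drive the test from multiple load generators (requires jmeter-server running on each; hosts are hypothetical)
jmeter -n -t my_test_plan.jmx -R loadgen1,loadgen2 -l results.jtl
# on the application server, follow the log while the test runs (path is hypothetical)
tail -f /var/log/myapp/application.log
# keep an eye on CPU, memory, and I/O at 5-second intervals
vmstat 5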
I am writing this article as my thoughts flow. It may not follow the best writing guidelines or rules; apologies for that. I will try to improve with each of my write-ups.