Top 27 Performance Testing Interview Questions You Must Prepare 19.Mar.2024

Performance tuning is the mechanism we follow to improve system performance.

Two types of tuning are performed to improve system performance:

  1. Hardware tuning: Optimizing, adding, or replacing the hardware components of the system, along with infrastructure-level changes made to improve system performance, is called hardware tuning.
  2. Software tuning: Identifying software-level bottlenecks by profiling the code, database, etc., and then fine-tuning or modifying the software to fix those bottlenecks is called software tuning.

Yes, I faced several challenges, such as defining the scope of the application and its break points, which I overcame by studying the application's historical data and deciding the values based on it. I also had to set up the performance environment, which included proxy bypassing and connecting to the server under test.

The phases involved in automated performance testing are:

Planning/Design: This is the primary phase, in which the team gathers the requirements for performance testing. Requirements can be business, technical, system, and team requirements.

Build: This phase consists of automating the requirements collected during the design phase.

Execution: Execution is carried out in multiple cycles and consists of various types of testing, such as baseline and benchmark testing.

Analyzing and tuning: During performance testing we capture all the details related to the system, such as response times and system resources, in order to identify the major bottlenecks. Once the bottlenecks are identified, the system is tuned to improve overall performance.

  • Planning the Test
  • Developing the Test
  • Execution of the Test
  • Analysis of Results

The differences between baseline and benchmark testing are:

Baseline testing is the process of running a set of tests to capture performance information. This information can be used as a point of reference when changes are made to the application in the future, whereas benchmarking is the process of comparing your system's performance against an industry standard set by another organization.

Example: We can run a baseline test of an application, collect and analyze the results, then modify several indexes on a SQL Server database and run the same test again, using the previous results to determine whether the new results are better, worse, or about the same.
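
As a rough illustration of the baseline idea (not part of the original example), the snippet below compares a new run's average response times against saved baseline values; the metric names, values, and the 5% tolerance are assumptions chosen only for illustration.

```python
# Minimal sketch: compare a new test run against a saved baseline.
# Metric names, values, and the 5% tolerance are illustrative assumptions.

baseline = {"login_avg_ms": 850, "search_avg_ms": 1200}   # captured before the change
current = {"login_avg_ms": 910, "search_avg_ms": 1050}    # captured after the change

for metric, base_value in baseline.items():
    delta_pct = (current[metric] - base_value) / base_value * 100
    if delta_pct > 5:
        verdict = "worse"
    elif delta_pct < -5:
        verdict = "better"
    else:
        verdict = "about the same"
    print(f"{metric}: {base_value} -> {current[metric]} ({delta_pct:+.1f}%, {verdict})")
```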

Performance tuning is done to improve system performance and is of two types:

Hardware Tuning: Optimizing, adding, or replacing hardware components of the system, along with infrastructure-level changes made to improve system performance, is called hardware tuning.

Software Tuning: Identifying software-level bottlenecks by profiling the code, database, etc., and then fine-tuning or modifying the software to fix those bottlenecks is called software tuning.

Pacing can be calculated with the following formula:

No. of users = (Response Time in seconds + Pacing in seconds) * TPS

where TPS is Transactions Per Second.
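
As a quick worked example of this formula (numbers chosen only for illustration): with 100 virtual users, a target of 10 TPS, and an average response time of 2 seconds, the pacing works out to 8 seconds, since (2 + 8) * 10 = 100. A tiny helper that rearranges the formula:

```python
# Rearranged form of the formula above: Pacing = (No. of users / TPS) - Response Time.
# The example numbers are illustrative, not taken from the article.

def required_pacing(num_users: int, tps: float, response_time_s: float) -> float:
    return num_users / tps - response_time_s

print(required_pacing(num_users=100, tps=10, response_time_s=2))  # -> 8.0 seconds
```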

Yes, it is possible, for example when the application contains a lot of CSS (Cascading Style Sheets) that takes a long time to render. In such a situation we can see the throughput increasing along with the response time.

Some common performance bottlenecks include (see the monitoring sketch after this list):

  • CPU Utilization
  • Memory Utilization
  • Network Utilization
  • OS Limitations
  • Disk Usage
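
One hedged way to watch these counters on a load generator or server during a test is sketched below; it assumes the third-party psutil package is installed, and the sample count and interval are arbitrary choices.

```python
# Rough sketch: sample CPU, memory, disk, and network counters during a test run.
# Assumes `pip install psutil`; sample count and interval are arbitrary.
import psutil

def sample_counters(samples: int = 5, interval_s: float = 2.0) -> None:
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval_s)   # blocks for interval_s, returns %
        mem = psutil.virtual_memory().percent
        disk = psutil.disk_usage("/").percent
        net_sent = psutil.net_io_counters().bytes_sent
        print(f"CPU {cpu:5.1f}% | MEM {mem:5.1f}% | DISK {disk:5.1f}% | NET sent {net_sent} bytes")

if __name__ == "__main__":
    sample_counters()
```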

A protocol is a set of rules for exchanging information between two or more systems. There are many protocols, such as HTTP/HTTPS, FTP, Web Services, and Citrix. The most commonly used protocols are HTTP/HTTPS and Web Services.

The following activities are performed while testing an application:

  • Create user scenarios
  • User Distribution
  • Scripting
  • Dry run of the application
  • Running load test and analyzing the result

Performance testing is performed to determine how the components of a system perform under a particular workload. It is generally measured in terms of the response time for user activity. It is designed to test the overall performance of the system under high-load and stress conditions, and it identifies drawbacks in the architectural design, which helps to tune the application.

It includes the following (a rough sketch follows this list):

  • Increasing the number of users interacting with the system.
  • Determining the response time.
  • Repeating the load consistently.
  • Monitoring the system components under controlled load.
  • Providing robust analysis and reporting engines.
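
The sketch below is a hand-rolled illustration of those activities: ramping up the number of users, measuring response time, and repeating the load. The URL and user counts are placeholders; in practice a dedicated tool (LoadRunner, JMeter, etc.) would be used.

```python
# Minimal illustration only: ramp up users, measure response time, repeat the load.
# TARGET_URL and the user counts are placeholders, not values from the article.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/"        # assumed system under test

def one_user_iteration(_: int) -> float:
    start = time.perf_counter()
    urlopen(TARGET_URL).read()               # a single user action
    return time.perf_counter() - start

for users in (5, 10, 20):                    # increasing number of users
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(one_user_iteration, range(users)))
    print(f"{users} users -> avg response {sum(timings) / len(timings):.3f}s")
```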

Performance testing:

In performance testing, the testing cycle includes requirement gathering, scripting, execution, result sharing, and report generation.

Performance engineering:

Performance engineering is a step ahead of performance testing: after execution, the results are analyzed with the aim of finding the performance bottlenecks, and a solution is provided to resolve the identified issues.

Functional Testing

  • To verify the accuracy of the software with definite inputs against expected outputs, functional testing is done
  • This testing can be done manually or automated
  • One user performs all the operations
  • Customer, tester, and developer involvement is required
  • A production-sized test environment is not necessary, and H/W requirements are minimal.

Performance Testing

  • To validate the behavior of the system under various load conditions, performance testing is done.
  • It gives the best results when automated
  • Several users perform the desired operations
  • Customer, tester, developer, DBA, and N/W management team involvement is required
  • Requires a test environment close to production and several H/W facilities to generate the load.

Following are the sub-genres of Performance Testing (a sketch of the load patterns follows this list):

  • Load Testing: Conducted to examine the performance of the application under a specific expected load. Load can be increased by increasing the number of users performing a specific task on the application within a specific time period.
  • Stress Testing: Conducted to evaluate system performance by increasing the number of users beyond the limits of its specified requirements. It is performed to understand at what level the application crashes.
  • Volume Testing: Tests an application to determine how much data it can handle efficiently and effectively.
  • Spike Testing: Examines what happens to the application when the number of users suddenly increases or decreases by a large amount.
  • Soak Testing: Performed to understand the application's behavior when load is applied for a long period of time, and what happens to the stability and response time of the application.
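
The sketch below is only meant to visualise how the user-count profile differs between load, stress, spike, and soak tests (volume testing concerns data size rather than user count, so it is omitted); all numbers are arbitrary.

```python
# Illustrative user-count profiles for load, stress, spike, and soak tests.
# Durations and user counts are arbitrary; volume testing (data size) is not shown.

def load_profile(kind: str, duration_min: int = 60, expected_users: int = 100):
    """Return (minute, active_users) pairs for the given test type."""
    profile = []
    for minute in range(duration_min):
        if kind == "load":        # steady ramp up to the expected load
            users = min(expected_users, minute * 5)
        elif kind == "stress":    # keep ramping beyond the expected load
            users = minute * 10
        elif kind == "spike":     # sudden jump for a short window
            users = expected_users * 5 if 20 <= minute < 25 else expected_users
        else:                     # "soak": constant load for a long period
            users = expected_users
        profile.append((minute, users))
    return profile

print(load_profile("spike")[18:27])   # inspect the sudden jump around minute 20
```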

Think time can be defined as the real-time wait between two consecutive transactions. For example, a real user pauses to evaluate the data received before performing the next step; that wait can be stated as think time.
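
A tiny sketch of think time in a scripted scenario is shown below; the transaction helper and the 3-8 second range are illustrative assumptions, not values from the article.

```python
# Sketch: a random pause (think time) between two consecutive transactions.
# perform_transaction() and the 3-8 s range are made up for illustration.
import random
import time

def perform_transaction(name: str) -> None:
    print(f"executing {name}")                  # stand-in for the real user action

def search_then_add_to_cart(min_think_s: float = 3.0, max_think_s: float = 8.0) -> None:
    perform_transaction("search")
    time.sleep(random.uniform(min_think_s, max_think_s))   # user reads the results
    perform_transaction("add_to_cart")

search_then_add_to_cart()
```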

Profiling is the process of pinpointing a performance bottleneck at a very fine-grained level. It is done by performance engineering teams, which include developers and performance testers. You can profile any application layer being tested. To profile an application server you may need application-profiling tools, which let you identify issues at the code level, such as memory-intensive APIs. If the database is what you are profiling, database-profiling tools can identify a number of things, such as full-table-scan queries, high-cost queries, and the number of SQL statements executed.
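
As one concrete, hedged example of code-level profiling, the snippet below uses Python's built-in cProfile module against a deliberately wasteful function; it is only an illustration of the idea, not the tooling the article has in mind.

```python
# Sketch: code-level profiling with the standard-library cProfile module.
# expensive_workload() is a made-up function used only to produce something to profile.
import cProfile
import pstats

def expensive_workload() -> int:
    total = 0
    for _ in range(200_000):
        total += sum(range(50))               # deliberately wasteful inner loop
    return total

profiler = cProfile.Profile()
profiler.enable()
expensive_workload()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)   # top 5 calls by cumulative time
```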

Previously, a performance tester had to depend heavily on the development team to know which protocol the application uses to interact with the server. Sometimes it was even speculative.

However, LoadRunner provides great help in the form of the Protocol Advisor, available from version 9.5 onwards. The Protocol Advisor detects the protocols the application uses and suggests the possible protocols in which the script can be created to simulate a real user.

A simultaneous user waits for another user to complete before starting its own activity, whereas concurrent users can be, for example, two users who log into the system and perform different activities at the same time.

The manual load testing drawbacks are:

  • It is very expensive to do manual testing, as real users charge by the hour.
  • With manual load testing, load testing for longer durations, such as seven days, is not possible, as users really work a maximum of about eight hours daily.
  • The accuracy of results correlation suffers, as there are delays between the actions of the users.
  • It is hard to collect the results, as they overlap with one another.
  • Overall, it is simply hard to do.

  1. Performance testing is the process in which we identify the issues in the system.
  2. Performance engineering is the process in which we address and rectify those issues.

Distributed load testing: here we test the application with a number of users accessing it at the same time. In distributed load testing, test cases are executed to determine the application's behavior, and that behavior is monitored, recorded, and analyzed while multiple users use the system concurrently. Distributed load testing is the process of using multiple machines to simulate the load of a large number of users; it is done to overcome the limitation of a single system in generating a large number of threads.
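
The sketch below shows only the core idea behind the distribution: a controller splits the total virtual-user count evenly across several load-generator machines because one machine cannot spawn enough threads on its own. The host names and the 10,000-user total are placeholder assumptions.

```python
# Sketch: split the total virtual users across multiple load generators.
# Host names and the 10,000-user total are placeholder assumptions.

def distribute_users(total_users: int, generators: list) -> dict:
    share, remainder = divmod(total_users, len(generators))
    return {host: share + (1 if i < remainder else 0)
            for i, host in enumerate(generators)}

print(distribute_users(10_000, ["loadgen-01", "loadgen-02", "loadgen-03"]))
# -> {'loadgen-01': 3334, 'loadgen-02': 3333, 'loadgen-03': 3333}
```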

We can study the various graphs generated by the tool, such as the response time, throughput, and running Vusers graphs, and we can also check the server logs to identify the issues in the system.

  • Difficult to measure the performance of the application accurately.
  • Difficult to synchronize the users.
  • A large number of real-time users must be involved in the performance testing.
  • Difficult to analyze the results and identify bottlenecks.
  • Increases the infrastructure cost.

Performance bottlenecks can be identified by using different counters, such as response time, throughput, hits/sec, and the network delay graph. By analyzing these we can tell where the suspected performance bottleneck is.

IP spoofing is used to spoof the system so that each host machine can use many different IPs, creating a hypothetical environment in which the system believes the requests are coming from different locations.

When multiple users hit the same event of the application under the load test without any time difference, it is called a concurrent user hit. A concurrency point is added so that multiple virtual users can work on a single event of the application. With a concurrency point, virtual users that reach it early wait for the other virtual users running the script; only when all the users have reached the concurrency point do they start hitting the requests.
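
A small sketch of a concurrency (rendezvous) point using a thread barrier follows; the user count and the "request" are placeholders, and real load tools implement this as a rendezvous point in the script.

```python
# Sketch: a concurrency point modeled as a thread barrier. Every virtual user waits
# at the barrier, and all of them fire the event at the same moment once everyone arrives.
import threading
import time

USERS = 5
rendezvous = threading.Barrier(USERS)

def virtual_user(user_id: int) -> None:
    time.sleep(user_id * 0.2)     # users arrive at the event at different times
    rendezvous.wait()             # hold here until all users have arrived
    print(f"user {user_id} hits the event at {time.time():.3f}")

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```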