Top 17 Functional Testing Interview Questions You Must Prepare

Enlisted below are a few bug statuses along with their descriptions:

  • New: When a defect or bug is logged for the first time, its status is set to New.
  • Assigned: After the tester has logged a bug, it is reviewed by the test lead and then assigned to the corresponding developer team.
  • Open: The tester logs a bug in the Open state, and it remains Open until the developer has performed some action on it.
  • Resolved/Fixed: When the developer has resolved the bug, i.e. the application now produces the desired output for that particular issue, the developer changes its status to Resolved/Fixed.
  • Verified/Closed: Once the developer has changed the status to Resolved/Fixed, the tester re-tests the issue at their end, and if it is indeed fixed, changes the status of the bug to Verified/Closed.
  • Reopen: If the tester is able to reproduce the bug again, i.e. the bug still exists even after the developer's fix, its status is marked as Reopen.
  • Not a bug/Invalid: A bug can be marked as Invalid or Not a bug by the developer when the reported behavior is as per the functionality but was logged due to misinterpretation.
  • Deferred: When a bug is of minimal priority for the release and there is a lack of time, such low-priority bugs are deferred to the next release.
  • Cannot Reproduce: The developer is unable to reproduce the bug at their end by following the steps mentioned in the issue.

Equivalence partitioning, also known as equivalence class partitioning, is a form of black box testing where input data is divided into equivalence classes. This is done in order to reduce the number of test cases while still covering the maximum requirements.

The equivalence partitioning technique is applied where input data values can be divided into ranges. The ranges of input values are defined in such a way that only one condition from each partition needs to be tested, on the assumption that all the other conditions in the same partition will behave the same way for the software.

For Example: To identify the rate of interest as per the balance in an account, we can identify the ranges of balance amounts that earn different rates of interest.
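As a rough sketch of how a tester might model these partitions, the Python snippet below uses illustrative balance ranges and interest rates (not real business rules) and picks one representative value per partition:

```python
# A minimal sketch of equivalence partitioning for the interest-rate example.
# The balance ranges and rates below are illustrative assumptions, not real business rules.

def interest_rate(balance: float) -> float:
    """Return the annual interest rate for a given account balance."""
    if balance < 0:
        raise ValueError("Balance cannot be negative")
    if balance <= 10_000:
        return 0.02   # partition 1: 0 - 10,000
    if balance <= 50_000:
        return 0.03   # partition 2: 10,001 - 50,000
    return 0.04       # partition 3: above 50,000

# One representative value per partition is enough to cover each equivalence class.
assert interest_rate(5_000) == 0.02
assert interest_rate(25_000) == 0.03
assert interest_rate(75_000) == 0.04
```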

Following are the steps that should be covered as a part of functional testing:

  • Understanding the requirement specification document and clearing doubts and queries in the form of review comments.
  • Writing the test cases with respect to the requirement specification, keeping in mind all the scenarios that should be considered for each case.
  • Identifying the test inputs and requesting the test data that is required to execute the test cases as well as to check the functionality of the application.
  • Determining the expected outcomes for the input values to be tested.
  • Executing the test cases to determine whether the application behaves as expected or a defect has occurred.
  • Comparing the actual results with the expected results to determine whether the application is working as intended.
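As a simple illustration of the last three steps, the sketch below uses a hypothetical `withdraw` function with assumed inputs and expected outcomes, executes each case, and compares the actual result against the expected one:

```python
# A minimal sketch of executing test cases and comparing actual vs. expected results.
# `withdraw` is a hypothetical function under test; inputs and expectations are assumed.

def withdraw(balance: float, amount: float) -> float:
    if amount > balance:
        raise ValueError("Insufficient funds")
    return balance - amount

test_cases = [
    # (starting balance, amount to withdraw, expected remaining balance)
    (100.0, 40.0, 60.0),
    (100.0, 100.0, 0.0),
]

for balance, amount, expected in test_cases:
    actual = withdraw(balance, amount)
    status = "PASS" if actual == expected else "FAIL"
    print(f"withdraw({balance}, {amount}) -> {actual}, expected {expected}: {status}")
```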

Listed below are the possible scenarios that can be performed to fully test the login feature of any application:

  • Check the input fields, i.e. Username and Password, with both valid and invalid values.
  • Try entering a valid email ID with an incorrect password, and also an invalid email with a valid password. Check that the proper error message is displayed.
  • Enter valid credentials and log in to the application. Close and reopen the browser to check whether you are still logged in.
  • After logging in, navigate within the application and then go back to the login page to check whether the user is asked to log in again or not.
  • Sign in from one browser and open the application from another browser to verify whether you are logged in on the other browser as well.
  • Change the password after logging into the application and then try to log in with the old password.
  • There are a few other possible scenarios as well that can be tested.
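As a rough sketch of automating one of these scenarios (valid username, incorrect password), the snippet below uses Selenium WebDriver in Python; the URL, element locators (`username`, `password`, `login`, `error-message`), and credentials are assumptions for a hypothetical application:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical login page

    # Attempt to log in with a valid username but an invalid password.
    driver.find_element(By.ID, "username").send_keys("valid.user@example.com")
    driver.find_element(By.ID, "password").send_keys("wrong-password")
    driver.find_element(By.ID, "login").click()

    # Expect a proper error message rather than a successful login.
    error = driver.find_element(By.ID, "error-message").text
    assert "invalid" in error.lower(), f"Unexpected message: {error}"
finally:
    driver.quit()
```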

The boundary value analysis method checks the boundary values of equivalence class partitions. Boundary value analysis is a testing technique that identifies errors at the boundaries rather than within the range of values.

For Example: If an input field allows a minimum of 8 characters and a maximum of 12 characters, then 8-12 is considered the valid partition, while lengths of 7 and 13 are considered invalid. Accordingly, test cases are written for the valid partition values, the exact boundary values, and the invalid partition values.
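A minimal sketch of this example in Python, using a hypothetical `is_valid_length` validation function, with test values clustered just below, on, and just above each boundary:

```python
# A minimal sketch of boundary value analysis for a field that accepts 8-12 characters.
# `is_valid_length` is a hypothetical validation function used only for illustration.

MIN_LEN, MAX_LEN = 8, 12

def is_valid_length(value: str) -> bool:
    return MIN_LEN <= len(value) <= MAX_LEN

# Test values cluster around the boundaries: just below, on, and just above each edge.
boundary_cases = {
    "a" * 7:  False,   # just below lower boundary -> invalid
    "a" * 8:  True,    # lower boundary -> valid
    "a" * 12: True,    # upper boundary -> valid
    "a" * 13: False,   # just above upper boundary -> invalid
}

for value, expected in boundary_cases.items():
    assert is_valid_length(value) == expected, f"len={len(value)} failed"
```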

Volume testing is a form of performance testing which determines the performance levels of server throughput and response time when concurrent users, as well as a large data load from the database, are put onto the system/application under test.

Data-driven testing is a methodology where a series of test scripts containing test cases is executed repeatedly using data sources like Excel spreadsheets, XML files, CSV files, or SQL databases for input values, and the actual output is compared to the expected one in the verification process.

For Example: Test Studio is used for data-driven testing.
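A minimal sketch of data-driven testing with pytest, assuming a CSV file named `login_data.csv` with columns `username`, `password`, and `expected` exists alongside the test, and using a hypothetical `login` function; pytest runs the same test logic once per data row:

```python
import csv
import pytest

# Hypothetical function under test.
def login(username: str, password: str) -> bool:
    return username == "alice" and password == "s3cret"

def load_rows(path="login_data.csv"):
    # Each CSV row supplies one set of input values plus the expected outcome.
    with open(path, newline="") as f:
        return [(r["username"], r["password"], r["expected"] == "true")
                for r in csv.DictReader(f)]

@pytest.mark.parametrize("username,password,expected", load_rows())
def test_login_data_driven(username, password, expected):
    assert login(username, password) == expected
```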

Some advantages of data-driven testing are:

  • Reusability.
  • Repeatability.
  • Test data separation from test logic.
  • The number of test cases is reduced.

Automation testing is a testing methodology where an automation tool is used to execute the test case suite in order to increase test coverage as well as the speed of test execution. Automation testing does not require human intervention during execution, as it runs pre-scripted tests and is capable of reporting outcomes and comparing them with previous test runs.

Repeatability, ease of use, accuracy, and greater consistency are some of the advantages of Automation testing.

Some automation testing tools are listed below:

  • Selenium
  • Tellurium
  • Watir
  • SoapUI

Sanity testing is performed after receiving a build to check the new functionality and the defects that were fixed. In this form of testing, the goal is to check roughly that the functionality works as expected, to determine whether the bug is fixed, and to check the effect of the fixed bug on the application under test.

There is no point in the tester accepting the build and wasting time if sanity testing fails.

User acceptance testing is usually performed after the product has been thoroughly tested. In this form of testing, the software's users, or the client itself, use the application to make sure everything works as per the requirements and behaves correctly in real-world scenarios.

UAT is also known as End-user testing.

Risk-based testing is not just about delivering a project risk-free; the main aim of risk-based testing is to achieve the project outcome by carrying out best practices of risk management.

The major factors to be considered in Risk-based testing are as follows:

  • To identify when and how to implement risk-based testing on an appropriate application.
  • To identify the measures that work well in finding as well as handling risks in the critical areas of the application.
  • To achieve a project outcome that balances risk with the quality and features of the application.

Smoke testing is performed on the application after receiving a build. The tester usually tests the critical path, not the functionality in depth, to determine whether the build should be accepted for further testing or rejected in case the application is broken.

A smoke checklist usually contains the critical path of the application without which an application is blocked.
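A minimal sketch of automating such a smoke checklist with pytest and requests, assuming a hypothetical application reachable at `https://example.com`; the endpoints listed are only illustrative critical paths:

```python
import pytest
import requests

BASE_URL = "https://example.com"  # hypothetical application under test

# Only the critical paths are checked; any failure here blocks further testing.
CRITICAL_PATHS = ["/", "/login", "/search"]

@pytest.mark.parametrize("path", CRITICAL_PATHS)
def test_critical_path_is_reachable(path):
    response = requests.get(BASE_URL + path, timeout=10)
    assert response.status_code == 200, f"{path} is broken; reject the build"
```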

Exploratory testing means testing or exploring the application without following any schedules or procedures. While performing exploratory testing, testers do not follow any pattern; they use their out-of-the-box thinking and diverse ideas to see how the application performs.

Following this process covers even the smallest parts of the application and helps in finding more issues/bugs than the normal test-case-based testing process.

Exploratory testing is usually performed in cases when:

  • There is an experienced tester in the testing team who can use their testing experience to apply all the best possible scenarios.
  • All critical paths have been covered and the major test cases prepared as per the requirement specifications have been executed.
  • The application is critical and no possible case can be missed.
  • A new tester has joined the team; exploring the application helps them understand it better, and they will follow their own approach while executing scenarios rather than the path mentioned in the requirement document.

Accessibility testing is a form of usability testing that is performed to ensure that the application can be easily handled by people with disabilities like hearing impairment, colour blindness, low vision, etc. In today's scenario, the web has acquired a major place in our lives in the form of e-commerce sites, e-learning, e-payments, etc.

Thus, in order to grow better in life, everyone should be able to be a part of technology, especially people with disabilities.

Enlisted below are a few types of software that help and assist people with disabilities in using technology:

  • Speech recognition software
  • Screen reader software
  • Screen magnification software
  • Special keyboard

Stress testing is a form of performance testing where the application is put under exertion or stress, i.e. executed above its breaking threshold, to determine the point at which the application crashes. This condition usually arises when there are too many users and too much data.

Stress testing also verifies how the application recovers when the workload is reduced.

Load testing is a form of performance testing where the application is executed at various load levels to monitor the peak performance of the server, response time, server throughput, etc. Through the load testing process, the stability, performance, and integrity of the application are determined under concurrent system load.
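A minimal sketch of a load test using the Locust tool in Python, assuming a hypothetical application at `https://example.com`; Locust simulates concurrent users hitting the listed endpoints and reports response times and throughput:

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests.
    wait_time = between(1, 3)
    host = "https://example.com"  # hypothetical application under test

    @task(3)
    def view_home(self):
        self.client.get("/")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "test"})

# Run with, e.g.: locust -f loadtest.py --users 200 --spawn-rate 20
```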

Adhoc testing, usually known as random testing, is a form of testing which does not follow any test cases or requirements of the application. Adhoc testing is basically an unplanned activity where any part of the application is randomly checked to find defects.

In such cases, the defects encountered are very difficult to reproduce as no planned test cases are followed. Adhoc testing is usually performed when there is limited time to perform elaborate testing.

The Requirement Traceability Matrix (RTM) is a tool to keep track of requirement coverage over the course of the testing process.

In the RTM, all requirements are categorized by their development over the course of a sprint, and their respective IDs (new feature implementation, enhancement, previous issues, etc.) are maintained to track that everything mentioned in the requirement document has been implemented before the release of the product.

RTM is created as soon as the requirement document is received and is maintained till the release of the product.
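A minimal sketch of the coverage-tracking idea in Python, with hypothetical requirement and test case IDs; mapping each requirement to the test cases that cover it makes uncovered requirements easy to spot before release:

```python
# Hypothetical requirement IDs mapped to the test case IDs that cover them.
rtm = {
    "REQ-101": ["TC-001", "TC-002"],  # new feature implementation
    "REQ-102": ["TC-003"],            # enhancement
    "REQ-103": [],                    # previous issue, not yet covered
}

uncovered = [req for req, cases in rtm.items() if not cases]
if uncovered:
    print("Requirements without test coverage:", ", ".join(uncovered))
else:
    print("All requirements are covered by at least one test case.")
```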