Top 50 Software Testing Interview Questions You Must Prepare 26.Apr.2024

  1. Equivalence Partitioning.
  2. Use Case Testing.
  3. Data Flow Analysis.
  4. Exploratory Testing.
  5. Decision Testing.
  6. Inspections.

Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing, Exploratory Testing and Decision Testing are dynamic.

A software development model that illustrates how testing activities integrate with software development phases.

Because they share the aim of identifying defects but differ in the types of defect they find.

The use of data on paths through the code.

Because configuration management ensures that we know the exact version of the testware and the test object.

It helps prevent defects from being introduced into the code.

Triggered by modifications, migration or retirement of existing software.

The relationship between test cases and requirements is shown with the help of a document. This document is known as a traceability matrix.
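As a minimal illustration (the requirement and test case IDs below are made up for the example), such a matrix can be represented as a mapping from requirements to the test cases that cover them, which also makes coverage gaps easy to spot:

```python
# Hypothetical traceability matrix: requirement ID -> test case IDs covering it.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no test case yet, i.e. a coverage gap
}

# List the requirements not yet covered by any test case.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Uncovered requirements:", uncovered)  # ['REQ-003']
```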

SDLC deals with the development/coding of the software, while STLC deals with the validation and verification of the software.

There are four test levels:

  1. Unit/component/program/module testing
  2. Integration testing
  3. System testing
  4. Acceptance testing

Testing performed by potential customers at their own locations.

It depends on the risks for the system being tested. There are several criteria on the basis of which you can stop testing (a simple automated check along these lines is sketched after the list):

  1. Deadlines (testing, release)
  2. The test budget has been depleted
  3. The bug rate falls below a certain level
  4. Test cases are completed with a certain percentage passed
  5. Alpha or beta testing periods come to an end
  6. Coverage of code, functionality or requirements reaches a specified point
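As a rough sketch of how such criteria could be checked automatically (the threshold values below are illustrative assumptions, not fixed rules):

```python
# Illustrative exit-criteria check; all thresholds are assumptions for the example.
def can_stop_testing(pass_rate, code_coverage, open_critical_bugs,
                     min_pass_rate=0.95, min_coverage=0.80):
    """Return True when all stop criteria are satisfied."""
    return (pass_rate >= min_pass_rate
            and code_coverage >= min_coverage
            and open_critical_bugs == 0)

print(can_stop_testing(pass_rate=0.97, code_coverage=0.85, open_critical_bugs=0))  # True
```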

Semi-random test cases are obtained by generating random test cases and then applying equivalence partitioning to them; this removes redundant cases and leaves a smaller, semi-random set of test cases.
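A minimal sketch of the idea, assuming a simple numeric input whose equivalence classes are negative, zero and positive: random values are generated and then collapsed so that only one representative per class is kept.

```python
import random

def equivalence_class(x):
    """Assumed equivalence classes for a numeric input."""
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

random.seed(0)
random_cases = [random.randint(-100, 100) for _ in range(50)]

# Keep only one representative value per class -> the semi-random test cases.
semi_random = {}
for value in random_cases:
    semi_random.setdefault(equivalence_class(value), value)

print(semi_random)  # one representative value per equivalence class seen
```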

Independent testers are unbiased and also identify different defects.

Test design, scope, test strategy and approach are some of the details that a test plan document consists of:

  1. Test case identifier
  2. Scope
  3. Features to be tested
  4. Features not to be tested
  5. Test strategy & Test approach
  6. Test deliverables
  7. Responsibilities
  8. Staffing and training
  9. Risk and Contingencies

Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used.

For example: the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.

Risk-based Testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.
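As an illustrative sketch (the risk items and scores are assumptions for the example), risks can be scored by likelihood and impact, and the corresponding tests scheduled highest risk first:

```python
# Hypothetical risk items with likelihood and impact scores (1 = low, 5 = high).
risks = [
    {"area": "payment processing", "likelihood": 4, "impact": 5},
    {"area": "report layout",      "likelihood": 2, "impact": 2},
    {"area": "user login",         "likelihood": 3, "impact": 4},
]

# Risk level = likelihood x impact; test the highest-risk areas first.
for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(risk["area"], "-> risk level", risk["likelihood"] * risk["impact"])
```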

Test deliverables are a set of documents, tools and other components that have to be developed and maintained in support of testing.

There are different test deliverables at every phase of the software development lifecycle:

  1. Before Testing
  2. During Testing
  3. After Testing

Equivalence partitioning is a software testing technique that divides the application's input test data into partitions of equivalent data from which test cases can be derived, with each partition covered at least once. This method reduces the time required for software testing.
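For example, assuming a field that accepts ages from 18 to 65, the input domain falls into three partitions (below range, in range, above range), and one representative value per partition is enough. A minimal sketch:

```python
def is_valid_age(age):
    """Hypothetical function under test: accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

# One representative test value per equivalence partition.
partitions = {
    "below range": (10, False),
    "in range":    (30, True),
    "above range": (70, False),
}

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, f"partition '{name}' failed"
print("all partitions covered")
```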

Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced.
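A minimal sketch, assuming a hypothetical discount rule with two input conditions (member, order over 100) and one output (discount applied); each rule in the decision table becomes a test case:

```python
def discount_applied(is_member, order_over_100):
    """Hypothetical rule under test: discount only for members with large orders."""
    return is_member and order_over_100

# Decision table: each entry is one rule (a combination of inputs and the expected output).
decision_table = [
    {"is_member": True,  "order_over_100": True,  "expected": True},
    {"is_member": True,  "order_over_100": False, "expected": False},
    {"is_member": False, "order_over_100": True,  "expected": False},
    {"is_member": False, "order_over_100": False, "expected": False},
]

for rule in decision_table:
    actual = discount_applied(rule["is_member"], rule["order_over_100"])
    assert actual == rule["expected"], f"rule failed: {rule}"
print("all decision table rules pass")
```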

There are currently seven different agile methodologies that I am aware of:

  1. Extreme Programming (XP)
  2. Scrum
  3. Lean Software Development
  4. Feature-Driven Development
  5. Agile Unified Process
  6. Crystal
  7. Dynamic Systems Development Method (DSDM)

Because incremental integration provides better early defect screening and isolation.

The purpose of test completion criterion is to determine when to stop testing.

Metrics from previous similar projects and discussions with the development team.

Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.
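As a small sketch (the component names are made up for the example): the stub stands in for a component that the code under test calls, while the driver is the test code that calls the component under test.

```python
# Component under test: formats a price using a currency service it depends on.
def format_price(amount, currency_service):
    return f"{currency_service.symbol()}{amount:.2f}"

# Stub: replaces the real currency service that the component under test calls.
class CurrencyServiceStub:
    def symbol(self):
        return "$"  # fixed, simplified behaviour

# Driver: test code that calls the component under test and checks the result.
def test_format_price():
    assert format_price(9.5, CurrencyServiceStub()) == "$9.50"

test_format_price()
print("component test passed")
```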

To identify defects in any software work product.

Retesting: It is the process of checking defects that have been actioned by the development team, to verify that they are actually fixed.

Data Driven Testing (DDT): In data-driven testing, the application is tested with multiple sets of test data, i.e. the same test is executed with different sets of values.
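A minimal sketch of the idea, assuming a simple addition function as the code under test: the same test logic runs once per row of test data. In practice the data often comes from a spreadsheet, a CSV file or a framework feature such as pytest's parametrize.

```python
def add(a, b):
    """Hypothetical function under test."""
    return a + b

# Test data: each row is (input a, input b, expected result).
test_data = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
    (10, -3, 7),
]

# The same test logic is executed once per data row.
for a, b, expected in test_data:
    assert add(a, b) == expected, f"add({a}, {b}) != {expected}"
print("all data-driven cases pass")
```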

Mutation testing is a technique to determine whether a set of test data or test cases is useful, by intentionally introducing various code changes (bugs) and retesting with the original test data/cases to check whether the bugs are detected.
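As a hand-rolled illustration (real mutation testing tools automate this), assuming a small function and one deliberately introduced mutant: a useful test set passes on the original but fails on the mutant, showing that the change is detected.

```python
def max_of(a, b):
    """Original implementation."""
    return a if a > b else b

def max_of_mutant(a, b):
    """Mutant: the comparison operator has been deliberately flipped."""
    return a if a < b else b

def tests_pass(func):
    """Run the existing test cases against a given implementation."""
    return func(3, 5) == 5 and func(7, 2) == 7

print("original passes:", tests_pass(max_of))             # True
print("mutant detected:", not tests_pass(max_of_mutant))  # True -> the test set is useful
```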

Inexpensive way to get some benefit.

In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps:

  1. Planning
  2. Kick-off
  3. Preparation
  4. Review meeting
  5. Rework
  6. Follow-up.

Failure is a departure from specified behaviour.

Testing the end-to-end functionality of the system as a whole is defined as functional system testing.

The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.

Phantom is a freeware Windows GUI automation scripting language. It allows you to take control of windows and functions automatically. It can simulate any combination of keystrokes and mouse clicks, as well as menus, lists and more.

The exit criteria are determined on the basis of 'Test Planning'.

  1. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
  2. Provide ideas for test process improvement.
  3. Provide a vehicle for assessing tester competence.
  4. Provide testers with a means of tracking the quality of the system under test.