Top 50 Testing Tools Interview Questions You Must Prepare 21.Apr.2024

  • This is a common problem and a major headache.
  • Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
  • It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
  • If the code is well-commented and well-documented this makes changes easier for the developers.
  • Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
  • The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
  • Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
  • Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
  • Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
  • Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.
  • Try to design some flexibility into automated test scripts.
  • Focus initial automated testing on application aspects that are most likely to remain unchanged.
  • Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
  • Design some flexibility into test cases (this is not easily done, the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
  • Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).

It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

The point at which some deliverable produced during the software engineering process is put under formal change control.

A metalanguage used to formally describe the syntax of a language.

Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort.

A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
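A minimal sketch of such a race, using Python threads and an invented shared counter: two threads perform an unsynchronized read-modify-write on the same variable, so updates can be lost; adding a lock moderates the simultaneous access.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Unsynchronized read-modify-write: another thread may write between
    # the read of `counter` and the store back, losing an update.
    global counter
    for _ in range(n):
        tmp = counter
        counter = tmp + 1

def safe_increment(n):
    # The lock serializes the read-modify-write, so no updates are lost.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

unsafe_total = run(unsafe_increment)  # may be anywhere up to 200000
safe_total = run(safe_increment)      # always exactly 200000
```

The unsafe version may happen to produce the right total on a given run; races are intermittent by nature, which is what makes them hard to test for.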

Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews.

A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

1. Good logic for programming.
2. Analytical skills.
3. Pessimistic in nature.

Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

Testing tools can be used for :

  • Sanity tests (repeated on every build).
  • Stress/load tests (simulating a large number of users, which is impossible to do manually).
  • Regression tests (run after every code change).
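As a hedged illustration of the kind of scripted check such tools automate on every build, here is a tiny sanity suite using Python's standard unittest module; the function `apply_discount` is a hypothetical piece of the application under test, not from any real product.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical application code: price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class SanitySuite(unittest.TestCase):
    # Cheap checks, repeated on every build, that core behaviour still works.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SanitySuite))
```

Because the suite is scripted, a tool or CI server can rerun it unattended after each build or code change, which is exactly what makes sanity and regression testing practical at scale.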

A graphical representation of inputs and their associated output effects, which can be used to design test cases.
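One way to make the idea concrete is to flatten the cause-effect relationships into a decision table and derive one test case per combination of causes. The login rule below is a made-up example for illustration only.

```python
from itertools import product

def grant_access(valid_user, valid_password):
    # Hypothetical effect: access is granted only when both causes hold.
    return valid_user and valid_password

# Derive one test case per combination of causes (a full decision table).
decision_table = [
    {"valid_user": u, "valid_password": p, "expected": u and p}
    for u, p in product([True, False], repeat=2)
]

# Each row of the table becomes an executable test case.
for case in decision_table:
    actual = grant_access(case["valid_user"], case["valid_password"])
    assert actual == case["expected"], case
```

For larger graphs the table is pruned to the cause combinations that actually produce distinct effects, rather than enumerating every combination.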

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item. The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Severity: It is the impact of the bug on the application. The severity level should be set by the tester. The severity levels are: Low, Medium, High, Very High, and Urgent. It is set by the tester and it cannot be changed.
1. Bug causes system crash or data loss.
2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
3. Bug causes minor functionality problems, may affect "fit and finish".
4. Bug contains typos, unclear wording, or error messages in low-visibility fields.

Priority: How important is it to fix the bug is priority. Priority levels are set by the team lead or test manager and it can be changed as required.
1. Must fix as soon as possible. Bug is blocking further progress in this area.
2. Should fix soon, before product release.
3. Fix if time permits; somewhat trivial. May be postponed.
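The distinction above can be sketched as two separate fields on a bug report: severity fixed by the tester, priority adjustable by the lead. The class and field names below are illustrative, not from any real bug tracker.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4
    URGENT = 5

class Priority(Enum):
    MUST_FIX_NOW = 1
    FIX_BEFORE_RELEASE = 2
    FIX_IF_TIME = 3

@dataclass
class BugReport:
    title: str
    severity: Severity   # impact on the application; set by the tester, fixed
    priority: Priority   # importance of fixing; set by the lead, adjustable

    def reprioritize(self, new_priority):
        # Only priority may change; severity stays as the tester set it.
        self.priority = new_priority

bug = BugReport("Crash on save", Severity.URGENT, Priority.MUST_FIX_NOW)
bug.reprioritize(Priority.FIX_BEFORE_RELEASE)
```

Keeping the two fields independent is the point: a cosmetic typo on the home page can be low severity but high priority, while a crash in an obscure, rarely used path can be high severity but lower priority.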

A document that describes in detail the characteristics of the product with regard to its intended features.

Analysis of a program carried out without executing the program.

RAD assumes the use of fourth-generation techniques and tools such as VB, VC++, and Delphi rather than creating software using conventional third-generation programming languages. RAD works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.

An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
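A minimal sketch of the stub idea: the top-level component is tested first, with its lower-level dependency replaced by a stub that returns canned data. The component names here are invented for illustration.

```python
def fetch_exchange_rate_stub(currency):
    # Stub simulating the not-yet-tested lower-level component:
    # returns canned data instead of querying a real rate service.
    return {"USD": 1.0, "EUR": 0.9}[currency]

def convert(amount, currency, rate_source):
    # Top-level component under test. It takes its dependency as a
    # parameter so the stub can be injected during integration testing.
    return round(amount * rate_source(currency), 2)

# Test the top component against the stub before the real
# lower-level rate fetcher is integrated.
assert convert(100, "EUR", fetch_exchange_rate_stub) == 90.0
```

Once the top component passes, the stub is replaced by the real lower-level component and the process repeats one level down, exactly as the definition above describes.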

  • Testing is inherent to every phase of the waterfall model.
  • It is an enforced disciplined approach.
  • It is documentation driven, that is, documentation is produced at every stage.

White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.
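A small sketch of what "coverage of branches" means in practice: the tests below are chosen by looking at the internal condition in `classify` (a made-up function), so that each branch is exercised at least once.

```python
def classify(age):
    if age >= 18:        # branch A: condition true
        return "adult"
    return "minor"       # branch B: condition false

# White-box test design: one test per branch of the internal condition
# gives 100% branch coverage of this function.
assert classify(30) == "adult"   # exercises branch A
assert classify(10) == "minor"   # exercises branch B
```

A black-box tester would pick the same sort of inputs from the specification alone; the white-box view adds the guarantee that no branch of the actual code went unexecuted (and suggests boundary inputs such as exactly 18).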

A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Testing work is essentially unlimited, especially in large applications. "Enough" testing is the amount needed to show that the application meets its product requirements and specifications well, including functionality, usability, stability, performance, and so on.

Black-box tests conducted once the software has been integrated.

Automate all the high-priority test cases that need to be executed as part of regression testing for each build cycle.

Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear only after a large number of transactions have been executed.
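A toy sketch of the idea, assuming an in-memory "ledger" standing in for the real system under test: drive far more transactions through it than a busy day would see, then verify it is still correct at the end of the sustained run.

```python
def process_transaction(ledger, account, amount):
    # Stand-in for the operation under test.
    ledger[account] = ledger.get(account, 0) + amount

ledger = {}
TRANSACTIONS = 100_000   # several times a busy day's volume, scaled down here
for i in range(TRANSACTIONS):
    process_transaction(ledger, f"acct-{i % 100}", 1)

# After the sustained run, every account should hold exactly its share;
# a leak or corruption that only shows up under volume would break this.
assert all(balance == TRANSACTIONS // 100 for balance in ledger.values())
```

In a real endurance test the harness would also record response times and resource usage over the run, since degradation over time (not just final correctness) is what this kind of testing is meant to expose.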

A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests.

Functional testing is a testing process where we test the functionality/behaviour of each functional component of the application, e.g. a minimize button, a transfer button, links, etc. That is, we check what each component does in the application.

Regression testing is testing the behaviour of the unchanged areas of the application when there is a change in the build; i.e., we check whether the changed requirement has altered the behaviour of the unchanged areas. The impacted area may be the whole application or some part of it.

Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected.
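The test-first cycle can be sketched as follows, using Python's unittest; `add_points` is a hypothetical function invented for the example. The test is written first (and would fail while the function did not exist), then the simplest implementation is written to make it pass.

```python
import unittest

class TestScore(unittest.TestCase):
    # Written first, before add_points existed; it defines the
    # expected behaviour and initially fails.
    def test_points_accumulate(self):
        self.assertEqual(add_points(10, 5), 15)

# Simplest implementation that makes the test pass.
def add_points(score, points):
    return score + points

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestScore))
```

Because the tests live in source control alongside the code, every developer can rerun the whole suite after each small change, which is what makes XP's frequent iterations safe.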

  • This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than to:
  • Hire good people.
  • Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer.
  • Everyone in the organization should be clear on what 'quality' means to the customer.

Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Many of the process models currently used can be more generally connected by the ‘V’ model where the ‘V’ describes the graphical arrangement of the individual phases. The ‘V’ is also a synonym for Verification and Validation.
By the ordering of activities in time sequence and by abstraction levels, the connection between development and test activities becomes clear. Activities lying opposite one another complement each other; i.e., they serve as the basis for the corresponding test activities. For example, the system test is carried out on the basis of the results of the specification phase.

From the testing point of view, all of the models are deficient in various ways:

  • Test activities first start after implementation. The connection between the various test stages and the basis for testing is not clear.
  • The tight link between test, debug and change tasks during the test phase is not clear.

Testing of individual software components (Unit Testing).

Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

As I know it, test case design is about formulating the steps to be carried out to verify something about the application under test, and this cannot be automated. However, the process of recording the test results into the Excel sheet can be automated.

A technique used during planning, analysis, and design that creates a functional hierarchy for the software.

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Verifying that a product is accessible to people with disabilities (e.g., visual, hearing, or cognitive impairments).

  • The choice of automation tool for certain technologies.
  • The wrong set of tests automated.

Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is complete when the expected test results are met or the differences are explainable/acceptable.

High-severity bugs affect the end users. Testers test an application from the user's point of view, hence severity is set high. High priority is given to bugs that affect production; project managers assign a high priority from the production point of view.

In the Spiral Model, a cyclical and prototyping view of software development is shown. Tests are explicitly mentioned (risk analysis, validation of the requirements and of the development), and the test phase is divided into stages. The test activities include module, integration, and acceptance tests. However, in this model the testing also follows the coding. The exception to this is that the test plan should be constructed after the design of the system. The spiral model also identifies no activities associated with the removal of defects.

Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

Testing a system or an application on the fly, i.e., running just a few tests here and there to ensure the system or application does not crash.