Here are 100 software testing interview questions and answers, organized by category to cover fundamentals, methodologies, tools, automation, and real-world scenarios.
Software Testing Fundamentals
1. What is software testing?
The process of evaluating and verifying that a software product or application does what it’s supposed to do. It involves identifying defects, validating requirements, and ensuring quality before release.
2. What are the objectives of software testing?
To find defects, ensure the product meets requirements, build confidence in quality, provide information for decision-making, and prevent defects from reaching end users.
3. What is the difference between verification and validation?
Verification: “Are we building the product right?” (reviews, walkthroughs, inspections). Validation: “Are we building the right product?” (actual testing, dynamic testing). Verification is static; validation is dynamic.
4. What is the difference between error, bug, defect, and failure?
An error is a human mistake (coding or logic error). A defect/bug is the result of an error in the code or document. A failure occurs when the software doesn’t behave as intended due to a defect.
5. What are the seven principles of software testing?
- Testing shows presence of defects, not absence.
- Exhaustive testing is impossible.
- Early testing saves time and cost.
- Defect clustering (Pareto principle: few modules contain most defects).
- Pesticide paradox (same tests eventually stop finding new bugs).
- Testing is context-dependent.
- Absence-of-errors fallacy (just because no bugs were found doesn’t mean it’s ready).
6. What is the difference between quality assurance (QA) and quality control (QC)?
QA is process-oriented: preventing defects by improving processes. QC is product-oriented: identifying defects in the finished product through testing. Testing is a QC activity.
7. What is a test case?
A set of conditions, inputs, actions, and expected results developed to verify a particular functionality or requirement. It includes test case ID, description, steps, expected result, actual result, and status.
8. What is a test scenario?
A high-level description of what to test, usually derived from use cases or requirements. A scenario can have multiple test cases. Example: “Verify login functionality.”
9. What is a test plan?
A document outlining the testing strategy, objectives, schedule, resource allocation, scope, risks, and deliverables for a testing project. It guides the entire testing process.
10. What is a test suite?
A collection of test cases grouped together to test a logical set of functionalities or a specific feature of the software.
Testing Types & Levels
11. What are the different levels of testing?
Unit testing (individual components), Integration testing (interfaces between modules), System testing (complete application), Acceptance testing (by users/clients). Smoke and regression testing are test types that can be applied at several of these levels rather than being levels of their own.
12. What is Unit Testing?
Testing of individual software components or modules, usually done by developers. Its purpose is to validate that each unit of code performs as designed, often using test frameworks like JUnit, NUnit.
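As a sketch of what a unit test looks like in practice, here is a minimal example using Python's built-in `unittest` (the analogue of JUnit/NUnit); the `apply_discount` function and its rules are hypothetical, invented just for illustration.

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Each test exercises one behavior of one unit in isolation; the whole suite runs with `python -m unittest`.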
13. What is Integration Testing?
Testing the interfaces and interactions between integrated modules to expose defects in their communication. Approaches: Big Bang, Top-Down, Bottom-Up, Sandwich.
14. What is System Testing?
Testing the complete, integrated application to verify it meets specified requirements. It’s a black-box testing technique conducted by a dedicated testing team in an environment that mirrors production.
15. What is User Acceptance Testing (UAT)?
The final testing phase where actual end users or client representatives validate the system against business requirements. Types: Alpha (internal) and Beta (external) testing.
16. What is the difference between functional and non-functional testing?
Functional testing verifies what the system does (features, operations). Non-functional testing verifies how the system behaves (performance, security, usability, reliability, scalability).
17. What is smoke testing?
A quick, shallow test to check whether a build is stable enough for further, in-depth testing. Also called “build verification testing.” It is sometimes loosely called a sanity check, though sanity testing proper is narrower and aimed at verifying specific fixes.
18. What is regression testing?
Re-running previously executed tests on a modified application to ensure that new code changes haven’t broken existing functionality. It can be done partially or fully.
19. What is black-box testing?
Testing without knowledge of the internal code structure. Testers focus solely on inputs and expected outputs based on requirements. Techniques: equivalence partitioning, boundary value analysis, decision tables.
20. What is white-box testing?
Testing with full knowledge of the internal logic and code structure. It involves testing paths, loops, conditions. Requires programming skills. Techniques: statement coverage, branch coverage, path coverage.
21. What is grey-box testing?
A combination of black-box and white-box, where the tester has partial knowledge of the internal workings (like database schema, algorithms) but tests from a user perspective.
22. What is exploratory testing?
An unscripted approach where testers actively explore the application, design and execute tests on the fly based on their observations, experience, and intuition. Useful when requirements are thin.
23. What is ad-hoc testing?
Informal testing without any formal test design or documentation, often purely based on the tester’s gut feeling. Similar to exploratory but less structured.
24. What is end-to-end testing?
Testing the complete flow of an application from start to finish, including its interaction with external interfaces, databases, and other systems, to ensure data integrity and system integration.
25. What is sanity testing?
Narrow, deep testing after receiving a build with minor fixes or changes; its goal is to verify that the bug is fixed and no issues were introduced in related areas. Often a subset of regression.
Test Design Techniques
26. What is Equivalence Partitioning?
A black-box technique that divides input data into valid and invalid partitions, where each partition is expected to behave similarly. You test one representative value from each partition.
27. Give an example of Equivalence Partitioning.
For a field accepting ages 18-60: valid partition 18-60 (test a representative value such as 25), invalid partition below 18 (test 10), invalid partition above 60 (test 70). The boundary values themselves (17, 18, 60, 61) belong to boundary value analysis.
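The example above can be expressed as a tiny self-checking test; `is_eligible_age` is a hypothetical function standing in for the system under test.

```python
def is_eligible_age(age):
    # Hypothetical rule from the example: accept ages 18-60 inclusive.
    return 18 <= age <= 60

# One representative value per equivalence partition.
partitions = {
    "valid 18-60": (25, True),
    "invalid below 18": (10, False),
    "invalid above 60": (70, False),
}

for name, (age, expected) in partitions.items():
    assert is_eligible_age(age) == expected, name
```

Three test values cover the whole input space at the partition level; any value inside a partition is assumed to behave like its representative.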
28. What is Boundary Value Analysis (BVA)?
Testing at the boundaries between partitions because defects tend to concentrate at the edges. For 18-60: test 17, 18, 60, 61, plus normal value.
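A minimal sketch of BVA for the same hypothetical 18-60 rule, checking each edge value plus one normal value:

```python
def is_eligible_age(age):
    # Hypothetical rule: accept ages 18-60 inclusive.
    return 18 <= age <= 60

# Boundary values around the 18-60 range, plus one normal value.
cases = [(17, False), (18, True), (60, True), (61, False), (30, True)]
for age, expected in cases:
    assert is_eligible_age(age) == expected, f"age {age}"
```

An off-by-one error (e.g. `18 < age` instead of `18 <= age`) would fail exactly at the 18 boundary, which is why these values are prioritized.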
29. What is Decision Table Testing?
A technique for functions that depend on combinations of logical conditions; it lists inputs (conditions) and corresponding outputs (actions) in a table, ensuring all combinations are covered.
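As a sketch, a two-condition decision table for a hypothetical login rule can be driven directly from code, with every condition combination enumerated:

```python
from itertools import product

def login_outcome(valid_user, valid_password):
    # Hypothetical rule: both conditions must hold to log in.
    if valid_user and valid_password:
        return "logged_in"
    return "error_shown"

# Decision table: conditions -> expected action.
table = {
    (True, True): "logged_in",
    (True, False): "error_shown",
    (False, True): "error_shown",
    (False, False): "error_shown",
}

# Exercise every combination of the two conditions.
for conditions in product([True, False], repeat=2):
    assert login_outcome(*conditions) == table[conditions]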
30. What is State Transition Testing?
Used when a system can be in different states and transitions between them are triggered by events. You create a state diagram and test sequences of events to validate state changes.
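A small sketch of state transition testing for a hypothetical order workflow: the transition table is the state diagram in code, and tests cover both valid event sequences and invalid transitions.

```python
# Hypothetical state machine for an order: only these transitions exist.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("new", "cancel"): "cancelled",
    ("paid", "cancel"): "cancelled",
}

def apply_event(state, event):
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event} from {state}")
    return TRANSITIONS[key]

# Valid sequence of events, tested end to end.
state = "new"
for event in ["pay", "ship"]:
    state = apply_event(state, event)
assert state == "shipped"

# Invalid transition must be rejected.
try:
    apply_event("shipped", "pay")
    assert False, "expected ValueError"
except ValueError:
    pass
```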
31. What is Use Case Testing?
Testing derived from use cases (how actors interact with the system). Each use case scenario (basic flow and alternate flows) is a test scenario.
32. What is Error Guessing?
A technique relying on the tester’s experience and intuition to guess where defects are likely to cluster. Not formal but often used with other techniques.
Defect Management & Reporting
33. What is a bug life cycle (defect life cycle)?
The journey of a defect from discovery to closure: New → Assigned → Open (In Progress) → Fixed → Retest → Verified/Closed, or Reopened if not fixed. Some workflows include Rejected, Deferred.
34. What is a bug report? What should it contain?
A formal document detailing a defect. Essential fields: Defect ID, Title, Description, Steps to Reproduce, Actual Result, Expected Result, Environment (OS, browser, device), Severity, Priority, Attachments (screenshots/logs).
35. What is the difference between severity and priority?
Severity is the degree of impact the defect has on the system’s functionality (Critical, Major, Minor, Cosmetic). Priority is the order in which the defect should be fixed (High, Medium, Low) based on business need. A low-severity cosmetic bug can be high priority if it’s on a brand logo.
36. How do you handle a disagreement with a developer about a bug?
I ensure the bug report is clear with all evidence (steps, expected vs actual, logs). I discuss the requirement and user impact. If no agreement, I escalate to the lead/PM with data, but always keep it professional and focus on product quality.
37. What is the purpose of a defect triage meeting?
To review newly reported bugs, assign severity/priority, assign to developers, discuss rejections, and ensure the team is focused on the most critical issues.
38. What is defect masking?
When a defect is not detected because it’s hidden by another defect that prevents the tester from reaching it. The first bug masks the second. Often found after fixing the first bug.
39. What is a latent defect?
A defect that has been present in the system for a long time but hasn’t been discovered because the conditions to trigger it were never met.
40. What is an escaped defect?
A defect that was missed during testing and reaches the end user.
Test Management & Process
41. What is a test strategy?
A high-level document that describes the testing approach for a specific project. It covers objectives, scope, test levels, roles, tools, risks, and entry/exit criteria.
42. What is a traceability matrix (RTM)?
A document mapping requirements to test cases, ensuring all requirements have corresponding tests. Helps with coverage analysis and impact assessment when requirements change.
43. What are entry and exit criteria for testing?
Entry criteria: conditions that must be met before testing can start (test environment ready, test cases written, build deployed). Exit criteria: conditions for concluding testing (all tests executed, critical bugs fixed, coverage met, test report approved).
44. How do you estimate testing effort?
Using techniques like Work Breakdown Structure, expert judgment, past project data, and function point analysis. Then factoring in risks, team capability, and tool support.
45. How do you handle a situation when there are no clear requirements?
I work with business analysts/stakeholders to clarify, use exploratory testing to understand expected behavior, look at similar existing systems, and document assumptions. I also raise the risk of gaps.
46. What is risk-based testing?
Prioritizing testing based on the probability and impact of failure. Test cases for high-risk areas are executed earlier and more thoroughly. It’s used when time/resources are limited.
47. What is the difference between test plan and test strategy?
Test plan is project-specific (detailed schedule, resources, tasks). Test strategy is a higher-level organization-wide approach (methodologies, standards, key principles). Often the strategy is part of the test plan.
48. How do you measure test coverage?
Requirements coverage (% of requirements tested), code coverage (for unit tests: statement, branch, path), risk coverage, and feature coverage. Focus should be on risk-based coverage, not just achieving a number.
49. What are the different test closure activities?
Check against exit criteria, archive testware, document lessons learned, generate test summary report, and handover to maintenance testing team.
50. How would you decide when to stop testing?
Based on exit criteria: high-priority bugs resolved, requirements covered, test pass rate, risk level acceptable, and management decision. I never rely solely on deadlines.
Agile & DevOps Testing
51. How does testing differ in an Agile environment?
Testing is continuous, iterative, and integrated into each sprint. Testers work closely with developers and business, participate in story grooming, and rely heavily on automation and regression testing.
52. What is Shift Left testing?
A practice of moving testing activities earlier in the software development lifecycle, starting from requirements analysis and design, to detect and fix defects sooner, reducing cost and time.
53. What are the four quadrants of Agile testing?
A model that helps identify tests: Q1 (unit/component tests; technology-facing, supporting the team), Q2 (functional and story tests; business-facing, supporting the team), Q3 (exploratory, usability, UAT; business-facing, critiquing the product), Q4 (performance, security; technology-facing, critiquing the product).
54. What is continuous testing in DevOps?
Automated testing integrated into the CI/CD pipeline where tests are run automatically on every code commit and deployment to provide rapid feedback on quality.
55. How do you define a test case in a sprint with very short deliverables?
I focus on high-risk scenarios, write lightweight test cases (perhaps just a one-liner with key checks), use exploratory testing for edge cases, and pair with developers to define acceptance criteria that double as tests.
56. What is a Definition of Done (DoD) from a testing perspective?
Agreed criteria a story must meet to be considered complete: all acceptance tests pass, unit tests written, regression suites run, exploratory session done, bugs fixed or tracked, and code reviewed.
57. What is Behavior-Driven Development (BDD)?
A practice that uses natural language (Gherkin: Given-When-Then) to define test scenarios collaboratively by developers, testers, and business. Tools: Cucumber, SpecFlow. Promotes shared understanding.
58. How do you manage regression testing in Agile when time is limited?
By automating a core regression suite that runs on each build, selecting only impacted areas for manual regression (risk-based), and using tools to identify changed code (impact analysis).
Automation Testing
59. What is automation testing?
Using specialized tools to execute pre-scripted tests on the software, comparing actual outcomes to expected results automatically. It reduces manual effort for repetitive, data-driven, and regression tests.
60. Which tests should NOT be automated?
Exploratory tests, usability tests, tests that run only once with a low ROI, tests for frequently changing UI, and tests where setup is too complex or fragile.
61. What are the key criteria for selecting a test automation tool?
Ease of use, technology compatibility (web, mobile, API), scripting language support, integration with CI/CD, reporting capabilities, cost, and community/support.
62. Name some popular test automation tools.
Selenium (web), Cypress, Playwright, Appium (mobile), JUnit/TestNG (unit frameworks), Rest Assured (API), Postman/Newman, JMeter (performance), Katalon Studio, UFT.
63. What is a test automation framework?
A set of guidelines, coding standards, and tooling that provide an environment for efficient test script creation and execution. Examples: Data-Driven, Keyword-Driven, Hybrid, Modular.
64. Explain the Page Object Model (POM).
A design pattern in Selenium automation where web pages are represented as classes, and elements are stored with associated actions. It enhances reusability and maintenance by separating test logic from page structure.
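A minimal sketch of the pattern, using a stubbed driver so the structure is visible without a real browser; with Selenium, the driver would be a WebDriver instance and `fill`/`click` would map to `find_element` calls. All names and locators here are hypothetical.

```python
class StubDriver:
    """Stand-in for a Selenium WebDriver, for illustration only."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def fill(self, locator, value):
        self.fields[locator] = value

    def click(self, locator):
        self.clicked.append(locator)

class LoginPage:
    # Locators live in one place; tests never reference them directly.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#login-btn"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.fill(self.USERNAME, username)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

# The test reads as user intent, not as element lookups.
driver = StubDriver()
LoginPage(driver).login("alice", "s3cret")
assert driver.fields == {"#username": "alice", "#password": "s3cret"}
assert driver.clicked == ["#login-btn"]
```

If a locator changes, only the page class is updated; every test that logs in keeps working unchanged.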
65. What is Data-Driven testing?
Running the same test logic with multiple sets of data from external sources (Excel, CSV, databases). Enables testing with many inputs without duplicating test scripts.
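A sketch of the idea using Python's `csv` module; the data source is inline here for self-containment, but in practice it would be an external CSV/Excel file or database. The password rule is hypothetical.

```python
import csv
import io

def is_strong_password(pw):
    # Hypothetical rule: at least 8 characters including a digit.
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

# External-style data source (inline for the example).
data = io.StringIO(
    "password,expected\n"
    "abc,False\n"
    "longenough1,True\n"
    "nodigitshere,False\n"
)

# One test script, many data rows.
for row in csv.DictReader(data):
    expected = row["expected"] == "True"
    assert is_strong_password(row["password"]) == expected, row["password"]
```

Adding a new input case means adding a data row, not writing a new test.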
66. What is Keyword-Driven testing?
Tests are created using a set of keywords that represent actions (e.g., “Click”, “Input”). Testers can write test cases in a table using these keywords, and the framework executes the corresponding code.
67. How do you handle dynamic elements in Selenium?
Using relative XPath, CSS selectors with attributes that don’t change, contains(), starts-with(), or waiting mechanisms (explicit waits) until element is clickable/visible.
68. What is the difference between findElement and findElements in Selenium?
findElement returns a single web element and throws NoSuchElementException if not found. findElements returns a list of elements (an empty list if none are found) and never throws for missing elements.
69. What is an assertion?
A verification command that checks whether a condition is true; if it fails, the test stops (hard assertion) or logs the failure and continues (soft assertion). Tools: TestNG assertions, JUnit assertions.
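The hard/soft distinction can be sketched in plain Python: a bare `assert` is a hard assertion, while the `SoftAssert` class below mimics the collect-then-report behavior of TestNG's SoftAssert (the class itself is illustrative, not a real library API).

```python
# Hard assertion: the first failure stops the test immediately.
def hard_check(value):
    assert value > 0, "value must be positive"
    return "passed"

# Soft assertion sketch: collect failures, report them at the end.
class SoftAssert:
    def __init__(self):
        self.failures = []

    def check(self, condition, message):
        if not condition:
            self.failures.append(message)

    def assert_all(self):
        if self.failures:
            raise AssertionError("; ".join(self.failures))

soft = SoftAssert()
soft.check(1 + 1 == 2, "math broke")
soft.check("a" in "cat", "substring check failed")
soft.assert_all()  # raises only if any earlier check failed
```

Soft assertions are useful when one test verifies several independent fields and you want all failures reported in a single run.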
70. What are the challenges of test automation?
High initial investment, maintenance overhead especially for UI tests, false positives/negatives, dealing with dynamic content, and integrating with CI/CD. Requires skilled resources.
Performance Testing
71. What is performance testing?
Evaluating the speed, responsiveness, stability, and scalability of an application under a given workload. Types include load, stress, soak, and spike testing.
72. What is load testing?
Testing the system’s behavior under expected normal and peak load conditions, ensuring it meets the required response times and throughput.
73. What is stress testing?
Pushing the system beyond its normal capacity to find its breaking point and see how it recovers (failover, error handling).
74. What is soak (endurance) testing?
Testing the system over an extended period with a continuous expected load to detect memory leaks, resource leaks, and degradation.
75. What is spike testing?
Suddenly increasing the load by a large factor and observing behavior; tests the system’s ability to handle bursts of traffic.
76. What key metrics do you analyze in performance testing?
Response time, throughput (requests per second), concurrent users, hits per second, error rate, CPU/memory utilization, network latency.
77. Name a performance testing tool you’ve used.
JMeter (open source, can simulate heavy loads), LoadRunner, Gatling, k6. For quick API load tests, I also use Artillery.
78. How do you set up a performance test environment?
Ensure it mirrors production hardware/networks as closely as possible, isolate from other tests, use dedicated monitoring tools, and start with a dry run to calibrate load generators.
79. How do you analyze performance test results?
Look for bottlenecks: high response times correlating with CPU spikes, memory leaks (gradual increase), thread deadlocks, slow database queries. Provide recommendations to dev team.
80. What is the difference between throughput and concurrent users?
Throughput is number of transactions/requests per second. Concurrent users are the number of users logged in or active at the same time; they might not all be making requests simultaneously.
API Testing
81. What is API testing?
Testing application programming interfaces directly — verifying that they meet functionality, reliability, performance, and security expectations without a UI.
82. What are common HTTP methods used in REST API testing?
GET (retrieve), POST (create), PUT (update/replace), PATCH (partial update), DELETE (remove).
83. What status codes should you expect from a successful POST, GET, PUT, DELETE?
POST → 201 Created; GET → 200 OK; PUT → 200 OK or 204 No Content; DELETE → 200 OK or 204 No Content; and for errors, appropriate 4xx/5xx.
84. What is the difference between PUT and PATCH?
PUT replaces the entire resource; you must send the full representation. PATCH applies a partial update to the resource.
85. How do you test an API manually?
Using tools like Postman or cURL: I send requests with parameters, headers, and body in appropriate format (JSON), then validate response status code, body structure, data, headers, and performance.
86. What do you validate in an API response?
Status code, response time, response body schema and data correctness, headers (Content-Type, etc.), error messages for invalid requests, and security (authentication required).
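These checks can be sketched offline against a canned response, which is what the assertions look like regardless of whether the call was made via Postman, cURL, or a client library; the field names and values here are hypothetical.

```python
import json

# Canned response standing in for a real API call.
status_code = 200
headers = {"Content-Type": "application/json"}
body = '{"id": 42, "name": "Widget", "price": 9.99}'

# Status code and headers.
assert status_code == 200
assert headers["Content-Type"].startswith("application/json")

# Body schema, types, and data sanity.
payload = json.loads(body)
assert set(payload) == {"id", "name", "price"}  # expected keys only
assert isinstance(payload["id"], int)
assert payload["price"] > 0
```

In a real suite the same assertions would run against the live response object, and schema checks would typically use a JSON Schema validator rather than hand-written key comparisons.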
87. What is a mock API and when is it used?
A simulated API that mimics the real API’s responses. Used when the real API is under development or unavailable, enabling front-end development and testing without dependency.
88. How do you automate API tests?
Using Rest Assured (Java), requests + pytest/unittest (Python), Postman/Newman, or SuperTest (Node.js). Tests are integrated into CI pipeline.
89. What is authentication in API testing? How do you test it?
Methods: Basic Auth (username/password), API Key, OAuth 2.0 token. You test valid credentials get 200, invalid get 401 Unauthorized or 403 Forbidden, and without token get appropriate error.
90. How does API testing fit into CI/CD?
Automated API tests are triggered post-build/deployment; they run faster than UI tests and catch business logic errors early in the pipeline.
Mobile & Specialized Testing
91. What are the challenges in mobile application testing?
Device fragmentation (OS versions, screen sizes), network conditions (3G/4G/5G/Wi-Fi/offline), battery consumption, app interruptions (calls, notifications), and frequent OS updates.
92. What is the difference between an emulator and a simulator?
An emulator mimics both the hardware and software of the target device, so it behaves closer to a real device. A simulator mimics only the software environment and is used for app-level behavior (like the iOS Simulator). In practice, Android tooling provides emulators, while iOS provides a simulator.
93. What is cross-browser testing?
Testing a web application across multiple browsers (Chrome, Firefox, Safari, Edge) and versions to ensure consistent behavior and appearance.
94. How do you test a mobile app’s offline capabilities?
Enable airplane mode or disable network, then perform actions that require data, ensuring the app shows appropriate messages, caches data, and syncs when back online.
95. What is security testing?
Identifying vulnerabilities in the software to ensure data is protected: authentication flaws, injection (SQL/XSS), encryption, session management. Penetration testing is a part of it.
96. What is usability testing?
Evaluating how easy, efficient, and satisfying the application is to use from the end user’s perspective, focusing on UI flow, intuitiveness, and accessibility.
Scenario-Based & Soft Skills
97. You are given a login page. What test cases would you write?
Positive: valid username and password. Negative: invalid username, invalid password, empty fields, case sensitivity, password masking, max length, special characters, SQL injection, rate limiting (multiple failed attempts), “forgot password” flow, and browser back/forward button after login.
98. You have a very tight deadline; what is your testing approach?
I prioritize high-risk and high-impact areas, use exploratory testing for quick feedback, focus on critical path and core functionality, and ensure at least a basic smoke test. I communicate the risks of reduced coverage to stakeholders.
99. How do you stay updated with new testing trends and tools?
I follow testing communities (Ministry of Testing, STH), blogs, attend webinars/conferences, contribute to open source projects, and practice with new tools in side projects.
100. Tell me about a bug you missed and what you learned.
(Prepare your own) “I missed a date-related bug where a leap year wasn’t handled. It reached production. I learned to always include boundary values for date fields and added a checklist item for date-related edge cases in test plans. I also implemented a review of edge cases with developers for all date-handling stories.”