
An overview of questions frequently asked at software testing interviews, with answers to them

The main goal of this article is to help software testers (both beginners and experienced ones) overcome the fear of an upcoming interview that comes from not knowing what to expect.

The secondary goal is to gather in one place the questions that are most likely to be asked at an interview. As a novice tester, I have already gained some experience preparing for interviews for this position, and I can see that even specialized QA forums do not achieve this goal well, and often do not set it for themselves at all.

The list of questions is, of course, not final and does not claim to be exhaustive; it serves only as a guide for specialists preparing for software testing interviews.

Now, the questions themselves:
  1. Explain the term "software life cycle."
    The software life cycle (SLC) is the period of time that begins with the advent of the software concept and ends when the use of the software is no longer possible. The software life cycle typically includes the following steps: concept, requirements description, design, implementation, testing, installation and commissioning, operation and support, and, sometimes, decommissioning phase. These phases may overlap or be iterated.

  2. Explain the term "software development life cycle."
    The software development life cycle (SDLC) is a concept that describes a set of activities carried out at each stage (phase) of software development.

  3. Explain the advantage of using the software development life cycle (SDLC) model.
    • it provides a framework for the project (methodology, activities, etc.);
    • it gives visibility into how the project is being implemented;
    • it helps the company complete the project effectively and successfully (lower costs, shorter development and testing time, higher quality of the final product);
    • it reduces the risks associated with the software development process;
    • it provides a dedicated mechanism for tracking project progress.


  4. What are the main phases of the software development life cycle model?
    1. Decision (idea) that the software needs to be created;
    2. Requirements collection and analysis;
    3. Design (Systems and software) based on requirements;
    4. Coding based on system design;
    5. Testing;
    6. Implementation in the user environment;
    7. Maintenance (including fixing errors found in the user environment);
    8. Withdrawal from use (rarely);


  5. Explain what Quality Assurance is.
    Quality Assurance (QA) is a set of measures covering all technological stages of development, production and operation of software and undertaken at different stages of the software life cycle to ensure the required level of quality of the product.

    Quality assurance is defined in ISO 9000:2005, "Quality management systems. Fundamentals and vocabulary", as "part of quality management aimed at creating confidence that quality requirements will be met."

    Quality management is defined in the same standard as "coordinated activities to direct and control an organization with regard to quality," and the accompanying note states that it "usually includes the development of quality policies and objectives, quality planning, quality control, quality assurance and quality improvement."

  6. Explain what Quality Control is.
    Quality control (QC) is a set of actions performed on the software during development in order to obtain information about its current state: its readiness for release, its compliance with the documented requirements, and its conformance to the declared quality level.

  7. Explain what software testing is.
    Software testing is a process that includes all life cycle activities, both static and dynamic, concerned with the planning, preparation and evaluation of a software product and related work products, in order to determine that they satisfy the specified requirements, to demonstrate that they are fit for purpose, and to detect defects.

    From this definition it becomes clear that software testing involves two different processes:
    Validation: confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]
    Verification: confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. [ISO 9000]

  8. What are the main goals of software testing?
    The purpose of testing (test objective, test target) is the reason or goal of developing and executing a test.

    Basic goals:
    • help rid the software of errors (100% coverage cannot be guaranteed, but you must do everything possible and ensure that obvious errors are corrected);
    • ensure that the software meets the original requirements and specifications;
    • give users, customers and other stakeholders confidence in the software.


  9. When should I start testing software?
    The simple answer is as soon as possible! In more detail:
    • when testing is carried out at an early stage, you can still easily influence the design, since changing it at this stage is far cheaper than at later stages;
    • likewise, the earlier an error is detected, the cheaper it is for the company to fix;
    • testing can also begin before a working build is available (static testing), which is really important because it reduces the effort needed in the dynamic testing phase. There is an opinion that many errors found during dynamic testing could, and should, have been caught during static testing;
    • testing at the early stages (studying requirements, specifications, business cases, etc.) gives the tester more knowledge about the software and helps identify logical and technical errors that would otherwise affect the software, its final design and its cost.


  10. When should I finish software testing?
    The simple answer is a management decision that is most likely to be made based on:
    • test coverage;
    • risk analysis;
    • deterioration testing.

    For a deeper look at this question, see the excellent article by Michael Bolton on his blog, with its stopping heuristics such as the "piñata" and the "dead horse."

  11. What are the main levels of software testing?
    1. Component / module testing (unit testing) is the testing of individual software components [IEEE 610] (see the sketch after this list);
    2. Integration testing (integration testing) is testing performed to detect defects in interfaces and in the interaction between integrated components or systems.

      You should also understand what big-bang, top-down, bottom-up and incremental integration testing are;

    3. System testing (system testing) is the process of testing the system as a whole in order to verify that it meets the established requirements;
    4. Acceptance testing is formal testing against the user's needs, requirements and business processes, conducted to determine whether a system satisfies the acceptance criteria (the exit criteria that a component or system must meet in order to be accepted by a user, customer or other authorized entity) [IEEE 610]. Several kinds of acceptance testing are worth remembering:
      • user acceptance testing;
      • factory acceptance testing is acceptance testing conducted at the developer's site by the supplier's staff, to determine whether a component or system satisfies the requirements, normally covering both software and hardware;
      • site acceptance testing is acceptance testing performed by users or the customer at their own site, to determine both compliance with the business process and that the system or component satisfies the user's or customer's needs; it usually covers both the software and the underlying technical infrastructure;
      • operational acceptance testing is operational testing in the acceptance test phase, typically performed by users and/or administrators in a (possibly simulated) production environment, focusing on operational aspects such as recoverability, resource behavior, installability and technical compliance.

    5. Alpha testing is simulated or actual operational testing by potential users / customers or an independent test team at the developer's site, but outside the development organization. Alpha testing is often used for off-the-shelf (boxed) software as a form of internal acceptance testing;
    6. Beta testing is operational testing by potential and / or existing customers at external sites not otherwise involved with the developers, to determine whether a component or system really satisfies the customer's needs and fits into their business processes. Beta testing is often employed as a form of external acceptance testing of off-the-shelf software in order to acquire feedback from the market;
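
    To make the component (unit) level concrete, here is a minimal sketch of a unit test in Python using the standard unittest module; the add function under test is invented purely for illustration.

      import unittest

      def add(a, b):
          # Hypothetical component under test.
          return a + b

      class AddTest(unittest.TestCase):
          # A component/unit test exercises one small piece of code in isolation.
          def test_adds_two_numbers(self):
              self.assertEqual(add(2, 3), 5)

          def test_adds_negative_numbers(self):
              self.assertEqual(add(-1, -1), -2)

      if __name__ == "__main__":
          unittest.main()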


  12. What are entry criteria?
    Entry criteria are the set of general and specific conditions that allow a process to proceed with a defined task, for example the test phase. The purpose of entry criteria is to prevent a task from starting when doing so would require more (wasted) effort than the effort needed to remove the failed entry criteria.

    Simply put, for you, as a future tester, the entry criteria should be understood as the basic conditions that must be met before you and your team can begin testing.

  13. Give a few examples that explain the entry criteria for software testing.
    • all defects from earlier stages (e.g. design) have been closed and verified;
    • the code has been verified with unit tests;
    • the basic software functionality is ready for testing;
    • documentation defining the requirements is available;
    • all testers are familiar with the software architecture;
    • all testers are familiar with the objectives of the project;
    • the test environment is ready;
    • builds are available for use;
    • the test plan and / or test cases have been approved.


  14. What are exit criteria?
    The exit criteria are a set of general and specific conditions, agreed upon in advance with the stakeholders, under which the process can be officially declared complete. The goal of the exit criteria is to prevent a task from being considered finished while parts of it are still incomplete. Exit criteria are used both for reporting and for planning when to stop testing.

    Simply put, just as the entry criteria determine when testing can begin, the exit criteria define when it ends and when the software is ready for the next stage of the life cycle (deployment, etc.).

  15. Give a few examples that explain exit criteria for software testing.
    • all areas of the software pre-marked as "risky" have been tested and that status has been lowered or removed;
    • all errors have been carefully documented and communicated to management / shareholders / customers;
    • all high-priority tests have been executed and marked as "Pass";
    • all requirements of the SRS (Software Requirements Specification) have been covered;
    • the STR (software test report) has been approved by the project owner;
    • the software architecture has been tested;
    • no serious or critical errors remain open;
    • 90-95% of all tests have been executed.


  16. Provide several tools that can be used to automate testing.
    Commonly named examples include Selenium WebDriver and Appium for UI automation, JUnit, TestNG and pytest for unit-level tests, Cucumber for behaviour-driven tests, and JMeter for load testing.

    In addition, it is worth looking at the results of a good survey on automated testing tools.
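
    As a small illustration, here is a minimal sketch of a UI check with Selenium WebDriver in Python (assuming the selenium package and a matching browser driver are installed; the page and the assertion are chosen arbitrarily for the example).

      from selenium import webdriver
      from selenium.webdriver.common.by import By

      driver = webdriver.Chrome()   # start a browser controlled by the test
      try:
          driver.get("https://example.com")   # open the page under test
          heading = driver.find_element(By.TAG_NAME, "h1").text
          assert "Example" in heading, f"unexpected heading: {heading}"
      finally:
          driver.quit()   # always release the browser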

  17. How would you explain a bug / defect / error in software?
    Any problem or flaw in the software that shows up as behavior such as:
    • the software crashes or displays an invalid message;
    • the software produces invalid results;
    • the software fails to behave as expected (any deviation from the expected result).


  18. Explain the verification process.
    The real question to which we are looking for the answer is: do we build the product correctly?

    The verification process is performed to ensure that the output of each stage of the software development life cycle (development, testing, etc.) is built according to the predefined requirements and specifications, without any deviations from them. (see number 7)

  19. Describe the various activities of the verification process.
    • analysis of various aspects of testing (time, resources, personnel, cost, testing tools, etc.);
    • coverage analysis: statement coverage is the percentage of executable statements exercised by a test suite out of their total number; condition coverage is the percentage of condition outcomes exercised by a test suite (100% condition coverage requires every individual condition in every decision expression to be evaluated as both True and False); decision coverage is the percentage of decision outcomes exercised by a test suite (100% decision coverage implies 100% branch coverage and 100% statement coverage);
    • a review is an evaluation of the state of a product or project aimed at identifying discrepancies from the planned results and at proposing improvements. Examples of reviews include management review, informal review, technical review, inspection and walkthrough;
    • a walkthrough is a step-by-step presentation of a document by its author in order to gather information and establish a common understanding of its content;
    • an inspection is a type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards or non-conformance with higher-level documentation. It is the most formal review technique and is therefore always based on a documented procedure.


  20. Give verification examples depending on the levels of testing. (see number 11)
    1. Unit testing
      - checking that the software implements the design correctly.
    2. Integration testing (integration testing):
      - testing for integration between all relevant components before the software moves to the next level (system).
    3. System testing (system testing):
      - ensuring that the system meets predefined requirements and specifications.
    4. Acceptance testing:
      - make sure that the system meets the requirements of the client.


  21. Explain the validation process.
    The real question to which we are looking for the answer is: are we building the right product?

    A process that allows a tester to evaluate software after the development stage before transferring it to the customer. In this process, we must make sure that the software is developed based on user needs.

    Remember that validation covers the dynamic side of testing, where the actual software is executed and evaluated against the given SRS documentation.

  22. Give a few reasons that lead to bugs in the software.
    • human errors (design process and implementation process);
    • changing requirements while testing;
    • lack of understanding of requirements and specifications;
    • lack of time;
    • poor prioritization of testing;
    • poor version control (confusion between software builds / versions);
    • the complexity of the software itself.


  23. What is a test procedure?
    A document describing the sequence of actions to be performed during a test. Also known as a manual test script.

  24. What is a software component?
    Basically, software components are small software items built from still smaller units, which are in turn integrated together (classes, methods, stored procedures, binary trees, etc.).

  25. Explain code coverage.
    Code coverage is an analysis method that determines which parts of the software have been exercised (covered) by the test suite and which have not, for example statement coverage, decision coverage or condition coverage.
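
    A small sketch (with an invented classify function) shows why 100% statement coverage does not imply 100% decision coverage:

      def classify(x):
          label = "small"
          if x > 10:          # a decision with two outcomes: True and False
              label = "large"
          return label

      # This single test executes every statement (100% statement coverage),
      # because the if-branch is taken...
      assert classify(42) == "large"

      # ...but the False outcome of the decision is never exercised. A second
      # test is needed to reach 100% decision (branch) coverage.
      assert classify(3) == "small"

    In Python, a commonly used tool for measuring such coverage is coverage.py.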

  26. Explain code inspection.
    Code inspection, or code review, is a systematic examination of the program source code intended to detect and correct errors that went unnoticed during initial development. The purpose of the review is to improve the quality of the software product and the skills of the developer.
    During the inspection process, problems such as string formatting errors, race conditions, memory leaks and buffer overflows can be found and fixed, which also improves the security of the software product. Version control systems make collaborative code review possible, and there are also dedicated tools for collaborative code inspection.
    Automated code inspection software simplifies the task of reviewing large pieces of code by systematically scanning them for the most well-known vulnerabilities.

  27. What does the phrase code complete mean?
    A simple term related to a particular SDLC stage. By saying "the code is complete" we actually mean that its implementation is finished (all software functionality has been implemented). However, even when the code is fully implemented, new errors are still found during testing.

  28. What is a code walkthrough?
    A walkthrough is a review technique in which the programmer walks the testing team through the code, and the code is traced through several simple test scenarios in order to evaluate its logic and quality.

  29. What is debugging?
    Debugging is the process of searching, analyzing, and eliminating the causes of failures and errors in software.

    To understand where the error occurred, it is necessary:
    • find out the current values ​​of variables;
    • find out which way the program was run.

    There are two complementary debugging technologies.
    • Using debuggers - programs that include a user interface for step-by-step program execution: operator by operator, function by function, with stops on some lines of the source code or when a certain condition is reached.
    • Output of the current state of the program using output statements located at critical points of the program — on the screen, printer, loudspeaker, or to a file. Outputting debug information to a file is called logging.
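
    As a minimal sketch of the second technique, the standard Python logging module can record variable values and the execution path (the average function here is invented for illustration):

      import logging

      logging.basicConfig(level=logging.DEBUG,
                          format="%(levelname)s %(funcName)s: %(message)s")

      def average(values):
          # Log the current state so the execution path and variable values
          # can be inspected later without a step-by-step debugger.
          logging.debug("received values=%r", values)
          total = sum(values)
          logging.debug("total=%s, count=%s", total, len(values))
          return total / len(values)

      average([2, 4, 9])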


  30. What are an emulator and a simulator?
    Emulation is the reproduction of the work of a program or system (and not just some small part of it) while preserving its key properties and principles of operation. Emulation executes the program code in an environment familiar to that code, consisting of the same components as the emulated object.

    Simulation reproduces the work of the original program purely virtually, on the engine of a special tool (a course development tool, for example). A simulation only imitates the execution of the code rather than actually running it; everything is virtual, all "make-believe".

    Consequently:

    A software emulator is a full-featured counterpart of the original software, or its version, which may have a number of restrictions on the functionality, capabilities, and behavior of the software.

    Software simulator is a model of original software in which the logic of this software is implemented (partially or completely), software behavior is simulated, its interface is copied.

    In terms of the completeness of the functions and parameters taken into account, a simulator is narrower than an emulator: an object is emulated, while its properties, functions, or behavior are simulated.

  31. What is a software specification?
    A specification is a document describing what should be tested and with what test data, and stating what results the program is expected to produce. The test code obtains the actual results from the live code, and a test engine then checks the actual results against the specification.

    We obtain the specification from the customer by analyzing and studying their requirements and translating them to a qualitatively new, more detailed level at which the development team can work with them.
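
    A minimal sketch of that idea: the "specification" below is just a table of inputs and expected outputs, and a tiny test loop plays the role of the test engine (the discount rule is an invented example):

      def discount(total):
          # Implementation under test: hypothetical business rule,
          # 10% off for orders of 100 or more.
          return total * 0.9 if total >= 100 else total

      specification = [
          (50, 50),      # orders under 100 get no discount
          (100, 90.0),   # orders of 100 or more get 10% off
          (200, 180.0),
      ]

      for given, expected in specification:
          actual = discount(given)
          assert actual == expected, f"discount({given}): expected {expected}, got {actual}"
      print("All specification checks passed")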

  32. What is coding?
    Coding (coding) - is the process of writing software code, scripts, in order to implement a specific algorithm in a specific programming language.

    Some people confuse programming with coding. Coding is only one part of programming, alongside analysis, design, compilation, testing and debugging, and maintenance.

  33. What is a requirement?
    A requirement is a statement about the attributes, properties or qualities that the software system to be implemented must possess. Requirements are created during requirements development, as a result of requirements analysis.

    Requirements can be expressed in the form of textual statements and graphical models.

    In the classical engineering approach, the set of requirements is used at the software design stage. Requirements are also used in the software testing process, since tests are based on specific requirements.

    The requirements development phase may have been preceded by a feasibility study, or a conceptual phase of project analysis. The requirements development phase can be broken down into requirements identification (collection, understanding, consideration and clarification of the needs of stakeholders), analysis (integrity and completeness check), specification (requirements documentation) and validation.

  34. What is stability testing?
    The task of stability / reliability testing is to check that the application keeps working correctly during long (many hours) test runs under an average load level. The execution time of individual operations plays a minor role in this type of testing; what comes first is the absence of memory leaks, server restarts under load, and other issues that affect precisely the stability of operation.

  35. Tell us about the criticality (severity) of the bug and the generally accepted levels of such criticality.
    Criticality (severity) is the importance of the impact of a particular defect on the development or operation of a component or system.

    The severity of an error is determined by the tester who found the bug, but before assigning it they should answer the following questions:
    • How will this error affect the testing process?
    • How will this error affect the customer?
    • How does this error affect the system?
    • How does this error affect the timing of testing?
    • Does this error block other tests?
    • Etc.

    Each company can define its own scale for the degree of criticality (severity), but there are several levels that are used by almost all teams:
    • Blocker / show-stopper — the software or a specific component is unusable for work / testing (complete failure, system crash, etc.) and there is no workaround.
      Examples: the system crashes when the user presses the start button; the system does not start after a corrupted installation; the software shuts down because of hardware failures.
    • Critical — the main functionality does not work as it should; a workaround exists, but it may affect the integrity of the tests.
      Examples: the software crashes randomly while using various features; the software produces inconsistent results; the basic requirements cannot be confirmed.
    • Major — a failure of secondary functionality, with no effect on other components and with a quick and effective workaround available.
      Example: a user cannot use certain functionality directly, but can reach it through a different module.
    • Minor — a minor impact in a particular place; no workaround is needed and the integrity of the software is not affected.
      Examples: spelling errors, improvements, change requests.


  36. Tell us about the priority of the bug.
    The priority is the degree of importance assigned to the bug. In other words, it is determined how urgently this error should be corrected.

    Priority is a management tool, and before setting it, the manager should answer at least the following questions:
    • How does a bug affect the timing?
    • How does a bug affect the testing process?
    • How does a bug affect the work of other testers?
    • What are the costs necessary to eliminate the bug?
    • Should we change the software requirements?


  37. What is Build?
    A build is a software product assembled and prepared for use. Most often it is an executable file (a binary file containing the executable program code).

    Suppose the build version number looks like this: 1.35.6.2
    1. The first identifier is the major version number.
    2. The second identifier is an additional version number.
    3. The third identifier is the build number.
    4. The fourth identifier is the revision number.
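
    A minimal sketch of splitting such a version string into its four identifiers (the function name is arbitrary):

      def parse_build_version(version):
          # Split "major.minor.build.revision" into its four numeric parts.
          major, minor, build, revision = (int(p) for p in version.split("."))
          return {"major": major, "minor": minor, "build": build, "revision": revision}

      print(parse_build_version("1.35.6.2"))
      # {'major': 1, 'minor': 35, 'build': 6, 'revision': 2}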


  38. Can I start testing without a build?
    Of course, yes! There are two kinds of testing methodology (static and dynamic), and the static approach allows the tester to start working without a working build; moreover, the static approach is more cost-effective than the dynamic one.

  39. What is static analysis?
    Static analysis (static analysis) - is the analysis of software artifacts, such as requirements or software code, conducted without the execution of these software artifacts.

  40. What are a test driver and a test harness?
    A driver is a software component or test tool that replaces a component taking care of the control and/or the calling of a component or system.

    A test harness is a test environment comprising the stubs and drivers needed to execute a test.
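
    A minimal sketch of both ideas in Python (the checkout component and the payment gateway are invented for illustration): the stub stands in for a dependency that is not available, and the driver is the test code that calls the component.

      # Component under test: it depends on an external payment gateway.
      def checkout(cart_total, gateway):
          return gateway.charge(cart_total)

      # Stub: replaces the real gateway and returns a canned answer.
      class GatewayStub:
          def charge(self, amount):
              return amount > 0

      # Driver: test code that controls and calls the component, then checks the result.
      def test_checkout_with_stubbed_gateway():
          assert checkout(100, GatewayStub()) is True
          assert checkout(0, GatewayStub()) is False

      test_checkout_with_stubbed_gateway()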

  41. What is a traceability matrix?
    To measure requirements coverage, the product requirements must be analyzed and broken down into individual items. Each item is then linked to the test cases that check it. The combination of these links is the traceability matrix. By tracing the links, you can see exactly which requirements a given test case checks.

    Tests that are not tied to requirements are meaningless. Requirements not covered by tests are "white spots": even after executing all the created test cases, it is impossible to say whether such a requirement has been implemented in the product or not.
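
    A minimal sketch of a traceability matrix as a mapping from requirements to test cases (all identifiers are hypothetical); the "white spots" are simply the requirements whose list of test cases is empty:

      traceability_matrix = {
          "REQ-1": ["TC-1", "TC-2"],   # covered by two test cases
          "REQ-2": ["TC-3"],
          "REQ-3": [],                 # a "white spot": no test case covers it yet
      }

      uncovered = [req for req, cases in traceability_matrix.items() if not cases]
      print("Requirements without tests:", uncovered)   # ['REQ-3']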


  42. What is end-to-end testing?
    End-to-end testing is a type of testing in which the software is exercised (via scenarios that walk through the entire flow of execution) under conditions as close as possible to those a real user will have.

    In addition, such tests typically combine several realistic scenarios, for example:
    • launching the software in an environment with network communication delays;
    • running the software in an environment with scarce resources;
    • running the software on different server hardware;
    • running the software in the same environment as other applications that consume server resources.


  43. What is functional testing? What are the main types of functional testing? What types of functional tests do you know?
    Functional testing (functional testing) - is testing based on the analysis of the specification of the functionality of a component or system.

    Functional tests are based on the functions and features of the system, as well as its interactions with other systems, and can be performed at all levels of testing: component or unit testing, integration testing, system testing and acceptance testing. Functional types of testing examine the external behavior of the system.

    The main types of functional testing:
    1. Testing based on functional requirements and business processes. Functional requirements describe what the system must do; they are derived from the specification and from typical scenarios of using the system (use cases). Two perspectives are distinguished:
      • requirement-based testing uses the documented requirements as the basis for designing test cases (Test Cases): the list of requirements is analyzed, priorities are assigned based on importance, and test cases are then derived from and traced back to those requirements;
      • business-process-based testing evaluates the software from the point of view of the business processes it supports. Test scripts are built on typical day-to-day scenarios of using the system, usually described as use cases.

    2. Security testing is testing that evaluates how well the system protects data and maintains its intended functionality. The overall security strategy is based on three principles:
      • confidentiality (data is protected from unauthorized disclosure);
      • integrity (data is protected from unauthorized modification);
      • availability (the system remains accessible to authorized users);
      Typical kinds of vulnerabilities checked for:
      • xss (cross-site scripting) — an attack on web systems in which a malicious script is injected into a page that is then delivered to other users;
      • xsrf / csrf (request forgery) — an attack in which the victim's browser is tricked into sending an HTTP request to a vulnerable site where the user is already authenticated, so that an unwanted action (a money transfer, a password change, etc.) is performed on the user's behalf;
      • code injections (sql, php, asp, etc.) — injecting executable code through unvalidated input so that it is executed by the application or its database (see the sketch after this question);
      • server-side includes (ssi) injection — injecting server-side directives into HTML pages so that the server executes them;
      • authorization bypass — gaining access to resources or actions that should require privileges the attacker does not have.

    3. Interoperability testing is functional testing that checks the ability of the software to interact with one or more specified components or systems.

    Advantages and disadvantages of functional testing:
    • advantages: it imitates the actual use of the system;
    • disadvantages: the possibility of missing logical errors in the software, and the probability of redundant testing.
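
    To illustrate the code-injection class of vulnerabilities mentioned above, here is a minimal sketch using Python's built-in sqlite3 module; the table and the payload are invented for the example.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

      user_input = "' OR '1'='1"   # a classic injection payload

      # Vulnerable: the input is concatenated straight into the SQL statement,
      # so the payload rewrites the query and matches every row.
      vulnerable = "SELECT * FROM users WHERE name = '" + user_input + "'"
      print(conn.execute(vulnerable).fetchall())   # returns alice's row

      # Safer: a parameterized query treats the input purely as data.
      safe = "SELECT * FROM users WHERE name = ?"
      print(conn.execute(safe, (user_input,)).fetchall())   # returns []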


  44. What is non-functional testing?
    Non-functional testing describes the tests needed to determine the characteristics of software that can be measured by various quantities (performance, reliability, usability, security, etc.). In general, it is testing of how the system works rather than what it does.

  45. What types of non-functional testing do you know?
    1. Portability testing checks how easily a component or system can be transferred from one hardware or software environment to another.
    2. Interoperability testing — testing the capability of the software to interact with other components or systems.
    3. Performance testing — testing conducted to determine the performance characteristics of the software (response time, throughput) under a given load.
    4. Reliability testing — testing the ability of the software to perform its required functions under stated conditions for a specified period of time or number of operations.
    5. Usability testing — testing to determine the extent to which the software is understandable, easy to learn, easy to operate and attractive to users.
    6. Security testing (safety testing) — testing to determine how well the software product protects data and maintains its functionality.
    7. Stress testing — testing that evaluates the behavior of the system at or beyond the limits of its anticipated workloads, or with reduced resources such as memory or disk space.
    8. Load testing — measuring the behavior of the system under increasing load (the number of concurrent users and/or operations) in order to determine what load the system can withstand.


  46. ?
    — , .

    , , , . .

    :
    • ;
    • ;
    • ;
    • ;


  47. ?
    — () , . .

    :
    • ;
    • ;
    • - ;
    • ;


  48. What is conversion testing?
    Conversion testing (migration testing) is the testing of programs or procedures used to convert data from existing systems for use in replacement systems.

  49. What is compliance testing?
    Compliance testing checks whether the system meets the requirements of applicable standards (IEEE, W3C, etc.) and regulations.



Source: https://habr.com/ru/post/257529/

