
ISTQB CTFL 2018: Testing Throughout the Software Life Cycle

  1. Testing Levels: Integration Testing

Now that each unit has been proven to work correctly on its own, the next level is to put those units together to build the system. This is what we call integration. At this stage, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B, they are interested in testing the communication between those modules, not the functionality of the individual modules, as that was already done during component testing.
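As a concrete illustration, here is a minimal sketch of a component integration test in Python (the OrderService and InventoryService classes are invented for illustration, not from the syllabus). Note that the test exercises the interface between the two components, not their internal logic:

```python
class InventoryService:
    def __init__(self):
        self._stock = {"SKU-1": 5}

    def reserve(self, sku: str, quantity: int) -> bool:
        """The interface that OrderService calls."""
        available = self._stock.get(sku, 0)
        if available >= quantity:
            self._stock[sku] = available - quantity
            return True
        return False


class OrderService:
    def __init__(self, inventory: InventoryService):
        self.inventory = inventory

    def place_order(self, sku: str, quantity: int) -> str:
        # The integration point: OrderService must pass the right arguments
        # and interpret the boolean result correctly.
        return "CONFIRMED" if self.inventory.reserve(sku, quantity) else "REJECTED"


def test_order_service_talks_to_inventory_correctly():
    orders = OrderService(InventoryService())
    assert orders.place_order("SKU-1", 3) == "CONFIRMED"
    assert orders.place_order("SKU-1", 3) == "REJECTED"  # only 2 items left
```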

Thus, examples of work products that can be used as a test basis for integration testing include software and system design, sequence diagrams, interface and communication protocol specifications, use cases, architecture at component or system level, workflows, and external interface definitions. The test objects are typically subsystems, databases, infrastructure, interfaces, APIs and microservices. Objectives of integration testing: integration testing focuses on interactions between components or systems. Its objectives include reducing risk; verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified; building confidence in the quality of the interfaces; finding defects, which may be in the interfaces themselves or within the components or systems; and preventing defects from escaping to higher test levels.

As with component testing, in some cases automated integration regression tests provide confidence that changes have not broken existing interfaces, components or systems. There are two different sub-levels of integration testing described in this syllabus. Component integration testing focuses on the interactions and interfaces between integrated components and is performed after component testing. This type of integration testing is usually carried out by developers and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process. System integration testing focuses on the interactions and interfaces between systems, packages and microservices, so we are talking about bigger test objects here. System integration testing can also cover interactions with, and interfaces provided by, external organizations, for example web services. To use the example of the car, system integration is done when the whole car is already assembled and tested and you want to try the car with another system, say a camper.

The car works perfectly fine by itself and the camper works fine by itself, and now we want to try the two together, especially the hitch, which connects the car to the camper. This is system integration testing. For an example from the software industry, a trading system of an investment bank will interact with the stock exchange to get the latest prices for its stocks and shares on the international market. This type of integration testing is usually carried out by testers. In this case, the developing organization doesn't control the external interfaces, which can create various challenges for testing, for example ensuring that test-blocking defects in the external organization's code are resolved, arranging for test environments, and so on.
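To make the external-interface challenge concrete, here is a hedged sketch (TradingSystem and the stub are invented for illustration): because the developing organization doesn't control the exchange's interface, a test double, or a sandbox endpoint the exchange provides, often stands in for it during testing:

```python
class StubStockExchange:
    """Test double simulating the external exchange's price feed."""

    def latest_price(self, symbol: str) -> float:
        prices = {"ACME": 101.25}
        if symbol not in prices:
            raise KeyError(f"unknown symbol: {symbol}")
        return prices[symbol]


class TradingSystem:
    def __init__(self, exchange):
        self.exchange = exchange

    def quote(self, symbol: str) -> str:
        try:
            return f"{symbol}: {self.exchange.latest_price(symbol):.2f}"
        except KeyError:
            # Unhandled communication failures at this boundary are a
            # classic system integration defect; handle them explicitly.
            return f"{symbol}: UNAVAILABLE"


def test_trading_system_handles_exchange_responses():
    system = TradingSystem(StubStockExchange())
    assert system.quote("ACME") == "ACME: 101.25"
    assert system.quote("NOPE") == "NOPE: UNAVAILABLE"
```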

System integration testing may be done after system testing or in parallel with ongoing system test activities, in both sequential and iterative/incremental development. Ideally, testers should understand the architecture and influence integration planning.

If integration tests are planned before the components or systems are built, those components can be built in the order required for the most efficient testing. Typical defects and failures for component integration testing include: incorrect data, missing data, or incorrect data encoding; incorrect sequencing or timing of interface calls; interface mismatch; failures in communication between components; unhandled or improperly handled communication failures between components; and incorrect assumptions about the meaning, units or boundaries of the data being passed between components.
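The last item deserves a concrete (invented) illustration: below, one component returns a price in cents while the other assumes dollars, a disagreement about units at the interface that neither component's own unit tests would catch:

```python
def price_in_cents(sku: str) -> int:
    """Component A: returns the price in CENTS."""
    return 1999  # i.e., $19.99


def format_invoice_line(sku: str, price: float) -> str:
    """Component B: assumes the price is already in DOLLARS."""
    return f"{sku}: ${price:.2f}"


# Defective integration: cents passed where dollars are expected.
print(format_invoice_line("SKU-1", price_in_cents("SKU-1")))        # SKU-1: $1999.00 (wrong)

# Correct integration: convert units at the interface boundary.
print(format_invoice_line("SKU-1", price_in_cents("SKU-1") / 100))  # SKU-1: $19.99
```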

Examples of typical defects and failures for system integration testing include: inconsistent message structures between systems; incorrect data, missing data, or incorrect data encoding; interface mismatch; failures in communication between systems; unhandled or improperly handled communication failures between systems; incorrect assumptions about the meaning, units or boundaries of the data being passed between systems; and failure to comply with mandatory security regulations. Specific approaches and responsibilities for integration testing: functional, non-functional and structural test types are all applicable; we will explain those later. In addition, there are various integration strategies. The first one is big bang integration, where we integrate a bunch of units together in one single step, resulting in a complete system. The problem with this kind of integration is that it only looks like building the system faster.

But in real life, it's much more time-consuming, either because we would have to wait until we have a bunch of units ready to integrate, or because, when testing of this system is conducted, it's difficult to isolate any errors found: if an error appears after several units have been added at once, it's hard to know which unit caused it. On the other hand, there are more systematic integration strategies that are usually based on the system structure, for example top-down integration or bottom-up integration, or integration based on functional tasks, transaction processing sequences, or some other aspect of the system or components. In such systematic strategies, where we usually integrate the components one by one, it's much easier to isolate and detect defects, as sketched below. Integration should normally be incremental, meaning a small number of additional components or systems at a time, rather than big bang. The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting. This is one reason that continuous integration, where software is integrated on a component-by-component basis (i.e., functional integration), has become common practice. Such continuous integration often includes automated regression testing, ideally at multiple test levels. Finally, risk analysis of the most complex interfaces can help to focus the integration testing.
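Here is a minimal sketch of incremental (in this case top-down) integration, with invented names: the top-level component is integrated first, and a stub stands in for a lower-level component that is not ready yet, so any defect found must lie in the new component or in its use of the interface:

```python
class PaymentGatewayStub:
    """Stub for the not-yet-integrated payment component."""

    def charge(self, amount: float) -> bool:
        return True  # canned response


class CheckoutController:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount: float) -> str:
        return "PAID" if self.gateway.charge(amount) else "FAILED"


def test_checkout_with_stubbed_gateway():
    # Any failure here is isolated to CheckoutController or its interface
    # usage, which is exactly why incremental integration eases debugging.
    assert CheckoutController(PaymentGatewayStub()).checkout(10.0) == "PAID"
```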

  2. Testing Levels: System Testing

Objectives of system testing: now that we know all the units are working together, the next step is to consider the behavior and capabilities of the whole system from an end-to-end perspective. This activity is called system testing. The objectives of system testing include: reducing risk; verifying whether the functional and non-functional behaviors of the system are as designed and specified; validating that the system is complete and will work as expected; building confidence in the quality of the system as a whole; finding defects; and preventing defects from escaping to higher test levels or production. For certain systems, verifying data quality may also be an objective of system testing.

The test environment, which is the hardware and software used as an environment for testing, should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found in testing. System testing is concerned with testing all the scenarios that the system might go through. Thinking of all the possible scenarios is tricky and needs a good understanding of the system's domain and its potential users.

Most often, it's carried out by specialist testers who form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager. As with component testing and integration testing, in some cases automated system regression tests provide confidence that changes have not broken existing features or end-to-end capabilities. System testing often produces information that is used by stakeholders to make release decisions. System testing may also satisfy legal or regulatory requirements or standards. Examples of work products that can be used as a test basis for system testing include: system and software requirement specifications (functional and non-functional), risk analysis reports, use cases, epics and user stories, models of system behavior, state diagrams, and system and user manuals.

Epics are bigger user stories: a group of related user stories. Test objects: typical test objects for system testing include applications, hardware/software systems, operating systems, the system under test (SUT), system configuration, and configuration data. Typical defects and failures: examples of typical defects and failures for system testing include incorrect calculations; incorrect or unexpected system functional or non-functional behavior; incorrect control and/or data flows within the system; failure to properly and completely carry out end-to-end functional tasks; failure of the system to work properly in the production environment; and failure of the system to work as described in the system and user manuals. Finally, the specific approaches and responsibilities related to system testing.

System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional and non-functional. System testing should use the most appropriate techniques for the aspects of the system to be tested; we will learn about various techniques in the Test Design Techniques section. Independent testers typically carry out system testing. Defects in specifications, for example missing user stories or incorrectly stated business requirements, can lead to misunderstandings or disagreements about expected system behavior. Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively. Early involvement of testers in user story refinement or in static testing activities such as reviews helps to reduce the incidence of such situations.
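As a toy illustration of the end-to-end emphasis (the WebShop class is invented), a system-level test drives a complete business task through the system rather than probing a single interface:

```python
class WebShop:
    """Stand-in for a whole system: cart plus order processing."""

    def __init__(self):
        self.cart, self.orders = [], []

    def add_to_cart(self, item: str):
        self.cart.append(item)

    def checkout(self) -> int:
        order_id = len(self.orders) + 1
        self.orders.append((order_id, list(self.cart)))
        self.cart.clear()
        return order_id


def test_end_to_end_purchase_flow():
    shop = WebShop()
    shop.add_to_cart("book")
    shop.add_to_cart("pen")
    order_id = shop.checkout()
    # End-to-end check: the whole task completed, not just one call.
    assert order_id == 1
    assert shop.orders == [(1, ["book", "pen"])]
    assert shop.cart == []  # cart emptied after checkout
```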

  3. Testing Levels: Acceptance Testing

Acceptance testing is simply a form of yes-or-no testing: should we accept the software, yes or no? So acceptance testing should answer the question: can the system be released and deployed, or not? Objectives of acceptance testing: acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product. Objectives of acceptance testing include: establishing confidence in the quality of the system as a whole; validating that the system is complete and will work as expected; and verifying that functional and non-functional behaviors of the system are as specified and designed. So remember that the main objective of acceptance testing is acceptance.

Yes or no. Defects may be found during acceptance testing, but finding defects is often not an objective of acceptance testing, whereas the main objective of component, integration and system testing is to find defects. Therefore, we say that test levels have different objectives. Acceptance testing may produce information to assess the system's readiness for deployment and use by the customer or end user. Finding a significant number of defects during acceptance testing may in some cases be considered a major project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards. Examples of work products that can be used as a test basis for any form of acceptance testing include: business processes; user or business requirements; regulations, legal contracts and standards; use cases; system requirements; system or user documentation; installation procedures; and risk analysis reports. Typical test objects for any form of acceptance testing include: the system under test; system configuration and configuration data; business processes for a fully integrated system; recovery systems and hot sites for business continuity and disaster recovery testing; operational and maintenance processes; forms; reports; and existing and converted production data.

Typical defects and failures found in any form of acceptance testing include: system workflows that do not meet business or user requirements; business rules that are not implemented correctly; the system not satisfying contractual or regulatory requirements; and non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform. Specific approaches and responsibilities: acceptance testing is often the responsibility of the customers. Business users, product owners or operators of a system, and other stakeholders may be involved as well. Acceptance testing is often thought of as the last test level in a sequential development life cycle. But remember that acceptance testing is a yes-or-no kind of testing, so anywhere we test something to get a yes-or-no answer, it's a form of acceptance testing, which may therefore also occur at other times. For example, acceptance testing of a COTS (commercial off-the-shelf) software product may occur when it's installed or integrated, and acceptance testing of a new functional enhancement may occur before system testing. Common forms of acceptance testing include the following: user acceptance testing, operational acceptance testing, contractual and regulatory acceptance testing, and alpha and beta testing. Let's explain each one of them in detail.

First, user acceptance testing, or UAT. The acceptance testing of the system by users is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost and risk. Users can perform any test they wish, usually based on their business processes, and UAT can happen at any time during the project. So yes, you can demo the software to the customer in the middle of the project, and they can actually decide not to continue if they want. And of course, UAT usually happens before the final user sign-off.
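A minimal sketch of the UAT mindset (the LedgerApp class and scenario are invented): the check is phrased around a business process a real user performs, not around internal interfaces:

```python
class LedgerApp:
    """Stand-in for the system under acceptance."""

    def __init__(self):
        self.entries = []

    def register_payment(self, customer: str, amount: float):
        self.entries.append({"customer": customer, "amount": amount})

    def daily_total(self) -> float:
        return sum(e["amount"] for e in self.entries)


def test_clerk_can_register_payments_and_balance_the_day():
    # Acceptance criterion: a clerk can record the day's payments and the
    # ledger balances -- a business process, stated in the user's terms.
    app = LedgerApp()
    app.register_payment("Alice", 120.0)
    app.register_payment("Bob", 80.0)
    assert app.daily_total() == 200.0
```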

Operational acceptance testing, or OAT, tests the system for acceptance from the system administrator's point of view. It's usually done by operations or system administration staff and is usually performed in a simulated production environment. The tests focus on operational aspects and may include testing of backup and restore; installing, uninstalling and upgrading; disaster recovery; user management; maintenance tasks; data load and migration tasks; checks for security vulnerabilities; and performance testing. The main objective of operational acceptance testing is building confidence that the operators or system administrators can keep the system working properly for the users in the operational environment, even under exceptional or difficult conditions. Besides the test bases we have mentioned before, one or more of the following work products can be used for deriving test cases for operational acceptance testing: backup and restore procedures, disaster recovery procedures, non-functional requirements, operations documentation, deployment and installation instructions, performance targets, database packages, and security standards or regulations.

The third type of acceptance testing is contractual and regulatory acceptance testing. First, let's talk about contractual acceptance testing. Sometimes the criteria for accepting a system are documented in a contract. If a customer buys a COTS software product but adds a minor requirement, say adding the company name and logo to the first display screen, then I guess you will agree with me that we don't need to change the requirements document for this and go through the software development life cycle again. Such a minor change could be documented in the contract, and that's it. Testing is then conducted to check that these criteria have been met before the system is accepted. Contractual acceptance testing is often performed by users or by independent testers. Regulatory acceptance testing: in some industries, systems must meet governmental, legal or safety standards.

Examples of these are the defense, banking and pharmaceutical industries. For example, if a software company wants to add the ISO logo to its software, then it should follow ISO guidelines or regulations in developing the software. Testing is then conducted to check whether we pass the ISO guidelines or not. Yes or no? Regulatory acceptance testing is often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies. The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved. Finally, alpha and beta testing. Some systems are developed for the mass market, for example commercial off-the-shelf software (COTS), where there is no actual customer who provided you with requirements; instead, the marketing team of your company suggested some requirements, and you built the system based on those suggestions. Still, you want feedback on how the users who will buy the system feel when using it, before the software product is put on the market.

Very often, this type of system undergoes two stages of acceptance testing. The first is alpha testing, which takes place at the developer's site: we invite some potential users to our site, using our machines, let them play with the software for a while, and get feedback from them. The second is beta testing, which takes place at the customers' sites: we send the software to some potential users, ask them to play with the software on their own machines, and ask them to send us feedback when they are done. Beta testing may come after alpha testing, or may occur without any preceding alpha testing. So remember: alpha is done at the developer's site, and beta is done at the customer's site. By the way, when you download a beta version of some software from the net, that means you are currently a beta tester. One objective of alpha and beta testing is building confidence among potential or existing customers and/or operators that they can use the system under normal, everyday conditions and in the operational environments, to achieve their objectives with minimum difficulty, cost and risk.

Another objective may be the detection of defects related to the conditions and environments in which the system will be used, especially those conditions and environments that are difficult to replicate by the development team. So I want you to give it a thought: which is better from your point of view, alpha or beta testing? Actually, the answer is both. Each one of them has its pros and cons, so both are good; you cannot say one is better than the other. In iterative development, project teams can employ various forms of acceptance testing during and at the end of each iteration, such as those focused on verifying a new feature against its acceptance criteria and those focused on validating that a new feature satisfies the users' needs. In addition, alpha tests and beta tests may occur at the end of each iteration, after the completion of each iteration, or after a series of iterations. Likewise, user acceptance tests, operational acceptance tests, regulatory acceptance tests and contractual acceptance tests may occur at the close of each iteration, after the completion of each iteration, or after a series of iterations.

  4. Test Types

In the last few lectures, we learned that each test level has specific testing objectives. In this lecture, we will look at the types of testing required to meet those objectives. We need to think about different types of testing, because testing the functionality of the component or system alone may not be sufficient to meet the overall test objectives. A test type is a group of test activities aimed at testing specific characteristics of a software system, or a part of a system, based on specific test objectives. Such objectives may include: evaluating functional quality characteristics, such as completeness, correctness and suitability; evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility and usability; evaluating whether the structure or architecture of a component or system is correct, complete and as specified; and evaluating the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for unintended changes in behavior resulting from software or environmental changes (regression testing). Let's look at each one of them in detail.

Functional testing: the function of a system or component is what it does. The function of a system is typically described in the requirements specification or in use cases. Some functions are assumed and not necessarily requested by the customer directly, such as copy and paste, which should be implemented in any system even if the customer didn't ask for it explicitly. Functional tests are based on these functions, described in documents or understood by the testers. Another type of functional testing is interoperability testing, which evaluates the capability of the software product to interact with one or more specified components. Functional tests should be performed at all test levels. The thoroughness of functional testing can be measured through functional coverage. Functional coverage is the extent to which some type of functional element has been exercised by tests, expressed as a percentage of the type or types of elements being covered, as sketched below.
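A small sketch of how such a percentage might be computed from a traceability matrix (the requirement IDs and test names are invented):

```python
# Requirements and a tests-to-requirements traceability matrix (invented).
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
tests_to_requirements = {
    "test_login": {"REQ-1"},
    "test_checkout": {"REQ-2", "REQ-3"},
}

# Functional coverage = covered requirements / total requirements.
covered = set().union(*tests_to_requirements.values())
coverage = len(covered & requirements) / len(requirements) * 100

print(f"Functional coverage: {coverage:.0f}%")             # Functional coverage: 75%
print(f"Coverage gaps: {sorted(requirements - covered)}")  # Coverage gaps: ['REQ-4']
```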

For example, using traceability between tests and functional requirements, the percentage of those requirements which are addressed by testing can be calculated, potentially identifying coverage gaps. Functional test design and execution may involve special skills or knowledge, such as knowledge of the particular business problem the software solves (for example, geological modeling software for the oil and gas industries) or of the particular role the software serves (for example, computer gaming software that provides interactive entertainment). The techniques used for functional testing are often black-box based, to derive test conditions and test cases from the functionality of the software or system, but other techniques can also be used; we will talk more about those techniques in the Design Techniques section. Non-functional testing: imagine an accounting system where every function works perfectly well, but it's very slow; you might need hours to print a report. Such a system is still not usable, even though it performs its functions quite well. That's why we need the second type of testing, testing of the quality characteristics, or non-functional testing. Here we are interested in how things are done. The functional attribute (what it does) might be "print a report"; the non-functional attribute (how it's done) might be "in two minutes". Non-functional testing includes, but is not limited to: performance testing (how many users can connect to the system, and how will that affect the performance of the software?); load testing (how will the system perform if we do a single transaction many times?); and stress testing (how will the system perform under very tough circumstances?).

Many users, many transactions, low memory, and so on. Usability testing: is the system easy to use? Maintainability testing: is the system easy to maintain if we need to fix a defect? Reliability testing: is the system reliable, or does it eventually crash? Portability testing: is the system easy to port from one platform to another? Security testing: is the system secure enough that no hackers can break into the system and get unauthorized data? Security testing used to be a functional testing attribute in previous versions of the ISTQB syllabus, but in this version it's a non-functional attribute.

Refer to the ISO standard ISO 25010 for a classification of software product quality characteristics, if you are interested in learning more about it. A non-functional requirement can be stated in the requirements document as a direct customer request (for example, the system must support 1000 users simultaneously) or as an industry standard (for example, the web page response time should be less than 6 seconds). We will talk more about some non-functional attributes in the Tools section of this course. Contrary to common misconceptions, non-functional testing can, and often should, be performed at all test levels and done as early as possible. The late discovery of non-functional defects can be extremely dangerous to the success of the project.
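Here is a hedged sketch of a simple automated check against the 6-second response time example above (handle_request is a stand-in for whatever operation is measured; real performance and load testing would use a dedicated tool and many concurrent users):

```python
import time


def handle_request() -> str:
    """Stand-in for the operation being measured."""
    time.sleep(0.2)  # simulated work
    return "page"


def test_response_time_is_under_six_seconds():
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    assert elapsed < 6.0, f"response took {elapsed:.2f}s, limit is 6s"
```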

Test design techniques can be used to derive test conditions and test cases for non-functional testing. The thoroughness of non-functional testing can be measured through non-functional coverage. Non-functional coverage is the extent to which some type of non-functional element has been exercised by tests, expressed as a percentage of the type or types of elements being covered. For example, using traceability between tests and supported devices for a mobile application, the percentage of devices which are addressed by compatibility testing can be calculated, potentially identifying coverage gaps. Non-functional test design and execution may involve special skills or knowledge, such as knowledge of the inherent weaknesses of a design or technology (for example, security problems associated with particular programming languages) or of the particular user base (for example, the personas of users of healthcare facility management systems). White box testing: we will talk more deeply about white box testing in a future section. White box testing simply means that we use information about the system's internal structure or implementation, and how the software is constructed, to test the software. Internal structure may include code, architecture, workflows and/or data flows within the system.

So for example, if you as a tester know the components of the system's database or how a specific function is implemented, then you can use this to your advantage with white box testing. The most famous kind of white box testing is when you have the source code written by the developers. Again, the thoroughness of white box testing can be measured through structural coverage. Structural coverage is the extent to which some type of structural element has been exercised by tests, expressed as a percentage of the type of element being covered. At the component testing level, code coverage is based on the percentage of component code that has been tested, and may be measured in terms of different aspects of code coverage items, such as the percentage of executable statements tested in the component or the percentage of decision outcomes tested. These types of coverage are collectively called code coverage. At the component integration testing level, white box testing may be based on the architecture of the system, such as interfaces between components, and structural coverage may be measured in terms of the percentage of interfaces exercised by tests. White box test design and execution may involve special skills or knowledge, such as the way the code is built (for example, to use code coverage tools), how data is stored (for example, to evaluate possible database queries), and how to use coverage tools and correctly interpret their results.
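A hypothetical white-box illustration of the statement versus decision coverage distinction just mentioned: the first test alone executes every statement of apply_discount (100% statement coverage) but exercises only the True outcome of the decision (50% decision coverage); the second test is needed to cover the False outcome:

```python
def apply_discount(total: float, is_member: bool) -> float:
    if is_member:        # the decision under test
        total *= 0.5     # the only statement inside the branch
    return total


def test_member_gets_half_price():
    # Covers all statements, but only the True decision outcome.
    assert apply_discount(100.0, True) == 50.0


def test_non_member_pays_full_price():
    # Needed for full decision coverage: the False outcome.
    assert apply_discount(100.0, False) == 100.0
```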

Lastly, change-related testing. First, let's talk about confirmation testing, or retesting. When a test fails and we determine that the cause of the failure is a software defect, the defect is reported to the developer, and we can expect a new version of the software in which the defect has been fixed. In this case, we need to execute the test again to confirm that the defect has indeed been fixed. This is known as confirmation testing, also called retesting. When doing confirmation testing, it's important to ensure that the test is executed in exactly the same way as the first time, using the same inputs, data and environment. The software may also be tested with new tests if, for instance, the defect was missing functionality. At the very least, the steps to reproduce the failure caused by the defect must be re-executed on the new software version.

The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed or not. If the test now passes, does this mean that the software is now correct? Well, we now know that at least one part of the software is correct: where the defect was. However, this is not enough. The fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these unexpected side effects of fixes is to do regression testing.
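A small sketch tying the two change-related test types together (safe_divide and its defect are invented): suppose the first test originally failed, a fix was delivered, and we now re-run both the exact failing test and a neighboring test that previously passed:

```python
def safe_divide(a: float, b: float) -> float:
    if b == 0:
        return 0.0  # the delivered fix; previously this raised ZeroDivisionError
    return a / b


def test_divide_by_zero_returns_zero():
    # Confirmation test (retesting): the exact failing test, re-executed
    # with the same inputs, data and environment as the first run.
    assert safe_divide(5.0, 0.0) == 0.0


def test_normal_division_still_works():
    # Regression test: an area that already worked, re-checked to make
    # sure the fix did not break it.
    assert safe_divide(9.0, 3.0) == 3.0
```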

The word regress is the opposite of the word progress: progress is moving forward, so regress is moving backward. In regression testing, we move backward and test areas that were already working fine, to make sure they still work after any kind of change to the software. Such changes may include changes to the environment, such as a new version of an operating system or database management system. Confirmation testing and regression testing are performed at all test levels, especially in iterative and incremental development life cycles.

For example, new features, changes to existing features, and code refactoring result in frequent changes to the code, which also require change-related testing. Due to the evolving nature of the system, confirmation and regression testing are very important. This is particularly relevant for Internet of Things systems, where individual objects (for example, devices) are frequently updated or replaced. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation; automation of these tests should start early in the project.
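One common way to make a regression suite runnable unattended, sketched here with pytest (the marker name is our own choice and would need to be registered in pytest.ini to avoid warnings): tag regression tests so the whole suite can be run on every change, for example in continuous integration:

```python
import pytest


@pytest.mark.regression
def test_existing_login_still_accepts_valid_user():
    # Placeholder for a real, previously passing end-to-end check.
    assert "user@example.com".endswith("@example.com")


# Typical command-line invocations:
#   pytest -m regression         -> run only the tagged regression suite
#   pytest -m "not regression"   -> run everything else
```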