Practice Exams:

ISTQB CTFL-2018 – 2018: Testing Throughout The Software Life Cycle Part 2

  1. Test Levels vs Test Types

One last thing to talk about is the relationship between test levels and test types: what kinds of test types can be performed at each test level? Let’s consider a banking application and look at the relationships one by one, starting with functional tests. For component testing, tests are designed based on how a component should calculate compound interest. For component integration testing, tests are designed based on how account information captured at the user interface is passed through the business logic. For system testing, tests are designed based on how account holders can apply for a line of credit on their checking accounts. For system integration testing, tests are designed based on how the system uses an external microservice to check an account holder’s credit score. For acceptance testing, tests are designed based on how the banker handles approving or declining a credit application.
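For instance, a component test for the compound interest example above could look like the following minimal sketch. The `compound_interest` function, its formula and the expected values are illustrative assumptions for this course, not part of the syllabus example.

```python
import unittest


def compound_interest(principal: float, annual_rate: float, years: int,
                      compounds_per_year: int = 12) -> float:
    """Hypothetical component under test: final balance after compounding."""
    return principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)


class CompoundInterestComponentTest(unittest.TestCase):
    def test_one_year_monthly_compounding(self):
        # 1000 at a 12% nominal rate, compounded monthly for one year, is about 1126.83
        self.assertAlmostEqual(compound_interest(1000, 0.12, 1), 1126.83, places=2)

    def test_zero_rate_returns_principal(self):
        # With a 0% rate the balance must not change
        self.assertEqual(compound_interest(1000, 0.0, 5), 1000)


if __name__ == "__main__":
    unittest.main()
```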

The following are examples of non-functional tests. For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation. For component integration testing, security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic. For system testing, portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices. For system integration testing, reliability tests are designed to evaluate system robustness if the credit score microservice fails to respond. For acceptance testing, usability tests are designed to evaluate the accessibility of the banker’s credit processing interface for people with disabilities.
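As a rough illustration of the component-level performance example, here is a minimal sketch that checks a CPU-time budget for a hypothetical interest calculation. Measuring actual CPU cycles needs platform-specific tooling, so process CPU time is used as a stand-in, and both the `total_interest` function and the 0.5-second budget are assumptions invented for the example.

```python
import time


def total_interest(principal: float, monthly_rate: float, months: int) -> float:
    """Hypothetical 'complex' total interest calculation, iterated month by month."""
    balance = principal
    for _ in range(months):
        balance += balance * monthly_rate
    return balance - principal


def test_total_interest_cpu_budget() -> None:
    # Repeat the calculation many times and measure CPU time (not wall-clock time).
    start = time.process_time()
    for _ in range(1_000):
        total_interest(1_000.0, 0.01, 360)
    elapsed = time.process_time() - start
    assert elapsed < 0.5, f"CPU budget exceeded: {elapsed:.3f}s"


if __name__ == "__main__":
    test_total_interest_cpu_budget()
    print("performance check passed")
```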

The following are examples of white-box tests. For component testing, tests are designed to achieve complete statement and decision coverage for all components that perform financial calculations. We will talk more about those two types of coverage in a later section (and a small code sketch follows after these examples). For component integration testing, tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic. For system testing, tests are designed to cover sequences of web pages that can occur during a credit line application.

For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score microservice. For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers. Finally, the following are examples of change-related tests. For component testing, automated regression tests are built for each component and included within the continuous integration framework.

For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository. For system testing, all tests for a given workflow are re-executed if any screen in that workflow changes. For system integration testing, tests of the application interacting with the credit scoring microservice are executed daily as part of continuous deployment of that microservice. For acceptance testing, all previously failed tests are re-executed after a defect found in acceptance testing is fixed. While this section provides examples of every test type across every level, it’s not necessary for all software to have every test type represented across every level. However, it’s important to run applicable test types at each level, especially the earliest level where the test type occurs.
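To make the statement and decision coverage mentioned for component testing a bit more concrete, here is a minimal, hypothetical sketch: a tiny fee calculation and two tests that together execute every statement and exercise both the true and false outcome of each decision. The function, its rules and the fee amounts are invented purely for illustration.

```python
def overdraft_fee(balance: float, amount: float) -> float:
    """Hypothetical component: fee charged when a withdrawal overdraws the account."""
    if amount > balance:          # decision 1: overdraft at all?
        fee = 35.0
    else:
        fee = 0.0
    if amount > 2 * balance:      # decision 2: severe overdraft?
        fee += 15.0
    return fee


def test_no_overdraft() -> None:
    # Takes the False outcome of decision 1 and of decision 2.
    assert overdraft_fee(balance=100, amount=50) == 0.0


def test_severe_overdraft() -> None:
    # Takes the True outcome of decision 1 and of decision 2.
    assert overdraft_fee(balance=100, amount=250) == 50.0


if __name__ == "__main__":
    test_no_overdraft()
    test_severe_overdraft()
    print("statement and decision coverage achieved by two tests")
```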

  2. Maintenance Testing

Once the software is delivered and deployed, it could be in operation for years. During this period it may become necessary to change the system, and after any change we must test the system to make sure everything is still working fine. Testing that takes place on a system that is in operation in the live environment is called maintenance testing.

Maintenance testing is really different from regular testing of a system under development. There are users who expect the system to keep working and cannot wait long until you finish updating it, and there is also sensitive live data that you need to take care of while updating a live system. When any changes are made as part of maintenance, maintenance testing should be performed both to evaluate the success with which the changes were made and to check for possible side effects or regressions in parts of the system that remain unchanged, which is usually most of the system. Maintenance can involve planned releases and unplanned releases, which we call hot fixes, so we need to perform maintenance testing no matter how small or quick the fix is.

Triggers for maintenance: there are several reasons why software maintenance, and thus maintenance testing, takes place, both for planned and unplanned changes. We can classify the triggers for maintenance as follows. Modification, such as planned enhancements (for example, release-based), corrective and emergency changes, changes of the operational environment (such as planned operating system or database upgrades), upgrades of COTS software, and patches for defects and vulnerabilities. Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained. Retirement, such as when an application reaches the end of its life. When an application or system is retired, this can require testing of data migration or archiving if long data retention periods are required. Testing of restore/retrieve procedures after archiving for long retention periods may also be needed.

In addition, regression testing may be needed to ensure that any functionality that remains in service still works. For Internet of Things systems, maintenance testing may be triggered by the introduction of completely new or modified things, such as hardware devices and software services, into the overall system. The maintenance testing for such systems places particular emphasis on integration testing at different levels (for example, network level or application level) and on security aspects, in particular those relating to personal data. Now let’s talk about impact analysis for maintenance. Maintenance testing is a kind of art by itself, in my point of view. Unlike regular testing, there are many aspects we need to be careful about when we do maintenance testing, as we are testing on a live environment. Imagine if your system is a banking system with tons of data about the bank, clients and transactions. What can I test?

For example, can I test whether “delete all the records” still works or not? Come on. Or can I test transferring $1 million to my account and see whether it works? That would be nice to look at. So, in addition to testing what has been changed, which is re-testing (confirmation testing), maintenance testing includes extensive regression testing of parts of the system that have not been changed. But a lot of questions could arise. What could this change have an impact on? How important is a fault in the impacted area? Should we test only what has been affected? If yes, how much: the most important affected areas, the areas most likely to be affected, or the whole system? The answer is: it depends. Determining how the existing system may be affected by changes is called impact analysis, and it is used to help decide how much regression testing to do.

The impact analysis may be used to determine the regression test suite to use. Impact analysis can also help to identify the impact of a change on existing tests. The side effects and affected areas in the system need to be tested for regressions, possibly after updating any existing tests affected by the change. Impact analysis may be done before a change is made, to help decide whether the change should be made at all, based on the potential consequences in other areas of the system. Actually, I personally believe that impact analysis should always be done before any change is made and not after it.
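As a small illustration of using impact analysis to decide what to regression test, here is a minimal sketch. The component names, test case IDs and the traceability map are invented for the example; in practice this mapping would come from maintained traceability between the tests and the test basis.

```python
# Map each component to the test cases that exercise it (illustrative data only).
TRACEABILITY: dict[str, set[str]] = {
    "interest_calculator": {"TC-101", "TC-102", "TC-205"},
    "credit_score_client": {"TC-310", "TC-311"},
    "ui_account_screen":   {"TC-205", "TC-401", "TC-402"},
}


def regression_suite(changed_components: set[str]) -> set[str]:
    """Return the tests that cover any changed component, i.e. the suite to re-execute."""
    suite: set[str] = set()
    for component in changed_components:
        suite |= TRACEABILITY.get(component, set())
    return suite


if __name__ == "__main__":
    # A hot fix touched the credit score client and one UI screen:
    print(sorted(regression_suite({"credit_score_client", "ui_account_screen"})))
    # -> ['TC-205', 'TC-310', 'TC-311', 'TC-401', 'TC-402']
```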

A maintenance release may require maintenance testing at multiple test levels, using various test types, based on its scope. The scope of maintenance testing depends on the degree of risk of the change (for example, the degree to which the changed area of software communicates with other components or systems), the size of the existing system, and the size of the change. Impact analysis can be difficult if specifications (for example, business requirements, user stories, architecture) are out of date or missing, test cases are not documented or are out of date, bi-directional traceability between tests and the test basis has not been maintained, tool support is weak or nonexistent, the people involved do not have domain and/or system knowledge, or insufficient attention has been paid to the software’s maintainability during development. So what would you do if you faced a situation like this yourself? Well, you should consider what the system is currently doing: for example, examine the existing system, look in the user manuals and guides, and ask the experts, who are the current users.

  3. Testing in Context

So we have learned about software development models, test levels and test types. The question that usually pops up here is: which software development model should we use, or which software development model is better? The answer is that software development models must be selected and adapted to the context of the project and the product characteristics. An appropriate software development lifecycle model should be selected and adapted based on the project goal, the type of product being developed, business priorities (for example, time to market), and identified product and project risks. For example, the development and testing of a minor internal administrative system should differ from the development and testing of a safety-critical system, such as an automobile brake control system. Also, organizational and cultural issues, for example a lack of smooth communication between team members, may in some cases prevent or obstruct iterative development.

From the testing point of view, depending on the context of the project, it may be necessary to combine or reorganize test levels and/or test activities. For example, if we need to integrate a commercial off-the-shelf (COTS) software product into a larger system, the buyer may perform interoperability testing at the system integration test level to make sure it integrates fine, and also testing at the acceptance test level: functional and non-functional testing, along with user acceptance testing and operational acceptance testing. In addition, software development lifecycle models themselves may be combined. For example, a V-model may be used for the development and testing of the back-end systems and their integrations, while an Agile development model may be used to develop and test the front-end user interface (UI) and functionality. Prototyping may be used early in the project, with an incremental development model adopted once the experimental phase is complete. Internet of Things (IoT) systems, which consist of many different objects such as devices, products and services, typically apply separate software development lifecycle models for each object.

This presents a particular challenge for the development of Internet of Things (IoT) system versions. Additionally, the software development lifecycle of such objects places a stronger emphasis on the later phases of the software development lifecycle after they have been introduced to operational use, for example the operate, update and decommission phases.

Okay, now to the exam part. What kind of questions can we see in the exam related to this section? Well, we have learned many terms in this section, and we need to know what means what and who does what. The V-model is a sequential software development model where planning and design of testing can start early. Iterative and incremental models like Spiral, Agile and Rational Unified Process are where we need to use test automation. Component testing, integration testing, system testing and acceptance testing are test levels; functional, non-functional, white-box and change-related (regression and re-testing) are all types of testing.

Maintenance testing is done on a live environment, alpha testing is done at the developer’s site, beta testing is done at the user’s site, and both are kinds of acceptance testing. A question might be: what kind of testing can be done by a developer? Let’s see: component testing, integration testing, functional testing, non-functional testing and, for sure, white-box testing, plus re-testing and regression testing. Another question could be: what kind of testing can be done by the customer? Acceptance testing, and the four types of testing.