Practice Exams:

ISTQB CTFL-2018 – Test Management Part 3

  1. Test Estimation

When we put together a plan, we need to estimate the effort needed to execute it. We can use the estimated effort to estimate other elements, like the time needed, the number of resources needed, and the budget needed. There are many techniques to estimate the different elements needed for the plan. The ISTQB syllabus mentions only two techniques: metrics-based and expert-based techniques. Let's look at each one of them. The metrics-based approach: to understand this approach, let's give an example. If your previous project was 1000 hours long, and the testing effort in that project was 300 hours out of the 1000 hours, and the new project is 2000 hours, can you estimate the testing effort? Yes, it will be around 600 hours.
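To make the ratio concrete, here is a minimal Python sketch of that metrics-based calculation; the function name and numbers simply restate the example above and are not part of any standard formula.

```python
# A minimal sketch of metrics-based effort estimation using the ratio
# from the example: 300 of 1000 hours were testing (30%), so a
# 2000-hour project suggests roughly 600 hours of testing effort.

def estimate_test_effort(past_project_hours: float,
                         past_test_hours: float,
                         new_project_hours: float) -> float:
    """Scale the historical test-effort ratio to the new project size."""
    test_ratio = past_test_hours / past_project_hours
    return new_project_hours * test_ratio

print(estimate_test_effort(1000, 300, 2000))  # -> 600.0
```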

Now, do you think you can estimate the number of defects that you can expect to find in the new project? Actually, yes. If you found 500 defects in the 1000-hour project, then you would expect the defects in the new project to be around 1000. But you know that your development team now has knowledge and expertise in this software domain, so you would expect the defects to be fewer than 1000, say 800 defects. However, there's a module in the new software that will use a new technology you have never dealt with before, so you can estimate the number of defects to be around 850 bugs. So in this technique, we used collected data and some sort of equations to estimate the project. The way I see it, we used some information about our history, which was how many bugs were found in the 1000-hour project.

We used some information about our present, which is the knowledge of the current development team, and we used some information about our future, which is the expectation of the complexity of a specific module. Other kinds of data that can be estimated include the number of test conditions, the number of test cases written, the number of test cases executed, the time taken to develop test cases, the time taken to run test cases, and the number of defects found.
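Here is the same idea applied to the defect estimate from the example above, as a hedged sketch: the baseline scales with project size, and the two adjustment factors (for team experience and the unfamiliar module) are illustrative judgment calls chosen to reproduce the 800 and 850 figures, not fixed formulas.

```python
# A sketch of the defect estimate: 500 defects per 1000 hours scales
# to 1000 for a 2000-hour project; judgment factors then adjust it.

def estimate_defects(past_defects: int, past_hours: float,
                     new_hours: float, adjustments: list[float]) -> float:
    """Scale historical defect density by size, then apply judgment factors."""
    estimate = past_defects / past_hours * new_hours  # baseline: 1000
    for factor in adjustments:
        estimate *= factor
    return estimate

# -20% for the experienced team, +6.25% for the new-technology module
print(round(estimate_defects(500, 1000, 2000, [0.8, 1.0625])))  # -> 850
```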

The accuracy of this technique will heavily depend on the accuracy of the collected data. For sequential projects, defect removal models are examples of the metrics-based approach. This is similar to the one I was just explaining, where volumes of defects and the time to remove them are captured and reported, which then provides a basis for estimating future projects of a similar nature. For agile projects, burndown charts are examples of the metrics-based approach: effort is captured and reported, and is then used to feed into the team's velocity to determine the amount of work the team can do in the next iteration. You don't need to understand or know the details of these specific techniques for the exam. Just knowing that there are techniques called defect removal models and burndown charts is enough for now.
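Still, as a rough illustration of the burndown idea, here is a sketch of how completed work from past iterations feeds a velocity average that bounds the next iteration's plan; the story-point numbers are invented.

```python
# A sketch of velocity derived from burndown-style history: story
# points completed in past iterations (hypothetical numbers) average
# out to a velocity that bounds the next iteration's planned work.

completed_points = [21, 18, 24]  # hypothetical iteration history

velocity = sum(completed_points) / len(completed_points)
print(f"Average velocity: {velocity:.1f} points per iteration")
print(f"Plan roughly {velocity:.0f} points for the next iteration")
```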

The expert-based approach depends on using the experience of some stakeholders to derive an estimate. In this context, experts could be business experts, test process consultants, developers, analysts, and designers: anyone with knowledge about the application to be tested or the tasks involved in the process. For sequential projects, the Wideband Delphi technique is an example of the expert-based approach, in which groups of experts provide estimates based on their experience. For agile projects, Planning Poker is an example of the expert-based approach, as team members estimate the effort to deliver a feature based on their experience. Again, you don't need to know the details of specific techniques for the exam. Just knowing that there are techniques called Wideband Delphi and Planning Poker is enough for now.
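To give a flavor of the expert-based idea (again, the details are beyond the exam), here is a sketch of the consensus step behind Planning Poker: each member estimates from experience, and a wide spread triggers discussion and a re-vote. The names and story points are hypothetical.

```python
# A sketch of Planning Poker's consensus check: if the highest and
# lowest expert estimates diverge too much, the team discusses and
# votes again; otherwise the median is taken as the estimate.

estimates = {"Dev A": 5, "Dev B": 8, "Tester": 5, "Analyst": 13}
values = sorted(estimates.values())

if values[-1] > 2 * values[0]:
    print("Estimates diverge: the high and low estimators explain "
          "their reasoning, then the team votes again.")
else:
    print("Consensus estimate:", values[len(values) // 2])
```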

Details of those techniques can be found in the ISTQB Agile Extension syllabus or the ISTQB Advanced Test Manager syllabus. The question that comes to mind now is: which technique is better than the other? Again, wrong question. Each technique has its own pros and cons, so I would say we had better use both techniques to confirm each other. Factors influencing the test effort: test effort estimation involves predicting the amount of test-related work that will be needed in order to meet the objectives of testing for a particular project, release, or iteration.

Factors influencing the test effort may include characteristics of the product, characteristics of the development process, characteristics of the people, and the test results. Let's look at each one of those in detail. Product characteristics include the risks associated with the product, the quality of the test basis, the size of the product, the complexity of the product domain, the requirements for quality characteristics (for example, security and performance), the required level of detail for test documentation, and requirements for legal and regulatory compliance.

Development process characteristics include the stability and maturity of the organization, the development model in use, the test approach, the tools used, the test process, and time pressure. People characteristics include the skills and experience of the people involved, especially with similar projects and products (for example, domain knowledge), and team cohesion and leadership (how the team works together as a team). Test results include the number and severity of defects found and the amount of rework required; the more defects found, or the more rework required, the higher the effort estimate.

  2. Test Monitoring and Control

The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically. A plan won't mean anything without monitoring the execution of the plan. Test monitoring can serve various purposes during the project, including: giving the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project; providing the project team with visibility about the test results; measuring whether the exit criteria, or the testing tasks associated with an Agile project's definition of done, are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria; gathering data for use in estimating future test efforts; and, above all, proving that the plan itself is right and that following it will eventually lead to the test objectives.

We are looking for metrics used in testing. Metrics can be collected during and at the end of test activities in order to assess: progress against the planned schedule and budget; the current quality of the test object; the adequacy of the test approach; and the effectiveness of the test activities with respect to the objectives. Common test metrics include: the percentage of planned work done in test case preparation, or the percentage of planned test cases implemented; the percentage of planned work done in test environment preparation; test case execution, for example, the number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed; and defect information, for example, defect density, defects found and fixed, failure rate, and confirmation test results.
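As a small illustration of these metrics, here is a sketch computing execution progress, pass rate, and defect density; all counts and the KLOC size measure are invented for the example.

```python
# A sketch of common monitoring metrics: execution progress against
# the plan, pass rate of executed tests, and defect density. All
# numbers are hypothetical.

planned_tests, executed_tests, passed_tests = 200, 150, 120
defects_found, size_kloc = 45, 12.0  # defect count and size in KLOC

print(f"Execution progress: {executed_tests / planned_tests:.0%}")  # 75%
print(f"Pass rate:          {passed_tests / executed_tests:.0%}")   # 80%
print(f"Defect density:     {defects_found / size_kloc:.1f} defects/KLOC")
```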

You will know more about defect density in the quiz. Other metrics include: test coverage of requirements, user stories, acceptance criteria, risks, or code; task completion, resource allocation and usage, and effort; and the cost of testing, including the cost compared to the benefit of finding the next defect, or the cost compared to the benefit of running the next test.

Test reporting. Test reporting is about summarizing and communicating test activity information to project stakeholders, both during and at the end of a test activity or a test level. The purposes of test reporting are: notifying project stakeholders about test results and exit criteria status; helping stakeholders understand and analyze the results of a test period; helping stakeholders make decisions about how to guide the project forward; and assuring us that the original test plan will lead us to achieve our testing goals or objectives.

The ISO standard 29119-3 refers to two types of test reports, test progress reports and test completion reports (also called test summary reports in this syllabus), and contains structures and examples for each type. The test report prepared during a test activity may be referred to as a test progress report, so during the test activity it's called a test progress report, while a test report at the end of a test activity may be referred to as a test summary report. During test monitoring and control, the test manager regularly issues test progress reports for stakeholders. When the exit criteria are reached, the test manager issues the test summary report. This report provides a summary of the testing performed, based on the latest test progress report and any other relevant information.

Typical test progress reports and test summary reports may include: a summary of the testing performed, where we identify all relevant support materials, such as test items, environment, and references, so that the reader of the report knows which version and release of the project or software is being reported on; information on what occurred during a test period; deviations from the plan (what is different from the plan?), including deviations in the schedule, duration, or effort of test activities; the status of testing and product quality with respect to the exit criteria or definition of done; factors that have blocked or continue to block progress; metrics on defects, test cases, test coverage, activity progress, and resource consumption; residual risks (the remaining risks that we haven't handled yet); and reusable test work products produced.

In addition to the content common to test progress reports and test summary reports, typical test progress reports may also include: the status of the test activities and progress against the test plan; factors impeding or blocking progress; the testing planned for the next reporting period; and the quality of the test object. The contents of a test report will vary depending on the project, the organizational requirements, and the software development lifecycle. For example, a complex project with many stakeholders or a regulated project may require more detailed or formal reporting than a quick software update.

In Agile development, test progress reporting may be incorporated into task boards, defect summaries, and burndown charts, which may be discussed during a daily stand-up meeting. You can learn more about those terms in the ISTQB Agile syllabus. In addition, if we were using risk-based testing, then stakeholders would expect to see the updated list of product and project risks, the responses, and the effectiveness of those responses. If we were using requirements-based testing, then we could measure coverage in terms of requirements or functional areas. In addition to tailoring test reports based on the context of the project, test reports should be tailored based on the report's audience.

The type and amount of information that should be included for a technical audience or a test team may be different from what would be included in an executive summary report. In the technical audience case, detailed information on defect types and trends may be important; in a report targeting executives, a high-level report may be more appropriate. Executives love one-page or one-PowerPoint-slide presentations that might include elements like a status summary of defects by priority, budget, schedule, and test conditions passed, failed, or not tested.
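As an illustration of such a one-page summary, here is a sketch that tallies defects by priority and test conditions by status; the data is invented.

```python
# A sketch of the executive one-pager: defects grouped by priority
# and a status summary of test conditions. Data is hypothetical.

from collections import Counter

defect_priorities = ["High", "High", "Medium", "Low", "Medium", "High"]
condition_status = ["Passed", "Failed", "Passed", "Not Tested", "Passed"]

print("Defects by priority:", dict(Counter(defect_priorities)))
print("Test conditions:    ", dict(Counter(condition_status)))
```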

Test control. If you have heard of Murphy's Law, then you know that hardly anything goes as planned. Risks happen, the customer changes their mind every now and then, stakeholders interfere, software crashes, markets change, staff quit, and so on. When plans don't execute the way we want, control is needed to get things back on track. Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and possibly reported. Actions may cover any test activity and may affect any other software lifecycle activity. Consider the following example: a module or component will not be ready on time for testing. Test control might involve reprioritizing the tests so that we start testing against what is available now.

You discovered that most of the executed test cases have failed, which results in too many defects logged against the software. After investigation, you discovered that the easy test cases are the ones that ran first. Test control could be to tighten the entry criteria for testing, as it seems that the developers don't do proper unit testing. Examples of other test control actions include: reprioritizing tests when an identified risk occurs (for example, software delivered late), and changing the test schedule due to the availability or unavailability of a test environment or other resources.

Other actions are: re-evaluating whether a test item meets an entry or exit criterion due to rework; adjusting the scope of testing, perhaps the number of tests to be run, to manage the testing of late change requests; and, as I said, tightening the entry criteria. Corrective actions taken do not have to be testing related. For example: descoping functionality, removing some less important planned deliverables from the initially delivered solution to reduce the time and effort required to achieve that solution, or delaying release into the production environment until the exit criteria have been met.

  3. Configuration Management

How many times have you heard questions like: What is the correct version of the software module whose coding I have to continue? Who can provide me with an accurate copy of last year's version 4.1 of the Expert Wave software package? What version of the design documents matches the version we are adapting for a new customer? What version of the software system is installed at ABC Industries? What changes have been introduced in the version installed at the ABC Industries site? Can we be sure that the version installed at ABC doesn't include undocumented changes? The answers to such questions lie within configuration management.

The purpose of configuration management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle. Configuration management is keeping track of the different versions and iterations of the project's artifacts or work products, such as documents, components, and data.

To properly support testing, configuration management may involve ensuring the following: all test items are uniquely identified, version controlled, tracked for changes, and related to each other; all items of testware are uniquely identified, version controlled, tracked for changes, related to each other, and related to versions of the test items, so that traceability can be maintained throughout the test process; and all identified documents and software items are referenced unambiguously in the test documentation. During test planning, configuration management procedures and infrastructure (like tools) should be identified, documented, and implemented.

Notice here that we said that configuration management procedures and infrastructure should be implemented during test planning. Actually, we should have configuration management ready to be used before the project starts. You will create different work products during test planning, test analysis, and test design, and you want everything to be included in configuration management from day one.
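To make the unique-identification and traceability idea concrete, here is a minimal sketch of a configuration item record; the class, field names, and identifiers are hypothetical, not a prescribed schema.

```python
# A sketch of configuration-management bookkeeping: every test item
# and piece of testware gets a unique identifier and a version, and
# testware records which test-item version it relates to, preserving
# traceability. All names and versions are hypothetical.

from dataclasses import dataclass

@dataclass
class ConfigurationItem:
    item_id: str     # unique identifier
    version: str     # version under control
    relates_to: str  # traceability link, e.g. "<item_id>@<version>"

test_item = ConfigurationItem("APP-LOGIN", "5.0.1", relates_to="")
test_case = ConfigurationItem("TC-LOGIN-001", "1.2",
                              relates_to="APP-LOGIN@5.0.1")
print(test_case)
```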

  4. Defect Management

Defect management ensures that defects are tracked from recognition to classification to correction to closure. It's important that organizations document their defect management process. Aside from the big words, defect management is simply the bug reporting tool that you are using: a sort of database to store and track defects. This process must be agreed with all those participating in defect management, including designers, developers, testers, and product owners. Depending on the organization, defect logging and tracking can range from very informal to very formal. I have seen companies use email systems to communicate defects, and boy, how many problems have arisen from this.

For example, a developer claims that he didn't receive the email, or the manager cannot know how many defects are currently open or fixed. It was a total mess. As we have mentioned before, some of the reports may turn out to describe false positives. Remember false positives? False positives are reported as defects, but are not due to actual failures. For example, a test may fail when a network connection is broken or times out. This behavior does not result from a defect in the test object, but is an anomaly that needs to be investigated.

Testers should attempt to minimize the number of false positives reported as defects. There are many ways we can classify defects, but we can only classify them if we put the right information in the defect report. Defects may be reported during coding, static analysis, reviews, dynamic testing, or use of a software product. Defects may be reported for issues in code or working systems, or in any type of documentation, including requirements, user stories and acceptance criteria, development documents, test documents, user manuals, or installation guides. In order to have an effective and efficient defect management process, organizations may define standards for the attributes, classification, and workflow of defects.
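As a sketch of what such a defect workflow might look like (state names vary by organization and tool; these are examples, not a standard), tracking a defect from recognition to closure:

```python
# A simplified defect workflow on the happy path: recognition ->
# classification -> correction -> closure. Real tools add branches
# such as "Deferred", "Duplicate", or "Re-opened".

WORKFLOW = ["New", "Open", "Assigned", "Fixed",
            "Awaiting Confirmation", "Closed"]

def next_state(current: str) -> str:
    """Advance a defect one step along the simplified happy path."""
    index = WORKFLOW.index(current)
    return WORKFLOW[min(index + 1, len(WORKFLOW) - 1)]

print(next_state("Fixed"))  # -> "Awaiting Confirmation"
```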

Typical defect reports have the following objectives: provide developers and other parties with information about any adverse event that occurred, to enable them to identify specific effects, to isolate the problem with a minimal reproducing test, and to correct the potential defects as needed or to otherwise resolve the problem; and provide test managers with a means of tracking the quality of the work product and the impact on the testing. For example, we can measure how many defects are open, how many defects per developer, how many defects per tester, how many high-priority defects, how many defects per module, and so on.

And the last objective of defect reporting is to provide ideas for development and test process improvement. For example, for each defect, the point of injection should be documented (for example, a defect in the requirements or in the code), and subsequent process improvement can then focus on that particular area to stop the same defect from occurring again. To achieve these objectives, the defect report has to be both very effective and to the point, and very efficient: not too much detail and not too little. I think of the defect report as an art. It should help any and all of its readers: the developers, the managers, the customers, the new developer who just joined the team and has no idea how the software actually works, and the tester who has to do regression testing using your defect report a few years after releasing the software.

The report should be as helpful as possible to help the developer reproduce the bug easily. Every step should be unambiguous and clear enough for anyone to understand. That's why it's highly recommended that the tester re-try the defect scenario a few times, and on different configurations, to make sure it will always be reproducible. A defect report written during dynamic testing typically includes: an identifier; a title and a short summary of the defect being reported; the date of the defect report, the issuing organization, and the author; identification of the test item (the configuration item being used) and the environment; the development lifecycle phase or phases in which the defect was observed; and a description of the defect to enable reproduction and resolution, including logs, database dumps, screenshots, or recordings if found during test execution.

It also includes: expected and actual results; the scope or degree of impact (severity) of the defect on the interests of stakeholders; the urgency/priority to fix; the state of the defect report (for example, open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed); conclusions, recommendations, and approvals; global issues, such as other areas that may be affected by a change resulting from the defect; a change history, such as the sequence of actions taken by project team members with respect to the defect to isolate, repair, and confirm it as fixed; and, last, references, including the test case that revealed the problem. Some of these details may be automatically included and/or managed when using defect management tools, for example, automatic assignment of an identifier, assignment and update of the defect report state during the workflow, and so on.
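Pulling those attributes together, here is a sketch of a defect report as a simple record; the field names are illustrative, since real trackers define their own schemas (a filled-in prose example follows below).

```python
# A sketch of the typical defect-report attributes as a record.
# Field names are illustrative, not a standard schema.

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str
    title: str
    date: str
    author: str
    test_item: str
    environment: str
    phase_observed: str
    steps_to_reproduce: list[str]
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    state: str = "Open"
    references: list[str] = field(default_factory=list)
```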

Defects found during static testing, particularly reviews, will normally be documented in a different way, for example, in review meeting notes. An example of the content of a defect report can also be found in the ISO standard 29119-3; in this standard, defect reports are referred to as incident reports. Looking at an example of a defect report will help us understand it better.

Summary: Application crash on clicking the Save button while creating a new user.
Bug ID: 12345 (usually created automatically).
Module: New Users menu.
Build number: version 5.0.1 (you will know more about build numbers and version numbers in the configuration management video).
Severity: High (where we have High, Medium, Low, or 1 to 3).
Priority: also High (where we also have High, Medium, Low, or 1 to 3).
Assigned to: Developer X.
Reported by: your name.
Reported on: today's date.
Status: New, Open, or Active, depending on the tool you are using.
Environment: Windows 10, SQL Server 2010.
Steps to reproduce: log in to the application; navigate to the Users menu and select New User; fill in all the user information fields; click on the Save button; see the error page ("Exception: insert value error").

Attached logs: for more information, you can attach any logs related to the bug, if any; also see the attached screenshot of the error page.
Expected result: on clicking the Save button, the user should be prompted with the success message "New user has been created successfully." Also attach the application crash screenshot. Developers love screenshots. Thank you.