
ISTQB CTFL 2018: Static Testing

  1. Static Testing Basics and Differences from Dynamic Testing

We mentioned static testing in the first section of this course and said that static techniques test software without executing the software code, while dynamic testing, on the other hand, requires the execution or running of the software under test. Static testing can be considered the first line of defense in keeping defects out of the final software, by removing defects from the work products that we will use to build our software or use as a test basis, and hence it makes those defects cheaper and easier to fix.

Static testing is often a forgotten area of software testing, and companies lose a lot by not incorporating static testing techniques in their software development lifecycle. As we have said, static testing relies on examining the software work products. The examination is either a manual examination of work products (for example, reviews) or one done using software tools and utilities (for example, static analysis). Whether the examination is manual or tool driven, both types assess the code or other work product being tested without actually executing it. The syllabus concentrates so heavily on the review process that I had to record a few videos to explain it, while it only briefly mentions static analysis by tools.

Very briefly, here is all you need to know about static analysis: static analysis is essential for safety-critical computer systems, for example aviation, medical, or nuclear software, but static analysis has also become essential and common in other settings. For example, static analysis is an important part of security testing. Static analysis is also often incorporated into automated build and delivery systems, for example in Agile development, continuous delivery, and continuous deployment. Differences between static and dynamic testing: since we are talking about static testing, let's talk about the differences between static and dynamic testing.

As we have mentioned before, static testing and dynamic testing can have the same objectives, such as providing an assessment of the quality of the work products and identifying defects as early as possible. Static and dynamic testing complement each other by finding different types of defects. Let me give you an example so you can grasp the main difference between static and dynamic testing. In regular dynamic testing, the developer will write the code, build an executable, and pass the executable software to the tester to test it.

The tester will need to design test cases and test procedures, run the test procedures, and compare expected results against actual results; if they are not the same, the tester will try the scenario a few times to make sure it is really a failure. The tester then will create a bug report and send it to the developer to fix. The developer will try the scenario and might communicate with the tester several times until he can verify the failure. The developer then will try to find the root cause of the failure, and believe me, it might take days to find it.

And once it is found, the developer will try to fix it, which again might take a long time as well. After the bug has been fixed, the developer will send the new executable software with the fix to the tester to confirm the fix. In case the bug is not actually fixed, the cycle will be repeated several times until the tester confirms the fix, and so on. So it is a very long trip. Now imagine a situation where a person or a software tool can examine the source code written by the developer and directly find the bug. It would be easy to fix the bug then, and we are done.

That's the magic of static testing. Finding defects rather than failures is one main distinction between static and dynamic testing. A defect can reside in a work product for a very long time without causing a failure: the path where the defect lies may be rarely exercised or hard to reach, so it will not be easy to construct and execute dynamic tests that encounter it. Static testing may be able to find the defect with much less effort. Another distinction is that static testing can be used to improve the consistency and internal quality of work products.

Dynamic testing, in contrast, typically focuses on externally visible behaviors. Using static testing techniques to find defects and then fixing those defects promptly is almost always much cheaper for the organization than using dynamic testing to find defects in the test object and then fixing them, especially when considering the additional costs associated with updating other work products and performing confirmation and regression testing. Even though static testing sounds like magic, we will still need to do dynamic testing, as running the executable code can find other types of defects.

So static testing is complementary to dynamic testing. Compared to dynamic testing, typical defects that are easier and cheaper to find and fix through static testing include: requirements defects, for example inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies; design defects, for example inefficient algorithms or database structures, high coupling, and low cohesion; coding defects, for example variables with undefined values, variables that are declared but never used, unreachable code, and duplicate code; deviations from standards, for example lack of adherence to coding standards; incorrect interface specifications, for example using integers instead of floats when passing parameters between components; security vulnerabilities, for example susceptibility to buffer overflows; and gaps or inaccuracies in test basis traceability or coverage, for example missing tests for an acceptance criterion.

Moreover, most types of maintainability defects can only be found by static testing, for example code that is too complex to maintain, improper modularization, poor reusability of components, and code that is difficult to analyze and modify without introducing new defects.
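
To make the coding defects listed above more concrete, here is a small, deliberately flawed fragment. Python is simply my choice of illustration language, an assumption on my part, since the syllabus does not prescribe one; a reviewer or a static analysis tool can spot every one of these issues without ever running the code.

```python
# Deliberately flawed example showing the coding defects mentioned above.

def average(values):
    total = 0
    count = 0                     # defect: variable declared but never used
    for v in values:
        total = total + v
    return total / len(values)
    print("done")                 # defect: unreachable code after the return


def discount(price):
    if price > 100:
        rate = 0.1
    return price * rate           # defect: 'rate' is undefined when price <= 100


def apply_discount(price):        # defect: duplicate of discount(), copy-pasted
    if price > 100:
        rate = 0.1
    return price * rate
```

And as a rough sketch of how a tool-driven static check could work, far simpler than a real compiler or linter and purely illustrative, the snippet below parses source text with Python's standard ast module and reports any statement that directly follows a return in a function body, in other words unreachable code.

```python
import ast


def find_unreachable(source: str) -> list[str]:
    """Report top-level statements in a function body that follow a return."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            body = node.body
            for i, stmt in enumerate(body):
                if isinstance(stmt, ast.Return) and i + 1 < len(body):
                    findings.append(
                        f"{node.name}: unreachable code at line {body[i + 1].lineno}"
                    )
    return findings
```

Running find_unreachable over the fragment above would flag the print call in average, which is exactly the kind of finding static testing produces before a single dynamic test exists.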

  2. More on Static Testing

So the input to static testing is any written document, and the output will be the defects found in that document. So what types of documents can be examined using static testing? The answer: any written work product can be examined using static testing (reviews and/or static analysis). For example: specifications, including business requirements, functional requirements, and security requirements; Agile-specific work products like epics, user stories, and acceptance criteria; architecture and design specifications; source code; testware, including test plans, test cases, test procedures, and automated test scripts; user guides; web pages; and even documents that are not directly software related, like contracts, project plans, schedules, and budgets.

Models such as activity diagrams, which may be used for model-based testing, can be examined as well. Generally, reviews can be applied to any written work product that a human can understand, whereas static analysis usually requires as input a more formal, more structured, more formatted work product, typically code or models for which an appropriate static analysis tool exists. So a compiler is a type of static analysis tool, where it analyzes software code written in a specific computer language. A very simple example of static analysis tools are grammar check tools, which work on natural-language work products such as requirements.

Benefits of static testing: there are plenty of benefits to using static testing techniques. Early detection of defects before dynamic test execution: the earlier a defect is found, the cheaper and easier it is to fix, especially when compared to defects found after the software is deployed and in active use.

Identification of defects not easily found by dynamic testing: this will result in reduced fault levels, so the overall severity of the bugs gets reduced. Preventing defects in design or coding by uncovering inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies in requirements: by identifying the defect early in the lifecycle, it is a lot easier to identify why it was there in the first place, thus providing information on possible process improvements that could be made to prevent the same defect appearing again. Increasing development productivity, for example due to improved design and more maintainable code, as developers love to work on stable, non-buggy software. Reduced development timescales: as the number of bugs gets drastically reduced, less time is spent on fixing bugs, hence reducing development cost and time.

Also, a lower number of bugs will reduce testing time and cost, as it will result in less time documenting bug reports, less time retesting fixed bugs, and fewer bugs swinging back and forth between the developers and testers. Another benefit is reduced total cost of quality over the software's lifetime, due to fewer failures later in the lifecycle or after delivery into operation; therefore, ongoing support costs will be lower, which results in lifetime cost reductions. And last, improved communication between team members in the course of participating in reviews. Reviews could be the only time in your organization where a senior talks to a junior and points out what is wrong with the reviewed document and how to avoid making the same mistake again. Reviews, if done correctly, will improve team communication and knowledge transfer.

  3. Review Process

So how can we perform a review? It is not a random activity; we have a process for that. Reviews can vary widely in their level of formality, where formality relates to the level of structure and documentation associated with the activity. Reviews vary from informal to formal. Informal reviews are characterized by not following a defined process and not having formally documented results, just like when you ask a colleague passing by to look at one of your documents: there are no written instructions for reviewers, so it is very informal. On the other hand, formal reviews are characterized by team participation, documented results of the review, and documented procedures for conducting the review. There are factors that affect the decision on the appropriate level of formality; these are organization-based factors that affect the level of formality of any review.

It is usually based on the software development lifecycle model: waterfall might need a more formal process, while Agile might be okay with informal ones. The maturity of the development process: the more mature the process is, the more formal reviews tend to be. The complexity of the work product to be reviewed: the more complex the work product, the more formal the review process should be. Legal or regulatory requirements: for example, in the safety-critical software applications domain, regulatory or legal requirements determine what kinds of review should take place. The need for an audit trail: the level of formality in the review types used can help to raise the level of the audit trail, tracing backward throughout the software development lifecycle. ISO standard 20246 contains more in-depth descriptions of the review process for work products, including roles and review techniques; here is another standard number for you to remember, 20246. Let's talk now about the work product review process.

The different types of reviews vary in their formality, but before discussing the different types of reviews, let's talk first about the five groups of activities of the review process. They are: planning; initiate review; individual review (or individual preparation); issue communication and analysis; and last, fixing and reporting. Again, we need to know which activities happen during which group and also memorize the sequence of the activities, as this is one repeated question in the ISTQB exam. Let's talk about each of those groups of activities in detail. Planning: reviews are good, but we cannot review every work product we get, so we should be defining the scope, which includes the purpose of the review, what documents or parts of documents to review, the quality characteristics to be evaluated, and where to do it, and, if the company already has a defined process, which guidelines or predefined checklists we could use in the review.

Planning also includes: estimating effort and timeframe, so we know when to do it and how long it should take; identifying review characteristics such as the review type with roles, activities, and checklists; and selecting the people to participate in the review and allocating roles. The reviewers should be skilled enough to do the job and know how to dig for mistakes in the document. They should also come from different backgrounds, for example someone with a design background, someone who is an expert in UI, someone with a performance background, another with standards knowledge, and so on.

The selected personnel will be assigned roles and responsibilities accordingly. Also in planning, we should be defining the entry and exit criteria for the more formal review types, for example inspections. Entry criteria define what must be fulfilled to start the review, such as making sure the document has been checked before starting the review, and exit criteria define what must be fulfilled to end the review, such as fixing the major bugs found in the document. Then we need to check that the entry criteria are met, if we have any, so reviewers will not waste time on an unready document. The second group of activities is initiate review. Before the actual review, we need to make sure all the reviewers know exactly what is expected from them. Initiating the review includes, for example: distributing the work product (physically or by electronic means) and other material if needed; handing the reviewers any issue log forms, checklists, and related work products they might use; explaining the scope, objectives, process, roles, and work products to the participants; and answering any questions that participants might have about the review.

After initiate review, we have individual review, or individual preparation. Each of the participants alone will review all or part of the work product, noting potential defects, recommendations, questions, and comments. This activity could be time-boxed, usually two to four hours. After individual preparation, it is time for issue communication and analysis, so it is time for the participants to communicate the identified potential defects. This could be in a review meeting, where participants go through a discussion regarding any defects found; the discussion will usually lead to more defects being found. Activities here include analyzing potential defects and assigning ownership and status to them. The reviewers may only suggest or recommend fixes; there is no actual discussion of how to fix a defect, as that will be done later by the author. The participants also evaluate and document quality characteristics. At the end of the meeting, a decision on the document under review has to be made by the participants, evaluating the review findings against the exit criteria to reach a review decision: should we proceed with this document, drop it altogether, or will a simple follow-up meeting after fixing the defects found be enough?

Last is fixing and reporting. This is after the meeting. We could create defect reports for those findings that require changes. The author will have a series of defects to investigate, answering questions and suggestions raised in the review meeting and fixing the defects found (this is typically done by the author) in the work product reviewed. We might need to communicate defects to the appropriate person or team when they are found in a work product related to the work product reviewed. Recording the updated status of defects, potentially including the agreement of the comment originator, is done in formal reviews. Gathering metrics is again for more formal reviews: for example, how much time was spent on the review and how many defects were found. Checking that the exit criteria are met is again for more formal reviews, as is accepting the work product when the exit criteria are reached. The results of the work product review vary depending on the review type and formality.

  4. Roles in Formal Review

The participants in any formal review should have adequate knowledge of the review process and have been properly trained as reviewers when necessary. A typical formal review will include the following roles: author, management, facilitator (or moderator), review leader, reviewers, and last, scribe (or recorder). Let's talk about each role in a little detail. The author is the person who creates the work product under review and fixes defects in the work product under review, if needed. Management is responsible for review planning, decides on the execution of reviews, assigns staff, allocates budget and time in schedules, monitors ongoing cost-effectiveness, and executes control decisions in the event of inadequate outcomes.

The facilitator, also often called the moderator, ensures the effective running of review meetings when they are held, is often the person upon whom the success of the review depends, is responsible for making sure no bug fixing is discussed in the review meeting, is also responsible for making sure the reviewers discuss the work product objectively, not subjectively, and last, mediates, if necessary, between the various points of view in the review meetings. The review leader takes overall responsibility for the review, decides who will be involved, and organizes when and where it will take place. Reviewers may be subject matter experts, persons working on the project, stakeholders with an interest in the work product, and/or individuals with specific technical or business backgrounds who, after the necessary preparation, identify potential defects in the work product under review and may represent different perspectives.

For example: tester, programmer, user, operator, business analyst, usability expert, and so on. And last, the scribe (or recorder) collects potential defects found during the individual review activity and records new potential defects, open points, and decisions from the review meeting, when it is held. Some might get confused over the difference between management, the review leader, and the moderator or facilitator. Well, think of management as managing time, cost, and resources: very high-level decisions, with no technicality in the review needed.

The review leader is like the team leader in your project: he understands technically what is going on and will make sure everything is executed. The facilitator, on the other hand, usually gets involved in the review meeting only; his job is helping others do their job right and making sure the meeting runs smoothly without any interruptions or tension between the participants. Also, the activities associated with each role may vary based on the review type. In addition, with the advent of tools to support the review process, especially the logging of defects, open points, and decisions, there is often no need for a scribe. Notice that it is normal for one person to play more than one role, and for one role to be played by more than one person. Again, more detailed roles are possible, as described in ISO/IEC 20246, the one standard for everything related to reviews.

  5. Review Types

Whenever someone needs to go through a document with someone else, there can be multiple reasons for doing so. The objectives of any review could be finding defects, gaining understanding, educating participants such as testers and new team members, or discussing and deciding by consensus. The focus of any review depends on the agreed objectives of the review.

But no matter what type of review it is, finding defects is always welcome. You will not find someone pointing out a defect in a document and your reply being: no, this meeting is only to educate you about this document, so you are not allowed to find a defect in it. So finding defects is always a purpose for any review type. There are four types of reviews that vary in their formality; starting from the lowest to the highest formality, we have: informal review, walkthrough, technical review, and inspection.

There are different factors that help decide the review type. These are project-based factors that affect the type of review: for example, the needs of the project, available resources, product type and risks, business domain, company culture, and other selection criteria. Reviews can be classified according to various attributes. The following lists the four most common types of reviews and their associated attributes. Questions in the exam are usually about differentiating between the different review types, so we will try to pinpoint some keywords to highlight each review type's characteristics.

First, the informal review, also known as buddy check, pairing, or pair review. The main purpose is to quickly find defects in an inexpensive way and achieve some limited benefit. Possible additional purposes are generating new ideas or solutions and quickly solving minor problems. It is the least formal review type: there is no formal process to adhere to, it may not involve a review meeting, and it may be performed by a colleague of the author (buddy check) or by more people. Findings are not usually documented. The informal review varies in usefulness depending on the reviewers. Use of checklists is optional. And last, it is very commonly used in Agile development. An example of the informal review is pair programming, a technique introduced by the Agile Extreme Programming methodology where two programmers work together to write the same code, so one programmer instantly reviews the code of the other programmer.

The keywords here are: no process, and quick. Second is the walkthrough, where the author has something to explain or show in his document to the participants. So the main purpose here is for the participants to learn something from the document or gain more understanding of its content. A walkthrough can also be used to find defects in the document, improve the software product, consider alternative implementations, and evaluate conformance to standards and specifications. Possible additional purposes are exchanging ideas about techniques or style variations, training of participants, and achieving consensus. In this type of review, the meeting is led by the author. Review sessions are open-ended and may vary in practice from quite informal to very formal. Appointment of a scribe who is not the author is mandatory. Preparation by reviewers before the walkthrough meeting is optional. Use of checklists is optional. A walkthrough of a work product may take the form of scenarios, dry runs, or simulations; we will talk about scenarios and dry runs in another video. Defect logs and review reports may be produced, so they are also optional.

Keywords here: led by the author, the main purposes are learning and gaining understanding, and most of the review process activities are optional. Third, the technical review. A technical review is a discussion meeting that focuses on achieving consensus about the technical content of a document.

Finding defects is, as usual, also a purpose. Possible further purposes are evaluating quality and building confidence in the work product, generating new ideas, motivating and enabling authors to improve future work products, and considering alternative implementations. Reviewers are usually experts in their field and can be technical peers of the author. Most of the review process activities are executed. Individual preparation before the review meeting is required. The review meeting is optional, ideally led by a trained facilitator, typically not the author. A scribe is mandatory, ideally also not the author. Use of checklists is optional. Potential defect logs and review reports are typically produced. Keywords here: led by a trained facilitator.

The purposes are discussion, gaining consensus, taking decisions, and evaluation of alternatives, and most of the activities in the review process are executed. Last and most formal is the inspection. The inspection's main purposes are detecting potential defects, evaluating quality and building confidence in the work product, and preventing future similar defects through author learning and root cause analysis. Possible further purposes are motivating and enabling authors to improve future work products and the software development process, and achieving consensus. An inspection follows a defined process with formally documented outputs, based on rules and checklists. All the roles mentioned in the previous video are mandatory, and the inspection may also include a dedicated reader who reads the work product aloud during the review meeting. Note that the reader was not mentioned among the roles earlier; it is only mentioned here, so an inspection may include a dedicated reader. Individual preparation before the review meeting is required. The reviewers are usually peers of the author and should be experts in disciplines that are relevant to the work product. Specified entry and exit criteria are used.

A scribe is mandatory. The review meeting is led by a trained facilitator, again not the author. The author cannot act as the review leader, reader, or scribe. Potential defect logs and review reports are produced. Metrics are collected and used to improve the entire software development process, including the inspection process. Keywords here: led by a trained facilitator, the main purpose is finding defects, and all the activities in the review process are executed. In reality, the fine lines between the review types often get blurred, and what is seen as a technical review in one company may be seen as an inspection in another. The key for each company is to agree on the objectives and benefits of the reviews that it plans to carry out. Also, a single work product may be the subject of more than one review type: for example, an informal review may be carried out before the document is subjected to a technical review, or an inspection may take place before a walkthrough with the customer. The types of reviews described can be done as peer reviews, done by colleagues at the same or a similar organizational level. The types of defects found in a review vary, depending mainly on the work product being reviewed.

  6. Applying Review Techniques

As I have said before, it is a skill to read a document and find defects in it. I see it as a skill like that of movie critics: many might go to a movie and like it, but critics find it very bad, because movie critics have trained eyes to find flaws that others might not notice. Well, in this video we will learn how to enhance this skill by learning a number of techniques that you can apply during the individual review (or individual preparation) activity to uncover defects. These techniques can be used across the review types described before, and their effectiveness may differ depending on the type of review used. And as I said, it is a skill, so it needs practice to master these techniques. We will talk about five techniques: ad hoc, checklist-based, scenarios and dry runs, role-based, and perspective-based. Ad hoc usually means no planning and little preparation: in an ad hoc review, reviewers are provided with little or no guidance on how this task should be performed.

Reviewers often read the work product sequentially, identifying and documenting issues as they encounter them. This technique is highly dependent on reviewer skills and experience and may lead to many duplicate issues being reported by different reviewers. Checklist-based: we will talk about checklist-based testing in detail in future videos, but for now, imagine that I give you a list of questions and ask you to answer them according to the document or test object you have at hand. This is simply the checklist-based approach. It is a systematic technique: reviewers detect issues based on checklists that are distributed at review initiation by the facilitator.

They just answer the questions according to their point of view. A review checklist consists of a set of questions based on potential defects, which may be derived from experience. Checklists should be specific to the type of work product under review and should be maintained regularly to cover issue types missed in previous reviews. Think of questions like: does the non-functional requirements section exist? Do we have a UML diagram for every use case? Does every function in the source code have a detailed comment about its purpose? The main advantage of the checklist-based technique is the systematic coverage of typical defect types.
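
Some of these checklist questions can even be pre-checked automatically before the manual review starts. As a minimal sketch of that idea, and nothing more, the few lines below scan a requirements document for two of the questions; the file name, the section title, and the exact wording are my own assumptions for illustration, not something the syllabus defines.

```python
import re
from pathlib import Path


def run_checklist(path: str) -> dict[str, bool]:
    """Answer two simple checklist questions about a requirements document."""
    text = Path(path).read_text(encoding="utf-8")
    return {
        "Does a 'Non-functional requirements' section exist?":
            re.search(r"non-functional requirements", text, re.IGNORECASE) is not None,
        "Is the document free of TBD/TODO placeholders?":
            re.search(r"\b(TBD|TODO)\b", text) is None,
    }


if __name__ == "__main__":
    for question, passed in run_checklist("requirements.md").items():
        print("PASS" if passed else "FAIL", "-", question)
```

Of course, such a script can only answer the questions it was given, which leads straight to the next caution.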

Care should be taken not to simply follow the checklist in individual reviewing, but also to look for defects outside the checklist. Next is scenarios and dry runs. In a scenario-based review, reviewers are provided with structured guidelines on how to read through the work product. These scenarios provide reviewers with better guidance on how to identify specific defect types than simple checklist entries do. A dry run is a technique where you try to mimic real-life situations, going as far as possible; for example, an aerospace company may conduct a dry run test of a jet's new pilot ejection seat while the jet is parked on the ground rather than while it is in flight. A scenario-based approach supports reviewers in performing dry runs on the work product based on its expected usage, if the work product is documented in a suitable format, such as use cases. Role-based: consider software like Microsoft Word. You may consider potential users of this software, like a student, a secretary, and a publishing company.

Now I want you to imagine how each one of these users will use the software. A student wants everything to work using shortcuts and toolbar icons. A secretary is a speed typist; she wants to use only the keyboard to finish any task. A publishing company does not mind going through detailed dialogs to set up the printing process very accurately. This is the role-based review technique, in which the reviewers evaluate the work product from the perspective of individual stakeholder roles. Typical roles include specific end-user types, like I mentioned (experienced, inexperienced, senior, child, and so on), and specific roles in the organization (user, administrator, system administrator, performance tester, and so on). Last is the perspective-based technique. I think of perspective-based reading as a mix of the role-based, checklist-based, and scenario-based techniques.

This technique acknowledges that there are multiple consumers of the document to be used during the requirements development phase. PBR, or perspective-based reading, offers each of the reviewers a viewpoint or perspective specific to one type of consumer, similar to the role-based review, but here the typical consumer or stakeholder viewpoints include end user, marketing, designer, tester, or operations. The technique instructs the reviewers on precisely what to search for, thus enabling them to find more defects in less time. For each of these perspectives, the inspector is advised to apply a scenario-based approach to reading the document.

Each scenario consists of a set of questions and activities that guide the inspection process by relating the requirements to the regular work practices of a specific stakeholder. What does this mean? It means that perspective-based reading also requires the reviewers to attempt to use the work product under review to generate the products they would derive from it. For example, a tester would attempt to generate draft acceptance tests when performing a perspective-based reading of a requirement specification, to see if all the necessary information was included. Further, in perspective-based reading, checklists are expected to be used.
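
As a concrete illustration of that last point, here is a sketch of the draft acceptance tests a tester might jot down while reading a requirement from the tester's viewpoint. The requirement text, the test names, and the open questions are all invented for the example; the value of the exercise is that merely attempting to write the drafts exposes information missing from the requirement.

```python
# Perspective-based reading from the tester viewpoint, written as draft
# pytest-style tests for an assumed requirement:
#   "A registered user must be able to reset their password; the reset
#    link must expire after 24 hours."
# The bodies are intentionally empty - these are review notes, not real tests.


def test_reset_link_is_issued_for_registered_user():
    # Covered by the requirement: a known user requests a reset, a link is sent.
    ...


def test_reset_link_expires_after_24_hours():
    # The requirement gives the expiry time but not the behaviour at exactly
    # 24 hours - a question to raise during the review.
    ...


def test_reset_request_for_unknown_email_address():
    # Not specified in the requirement at all - a gap found purely by trying
    # to write this draft test.
    ...
```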

Using different stakeholder viewpoints leads to more depth in individual reviewing, with less duplication of issues across reviewers. Empirical studies, statistics, and experience have shown perspective-based reading to be the most effective general technique for reviewing requirements and technical work products. A key success factor is including and weighting the different stakeholder viewpoints appropriately, based on risks.

  7. Success Factors for Reviews

In order to have a successful review, the appropriate type of review and the techniques used must be considered. In addition, there are plenty of other factors that will affect the outcome of the review. We can categorize these factors into organizational success factors and people-related success factors. Organizational success factors for reviews include: each review has clear objectives, defined during review planning and used as measurable exit criteria; review types are applied which are suitable to achieve the objectives and are appropriate to the type and level of the software work products and the participants; and any review techniques used, such as checklist-based or role-based reviewing, are suitable for effective defect identification in the work product to be reviewed.

Any checklists used should address the main risks and be up to date. Large documents are written and reviewed in small chunks, so that quality control is exercised by providing authors with early and frequent feedback on defects. Participants have adequate time to prepare. Reviews are scheduled with adequate notice. Management supports the review process, for example by incorporating adequate time for review activities in schedules. People-related success factors for reviews include: the right people are involved to meet the review objectives.

These are, for example, people with different skill sets or perspectives who may use the document as a work input. Testers are seen as valued reviewers who contribute to the review and learn about the work product, which enables them to prepare more effective tests and to prepare those tests earlier. Participants dedicate adequate time and attention to detail. Reviews are conducted in small chunks, so that reviewers do not lose concentration during individual review and/or the review meeting, when it is held. Defects found are acknowledged, appreciated, and handled objectively. The meeting is well managed, so that participants consider it a valuable use of their time.

The review is conducted in an atmosphere of trust: everyone knows that the main objective is to increase the quality of the document under review, and the outcome will not be used for the evaluation of the participants. I have seen companies that calculate the monthly bonus depending on the number of bugs per developer; this is so unrealistic and unfair. Participants avoid body language and behaviors that might indicate boredom, irritation, or hostility toward other participants. Adequate training is provided, especially for the more formal review types such as inspections. And a culture of learning and process improvement is promoted: we should learn from our mistakes, and we should use the metrics collected to improve the overall software process.