
ISTQB CTFL-2018 – 2018: Tool Support For Testing Part 2

  1. Special Considerations for Test Execution Tools

In order to have a smooth and successful implementation, there are a number of things that should be considered when selecting and integrating test execution tools into an organization. As we mentioned before, test execution tools test the software using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits. Test execution tools are used by testers; therefore, any tester who wishes to use a scripting test execution tool will need programming skills to create and modify those scripts. First, let's talk about capture and playback test execution tools. The idea behind capture/playback tools is that while you are running your manual tests, you simply turn on the capture feature of the tool. The tool records everything the tester does: mouse clicks, keystrokes, mouse movements, everything. When done, the tester can save what was recorded and play it back later.

When played back, all the recorded actions, all the tester's actions, will be repeated exactly: mouse clicks, keystrokes, mouse movements, everything. So if the software and its surrounding environment are exactly the same as when we recorded our test, then playback amounts to repeating the test again and again, as many times as you want, and all you have to do is hit play. A captured script is a linear representation, with specific data and actions as part of each script. The concept behind this type of test execution tool breaks down when you try to replay the captured tests and the software, or the surrounding environment of the software, has changed from its original state. Imagine there was a button at a specific location on the screen and you clicked that button.

The tool would save the horizontal and vertical location of your mouse click. If, for any reason, the button changed its location, then when you try to play back the script, your mouse click won't actually hit the button at all, and you would need to either re-record the script or fix it. Also, the script may rely heavily on the circumstances, state, and context of the system at the time the script was recorded. For example, the script may have used a file in a specific folder; if that file moved, or if the folder moved, the test will break. The test input information is also hard-coded, meaning that the test input data is embedded inside the individual script for each test.
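This fragility can be sketched in a few lines of plain Python. Everything below is invented for illustration (no real GUI library is used): a captured script is modeled as a linear log of coordinate-based events with the input data embedded inline, and a click "hits" only if a button still sits at exactly the recorded coordinates.

```python
# A captured script is just a linear log of low-level events,
# with coordinates and input data hard-coded at record time.
recorded_script = [
    ("click", 120, 345),   # position of the "Login" button at record time
    ("type", "alice"),     # hard-coded test input
    ("type", "s3cret"),
    ("click", 120, 380),   # position of the "Submit" button at record time
]

def replay(script, buttons):
    """Replay blindly: a click 'hits' only if a button is still
    at exactly the recorded coordinates."""
    hits, misses = [], []
    for event in script:
        if event[0] == "click":
            _, x, y = event
            target = buttons.get((x, y))
            (hits if target else misses).append((x, y, target))
    return hits, misses

# Same UI as at record time: every click lands.
original_ui = {(120, 345): "Login", (120, 380): "Submit"}
hits, misses = replay(recorded_script, original_ui)
assert len(misses) == 0

# The "Submit" button moved 10 pixels: the script silently misses it.
changed_ui = {(120, 345): "Login", (120, 390): "Submit"}
hits, misses = replay(recorded_script, changed_ui)
assert len(misses) == 1
```

The same breakage happens with hard-coded file paths or hard-coded input data: the script encodes one exact snapshot of the system, so any drift between that snapshot and reality invalidates it.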

So it will always use the same data every time you run the script. Any of these problems can be overcome by modifying the scripts, but the effort would often be greater than simply re-recording the script. Also, this approach doesn't scale to large numbers of test scripts. Capture/playback tools can, however, be of great benefit during exploratory testing. Remember exploratory testing, one of the experience-based test design techniques? In exploratory testing, the tester plays with the software until she hits a bug. Sometimes it is very hard for the tester to remember the steps that produced the bug. But if we were using the capture tool, we would have a saved list of the tester's actions, which she can play back to reproduce the bug so we can document and report the defect.

The latest generation of these tools, which takes advantage of smart image-capturing technology, has increased the usefulness of this class of tools, although the generated scripts still require ongoing maintenance as the system's user interface evolves over time. The second type of test execution tool is the data-driven test execution tool, which offers more advanced capability than captured scripts. A data-driven testing approach separates out the test inputs (the data) and usually stores them in an external file, say a spreadsheet. With no hard-coded data, our test script becomes more generic: it can read the input data and execute the same test script with different data every time. Testers who are not familiar with the scripting language can then create the test data for those predefined scripts. A more advanced data-driven technique is one where, instead of using data placed in a spreadsheet, the data is generated at run time using algorithms based on configurable parameters and supplied to the application. For example, a tool may use an algorithm which generates a random user ID or password or something of that kind. The last type we will talk about is keyword-driven test execution tools. Keyword-driven scripts give significantly more benefits than the previous two types we've talked about.
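Before moving on, the data-driven approach just described can be sketched briefly. Everything here is invented for illustration: the `login` function stands in for the system under test, and an inline CSV stands in for the spreadsheet a non-programming tester would maintain.

```python
import csv
import io
import random
import string

# Test data lives outside the script -- here an inline CSV standing in
# for a spreadsheet maintained by testers who don't write code.
test_data = io.StringIO("""username,password,expected
alice,s3cret,success
bob,,failure
carol,wrong-pass,failure
""")

def login(username, password):
    """Hypothetical system under test: accepts a single known account."""
    return "success" if (username, password) == ("alice", "s3cret") else "failure"

# One generic script, executed once per data row.
results = []
for row in csv.DictReader(test_data):
    actual = login(row["username"], row["password"])
    results.append(actual == row["expected"])

assert all(results) and len(results) == 3

# More advanced variant: generate data algorithmically at run time
# from configurable parameters instead of reading it from a sheet.
rng = random.Random(42)  # seeded so the run is reproducible
random_user = "".join(rng.choices(string.ascii_lowercase, k=8))
assert login(random_user, "anything") == "failure"
```

Adding a new test case now means adding a row to the data file, not touching the script, which is exactly the benefit over capture/playback.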

In a keyword-driven testing approach, the spreadsheet contains, besides the test data, keywords describing the actions to be taken (also called action words). Testers, even if they are not familiar with the scripting language, can then define tests using the keywords, which can be tailored to the application being tested. Further details and examples of data-driven and keyword-driven testing approaches are given in the ISTQB Advanced Level Test Automation Engineer syllabus. The above approaches require someone with expertise in the scripting language: testers, developers, or specialists in test automation. Regardless of the scripting technique used, the expected results for each test need to be compared to the actual results from the test, either dynamically while the test is running, or stored for later post-execution comparison. Test automation has become a trend these days, so it's a skill you might consider acquiring if you haven't already done so.
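A minimal sketch of the keyword-driven idea, with all names invented for illustration: each spreadsheet row is a keyword plus its test data, a dispatcher maps keywords to implementation code, and a `check` keyword compares expected against actual results while the test runs.

```python
# Keyword table, as testers might write it in a spreadsheet:
# each row is an action word plus its test data.
keyword_table = [
    ("open_app",   []),
    ("enter_text", ["username", "alice"]),
    ("enter_text", ["password", "s3cret"]),
    ("press",      ["login"]),
    ("check",      ["message", "Welcome alice"]),
]

class App:
    """Hypothetical application driver; a real framework would wrap a GUI or API."""
    def __init__(self):
        self.fields = {}
        self.message = ""
    def open_app(self):
        self.fields.clear()
    def enter_text(self, field, value):
        self.fields[field] = value
    def press(self, button):
        if button == "login" and self.fields.get("password") == "s3cret":
            self.message = f"Welcome {self.fields['username']}"
    def check(self, attr, expected):
        actual = getattr(self, attr)  # compare expected vs. actual result
        assert actual == expected, f"expected {expected!r}, got {actual!r}"

def run(table):
    app = App()
    for keyword, args in table:
        getattr(app, keyword)(*args)  # dispatch keyword -> implementation
    return app

app = run(keyword_table)
assert app.message == "Welcome alice"
```

Only the author of the `App` driver needs scripting skills; testers compose new tests purely by arranging keywords and data in the table.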

  1. Benefits and Risks of Test Automation

There are potential benefits and opportunities with the use of tools in testing, but there are also risks. This is particularly true for test execution tools, whose use is often referred to as test automation.

Benefits. Potential benefits of using tools to support test execution include:

Repetitive work is reduced. Imagine, for example, if you had to run the same test case tens of times to prove that it really isn't working before you create a defect report against it. That is boring and wastes time. Having such a task done by a tool would be a great benefit.

Greater consistency and repeatability. Using the same example, there's a big chance you would make a mistake if you had to run the same test case tens of times. Such mistakes make your work inconsistent and not repeatable. A tool will exactly reproduce what it did before, so each time it is run, the result is consistent.

Objective assessment. If you ask a tester how much coverage they have achieved, they might give you a subjective opinion, or the numbers you want to hear.

Using a tool to measure coverage, for example, will always give you an objective assessment.

Ease of access to information about tests or testing. We mentioned in test monitoring that we will be collecting lots of data. This data needs to be stored, communicated, and maintained; we will surely need tools to do that.

Risks. Although many benefits can be achieved by using tools to support testing activities, many organizations have not achieved the benefits they expected. The ISTQB syllabus seems to warn us of the risks of using a tool by listing more than ten of them. After all, it's a huge investment, and if it doesn't work, it will be a waste of money, time, and effort. Simply purchasing a tool is no guarantee of success or of achieving benefits, just as buying a gym machine for your home doesn't guarantee that you will get fitter if you don't know how to use it.

Risks of using tools include:

Unrealistic expectations for the tool. Unrealistic expectations may be one of the greatest threats to success with tools. I have seen companies that thought purchasing a test automation tool would let them get rid of testers altogether. This is very unrealistic and will never happen. Companies should have clear objectives for what the tool can do, and those objectives should be realistic.

Underestimating the effort for the initial introduction. Having purchased a tool, you will need to train people to use it. There could be resistance from some people, and there will be technical problems to overcome.

Underestimating the effort needed to achieve significant benefits. There's a learning curve to anything you learn. Remember when you first used Microsoft Word, for example, or Excel? When was that?

Like ten years ago, or maybe more. Can you claim that you are a professional user of Microsoft Word? Most of us cannot. Most of us use just a few features of the software, leaving hundreds more that we are not aware of. For example, I remember I once asked my assistant to send an email to 500 users. A couple of days later I asked her if she had sent the emails. She said there were about 300 remaining. I then taught her about the mail merge functionality in Microsoft Word, where we linked Word with an Excel sheet containing the names and email addresses of the users, and voilà: the email was sent to 500 people in just five minutes.

Underestimating the effort to maintain the test assets. Insufficient planning for maintenance of the assets that the tool uses is a strong contributor to tools being abandoned. For example, people forget that they may need to update the test scripts used by the test automation tool every time there's a change in the software or the environment.

Over-reliance on the tool. If you buy the smartest phone, you still need to know how to use it; its smartness comes from how good you are at using it. The same goes for testing tools. The tool can help, but it doesn't replace the intelligence needed to know how best to use it and how to evaluate its current and future uses. The tool will not understand the context or the domain of the application under test as well as you do.

Neglecting interoperability between critical tools. The interrelationships between tools are also important.

For example, if you purchased a design specification tool that doesn't work with your test execution tool, then you would need some effort to convert the output of the design specification tool into something the test execution tool can understand. That's extra effort. We also have risks such as the vendor going out of business, poor vendor support, and the vendor's inability to support a new platform. Now imagine that the vendor from whom you bought your tool went out of business, or decided not to support a specific new platform. All of these are examples of very bad customer service, and you would be stuck with whatever tool you have in hand for some time.

Neglecting version control. Similarly, if you cannot upgrade your tool to a new version, you will miss the new features and bug fixes in that version.

Risk of suspension of an open source or free tool project. If the tool is an open source or free tool and it is decided to suspend that tool's development, you would also be stuck, and it would take a huge effort to move to another tool. I remember we had to spend days moving our data when Microsoft decided to discontinue Visual SourceSafe, a configuration management tool.

And last, there may be no clear ownership of the tool, for example for monitoring it, applying updates, and so on.

  1. Effective Use of Tools

Main principles for tool selection. We have seen from the previous lecture that there are many risks in using tools to support the testing activity. That's why we need to be very careful when introducing a tool into our company, to avoid falling into any of the many traps we mentioned before. To get an idea of what we are trying to achieve here, imagine if I gave the smartest phone on the market to an illiterate man. Would he benefit from it? So it doesn't matter how good the tool is; we still need to be ready for it. The place to start when introducing a tool into an organization is not with the tool itself, but with the organization. In order for a tool to provide benefit, it must match a need within the organization and solve that need in a way that is both effective and efficient. The tool should help to build on the strengths of the organization and address its weaknesses.

The organization needs to be ready for the changes that will come with the new tool. If the current testing practices are not good and the organization is not mature enough, then it is generally more cost-effective to improve the testing practices rather than to try to find tools to support poor practices. Automating chaos just gives faster chaos. Of course, we can sometimes improve our own processes in parallel with introducing a tool to support those practices, and we can pick up some good ideas for improvement from the way the tools work. However, be aware that the tool should not take the lead; it should provide support for what your organization defines. The main considerations in selecting a tool for an organization include:

  1. Assessment of organizational maturity, strengths, and weaknesses. For example, you should ask yourself: will we be ready to change in order to adopt the new tool?
  2. Identification of opportunities for an improved test process supported by tools.
  3. Understanding of the technologies used by the test object(s), in order to select a tool that is compatible with that technology.
  4. Awareness of the build and continuous integration tools already in use within the organization, in order to ensure tool compatibility and integration.
  5. Evaluation of the tool against clear requirements and objective criteria. You should ask yourself: do we have clear requirements and objectives for the new tool? Do we know exactly what we need to achieve?
  6. Consideration of whether or not the tool is available for a free trial period, and for how long.
  7. Evaluation of the vendor (including training, support, and commercial aspects) or of support for non-commercial (for example, open source) tools.
  8. Identification of internal requirements for coaching and mentoring in the use of the tool. Do we know how we will coach our employees to use the tool? Do we know how we will monitor the usage of the tool to make sure it meets its objectives?
  9. Evaluation of training needs, considering the current test team's test automation skills. Do we have a training plan to train our employees to use the tool?
  10. Consideration of the pros and cons of various licensing models (for example, commercial or open source).
  11. Estimation of a cost-benefit ratio based on a concrete business case: will the benefits outweigh the cost of the tool?
  12. A proof of concept: using the test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure, or to identify changes needed to the infrastructure to use the tool effectively.

Pilot projects for introducing a tool into an organization. Once a tool is purchased (and that's very important to understand: once the tool is already purchased), we should gradually introduce the tool to the different teams in the organization, and that should start with a pilot project. A pilot project means using the tool on a very small scale, with sufficient time to explore different ways of using it. A pilot project should have the following objectives:

- Gaining in-depth knowledge about the tool, understanding both its strengths and weaknesses.
- Evaluating how the tool fits existing processes and practices, and determining what would need to change.
- Deciding on standard ways of using, managing, storing, and maintaining the tool and the test assets. For example: deciding on naming conventions for files and tests, selecting coding standards, creating libraries, and defining the modularity of test suites.
- Assessing whether the benefits will be achieved at reasonable cost.
- Understanding the metrics that you wish the tool to collect and report, and configuring the tool to ensure these metrics can be captured and reported.

Last, we will talk about success factors for tools.

As we have said, success is not guaranteed or automatic when implementing a testing tool, but many organizations have succeeded. Here are some factors in the evaluation, implementation, deployment, and ongoing support of tools within an organization that have contributed to success:

- Rolling the tool out incrementally, after the pilot, to the rest of the organization.
- Adapting and improving processes, testware, and tool artifacts to get the best fit and balance between them and the use of the tool.
- Providing training, coaching, and mentoring for new users.
- Defining and communicating guidelines for the use of the tool, based on what was learned in the pilot project.
- Implementing a way to gather usage information from actual use, and implementing a continuous improvement mechanism for how the tool is used.
- Monitoring tool use and the benefits achieved.
- Providing support to the test team for a given tool.
- Gathering lessons learned from all teams.

It's also important to ensure that the tool remains technically and organizationally integrated into the software development lifecycle, which may involve separate organizations responsible for operations and/or third-party suppliers. Thank you.