Which of the following answers describes the LEAST relevant concern in selecting suitable test automation tools for a test automation project?
A. What is the degree of technical knowledge and skills within the test team to implement code-based test automation for the project (e.g., in terms of programming and design patterns)?
B. In the case of open-source test automation tools, are these tools released under permissive or restrictive licenses, and, if applicable, is it specified whether they can be modified and by whom?
C. Has the test team been formed with the different personalities of its members in mind, to ensure that the interaction between them is effective in achieving the objectives of the test automation project?
D. In the case of commercial test automation tools, what factors determine the licensing costs of these tools (e.g., in terms of the maximum number of users supported and whether the license type is fixed or floating)?
Which of the following information in API documentation is LEAST relevant for implementing automated tests on that API?
A. Release notes/change logs on past changes to the API
B. Details about the parameters accepted by each API endpoint
C. The authentication mechanisms required to access the API
D. Details about the format of the API responses
As a TAE, you are evaluating a test automation tool to automate some UI tests for a web app. The automated tests will first locate the required HTML elements on the web page using their corresponding identifiers (locators), then perform actions on those elements, and finally check the presence of any expected text for an HTML element. These tests are independent of each other and are organized into a test suite that must be run every night against the most recent build of the web app. There is a high risk that the web app will crash while running some automated tests.
Based only on the given information, which of the following is your MOST important concern related to the evaluation of the test automation tool?
A. Does the test automation tool provide a feature to specify automated tests in a descriptive meta-language that is not directly executable on the web app?
B. Does the test automation tool offer a feature to restore the web app, recover from a failed test, skip such tests, and resume with the next one in the suite?
C. Does the test automation tool offer a feature to create a mock server that simulates the behavior of a real API by accepting requests and returning responses?
D. Does the test automation tool support a licensing scheme that allows accessing different feature sets?
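The recover-and-resume behavior described in option B can be sketched as a minimal suite runner: when a test fails (e.g., because the web app crashed), the runner restores the app and moves on to the next independent test. All function and test names below are hypothetical illustrations, not the API of any real tool.

```python
# Minimal sketch of a suite runner with recover-and-resume behavior.
# All names (restore_app, run_suite, the sample tests) are hypothetical.

def restore_app():
    """Stub standing in for restarting/restoring the crashed web app."""
    return True

def run_suite(tests):
    """Run each independent test; on failure, restore the app and continue."""
    results = {}
    for name, test in tests:
        try:
            test()
            results[name] = "pass"
        except Exception:
            results[name] = "fail"
            restore_app()  # recover so the next test starts from a clean state
    return results

def ok():
    pass

def crashes():
    raise RuntimeError("app crashed")

results = run_suite([("t1", ok), ("t2", crashes), ("t3", ok)])
# t3 still runs even though t2 failed, so a nightly run can complete
```

Because the tests are independent, skipping a failed test and continuing does not invalidate the remaining results.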
The last few runs of a suite of automated keyword-driven tests on an SUT were never completed. The test at which the run aborted differed between runs. Currently, it is not possible to identify the root cause of these aborts; it can only be determined, by analyzing the SUT's log files, that test execution aborted when exceptions (e.g., NullPointer, OutOfMemory) occurred on the SUT. Test execution log files are currently generated, in HTML format, by the TAS as follows: all expected logging data is logged for each keyword in intermediate log files. This data is then inserted into the final log file only for keywords that fail, while only a configurable subset of that data is logged for keywords that execute successfully.
Which of the following actions (assuming it is possible to perform all of them) would you take FIRST to help find the root cause of the aborts?
A. Log the stack trace and the amount of memory available to the SUT at the start and end of each test in the suite, in the SUT log files
B. Split the generated log file into smaller parts stored as external files that are loaded into the browser transparently when needed
C. Log all expected logging data in the final test execution log file, not only for keywords that fail, but also for keywords that execute successfully
D. Use appropriate colors to visually highlight the different types of information in the test execution log files
A new TAS allows the implementation of automated data-driven test scripts. All the tasks planned for the initial deployment of this TAS, aimed at installing and configuring the TAS components and provisioning the infrastructure, will be performed manually by a dedicated, specialized team. This TAS is expected to be deployed in the future in other similar environments. As a TAE, you see a risk that the correct and reproducible deployment of the TAS cannot be guaranteed.
Which of the following options is BEST suited for mitigating this risk?
A. Nothing needs to be done: since the team that will manually perform the specified tasks is specialized, it will not make mistakes and will therefore ensure a correct and reproducible deployment.
B. Partition the data tables containing test data used by data-driven test scripts into smaller data tables, using an appropriate logical criterion, to make them more manageable.
C. Review the data-driven test scripts to better organize the test libraries, adding test functions containing identical sequences of actions commonly implemented in a relevant number of scripts.
D. Try to automate most of the tasks related to the installation and configuration of the TAS components and those related to the provisioning of the infrastructure.
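The idea behind option D, expressing deployment tasks as code so that they run identically in every environment, can be sketched as an ordered, repeatable pipeline of steps. Step names and actions below are hypothetical placeholders; in practice this is done with infrastructure-as-code and configuration-management tooling.

```python
# Sketch: deployment tasks expressed as code so they run identically every
# time the TAS is deployed. Step names and actions are hypothetical.

def deploy(steps, log):
    """Execute deployment steps in a fixed order, recording each one."""
    for name, action in steps:
        action()
        log.append(name)  # record completed steps for auditability
    return log

executed = deploy(
    [
        ("install_tas_components", lambda: None),
        ("configure_tas", lambda: None),
        ("provision_infrastructure", lambda: None),
    ],
    [],
)
```

Because the steps are scripted rather than performed by hand, redeploying the TAS in a similar environment repeats exactly the same sequence.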
Which of the following statements about how test automation is applied across different software development lifecycle models is TRUE?
A. In Agile software development, automated regression test suites sometimes grow so large that they can become difficult to maintain, and thus it becomes crucial to invest in test automation at multiple test levels
B. In a Waterfall model, automated tests are usually executed only during the last phase of the development lifecycle, but their implementation occurs in the early stages
C. In Agile software development, regardless of context (e.g., type of application to be developed, tools available), test automation must be based on the test automation distribution known as the test pyramid model
D. Unlike Agile software development, where automated unit tests are written by developers, often in a test-first fashion, in a V-model automated unit tests are written by testers as part of unit testing
You are currently conducting a Proof of Concept (PoC) aimed at selecting a tool that will be used for the development of a TAS. This TAS will exclusively be used by one team within your organization to implement automated UI-level test scripts for two web apps. The two tools selected for the PoC use JavaScript/TypeScript to implement the automated test scripts and offer capture and playback capabilities. Three test cases for each of the two web apps were selected to be automated during the PoC. The PoC will compare these two tools in terms of their effectiveness in recognizing and interacting with UI widgets exercised by the test cases, to quickly determine whether test automation is possible and which tool is better.
Which of the following TAF is BEST suited for conducting the PoC?
A. A one-layer TAF (‘test scripts’)
B. A two-layer TAF (‘test scripts’, ‘test libraries’)
C. A three-layer TAF (‘test scripts’, ‘business logic’, ‘core libraries’)
D. A layered TAF with more than three layers
Which of the following statements refers to a typical advantage of test automation?
A. Automated tests can determine whether actual results match expected results, even for non-machine-interpretable results
B. On average, automated tests written at the API level are likely to run faster than automated tests written at the UI level
C. Artificial intelligence can be used to help identify redundant tests within large, long-running automated regression test suites
D. Automated tests can allow defects to be detected earlier than manual tests because their execution times can be shorter
You have been tasked with adding the execution of build verification tests to the current CI/CD pipeline used in an Agile project. The goal of these tests is to verify the stability of daily builds and ensure that the most recent changes have not altered core functionality.
Currently, the first activity performed as part of this pipeline is static source code analysis.
At which of the following stages in the pipeline would you add the execution of these build verification (smoke) tests?
A. As a first activity, before performing static source code analysis and before generating the new build
B. After performing static analysis on the source code and before generating the new build
C. After deploying the new build to the test environment and before performing more extensive testing
D. As a final activity, immediately before releasing the new build into production
Which of the following is the BEST example of how static analysis tools can help improve the test automation code quality in terms of security?
A. Static analysis tools do not generate false positives when attempting to detect security vulnerabilities within test automation code
B. Static analysis tools can help detect the presence of repeated instances of code within test automation code
C. Static analysis tools can help detect hard-coded credentials that expose sensitive information within test automation code
D. Static analysis tools can ensure there are no security vulnerabilities within test automation code
To improve the maintainability of test automation code, it is recommended to adopt design principles and design patterns that allow the code to be structured into:
A. highly coupled and loosely cohesive modules
B. highly coupled and highly cohesive modules
C. loosely coupled and highly cohesive modules
D. loosely coupled and loosely cohesive modules
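The structure named in option C can be illustrated with a hypothetical sketch: each module has one focused, cohesive responsibility, and modules depend on each other only through a narrow, injected interface rather than on each other's internals. All class names here are made up for illustration.

```python
# Highly cohesive: each class has exactly one focused responsibility.
class Locator:
    """Knows only how to find UI elements (hypothetical example)."""
    def find(self, name):
        return f"<element:{name}>"

class Actions:
    """Knows only how to act on elements.

    Loosely coupled: Actions depends on any object offering a `find`
    method, not on the Locator class itself (dependency injection), so
    either module can be changed or replaced independently.
    """
    def __init__(self, locator):
        self.locator = locator

    def click(self, name):
        return f"clicked {self.locator.find(name)}"

result = Actions(Locator()).click("submit")
```

Swapping in a different locator implementation (e.g., one backed by a real driver) would require no change to `Actions`, which is the maintainability benefit the question is after.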
An automated test case that should always pass sometimes passes and sometimes fails intermittently (non-deterministic behavior) when executed in the same test environment, even though no code (i.e., neither the SUT code nor the test automation code) has been changed.
Which of the following statements about the root cause of this non-deterministic behavior is TRUE?
A. The specified root cause is a race condition that can be identified by also analyzing the log files of the test case, the SUT, and the TAF
B. Determining the specified root cause may require, in addition to the TAE, the support of others such as developers and system engineers
C. The specified root cause must be in the instability of the test environment, since no code has been changed
D. Determining the specified root cause is certainly easier than if the automated test always fails (deterministic behavior)
In User Acceptance Testing (UAT) for a new SUT, in addition to the manual tests performed by the end-users, automated tests are performed that focus on the execution of repetitive and routine test scenarios.
In which of the following environments are all these tests typically performed?
A. Build environment
B. Integration environment
C. Preproduction environment
D. Production environment
Automated tests at the UI level for a web app adopt an asynchronous waiting mechanism that allows them to synchronize their test steps with the app, so that each step is executed correctly and at the right time, only when the app is ready and has processed the previous step (i.e., when there are no pending timeouts or asynchronous requests). In this way, the tests automatically synchronize with the app’s web pages. The same initialization tasks to set test preconditions are implemented as test steps in all tests. Regarding the pre-processing (Setup) features defined at the test suite level, the TAS provides both a Suite Setup (which runs exactly once when the suite starts) and a Test Setup (which runs at the start of each test case in the suite).
Which of the following recommendations would you provide for improving the TAS (assuming it is possible to perform all of them)?
A. Adopt a manual synchronization with the app’s web pages using hard-coded waits instead of the current automatic synchronization
B. Implement the initialization tasks aimed at setting the preconditions of the tests within the Test Setup feature at the test suite level
C. Adopt a manual synchronization with the app’s web pages using dynamic waits via polling instead of the current automatic synchronization
D. Implement the initialization tasks aimed at setting the preconditions of the tests within the Suite Setup feature at the test suite level
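The dynamic wait via polling mentioned in option C can be sketched as a loop that re-checks a readiness condition at short intervals until it holds or a timeout expires, in contrast to a hard-coded `sleep` of fixed length. The function and condition below are hypothetical illustrations.

```python
import time

def poll_until(condition, timeout=2.0, interval=0.05):
    """Re-check `condition` until it holds or the timeout expires.

    Returns True as soon as the condition holds, False on timeout.
    A hard-coded wait would instead always sleep for the full duration.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

state = {"ready": False}

def becomes_ready():
    state["ready"] = True  # simulate the app finishing its async work
    return state["ready"]

assert poll_until(becomes_ready)
```

A polling wait returns as soon as the page is ready, so it is both faster and more robust than a fixed-length wait chosen by guesswork.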
Which of the following statements about a test progress report produced for an automated test suite is TRUE?
A. The test progress report should indicate, for each test in the suite, the timestamps related to the test steps
B. The content of the test progress report should not be affected by the stakeholders to whom the report is intended
C. The test progress report should indicate the test environment in which the tests were performed
D. The test progress report should indicate, for each test in the suite, the start and end timestamps of the test
You are evaluating the best approach to implement automated tests at the UI level for a web app. Specifically, your goal is to allow test analysts to write automated tests in tabular format, within files that encapsulate logical test steps related to how a user interacts with the web UI, along with the corresponding test data. These steps must be expressed using natural language words that represent the actions performed by the user on the web UI. These files will then be interpreted and executed by a test execution tool.
Which of the following approaches to test automation is BEST suited to achieve your goal?
A. Test-driven development
B. Keyword-driven testing
C. Data-driven testing
D. Linear scripting
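Keyword-driven testing (option B) can be sketched as an interpreter that maps natural-language keywords in tabular rows to executable actions, with the test data carried alongside each keyword. The keywords, table rows, and action implementations below are all hypothetical.

```python
# Each row: (keyword, arguments), as a test analyst might write in a table.
test_table = [
    ("open page", ["login"]),
    ("enter text", ["username", "alice"]),
    ("click button", ["submit"]),
]

trace = []

# Keyword library: maps natural-language words to implementing functions.
keywords = {
    "open page": lambda page: trace.append(f"opened {page}"),
    "enter text": lambda field, value: trace.append(f"{field}={value}"),
    "click button": lambda name: trace.append(f"clicked {name}"),
}

def execute(table):
    """Interpret and run a tabular keyword-driven test."""
    for keyword, args in table:
        keywords[keyword](*args)

execute(test_table)
```

The test analysts only edit the table; the keyword implementations, maintained separately, are what the test execution tool actually runs.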
A suite of automated test cases was run multiple times on the same release of the SUT in the same test environment. Consider analyzing a test histogram that shows the distribution of test results (pass, fail, etc.) for each test case across these runs.
Which of the following potential issues is MOST likely to be identified as a result of such an analysis?
A. Outliers in test execution times
B. Security vulnerabilities in automated test cases
C. Unstable automated test cases
D. Maintainability issues in automated test cases
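The analysis described can be sketched as counting, per test case, the distribution of results across identical runs: a test with mixed pass/fail outcomes on the same SUT release in the same environment is flagged as unstable (flaky). The run data below is hypothetical.

```python
from collections import Counter

# Results of the same suite across several runs of the same SUT release
# in the same environment (hypothetical data).
runs = [
    {"t1": "pass", "t2": "pass", "t3": "fail"},
    {"t1": "pass", "t2": "fail", "t3": "fail"},
    {"t1": "pass", "t2": "pass", "t3": "fail"},
]

def histogram(runs):
    """Build per-test result distributions across all runs."""
    hist = {}
    for run in runs:
        for test, result in run.items():
            hist.setdefault(test, Counter())[result] += 1
    return hist

def unstable(hist):
    """A test with more than one distinct outcome across identical runs
    is non-deterministic, hence unstable."""
    return sorted(t for t, counts in hist.items() if len(counts) > 1)

flaky = unstable(histogram(runs))
```

Here `t1` always passes and `t3` always fails (both deterministic), while `t2` alternates and is therefore the one the histogram analysis would surface.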
Which of the following aspects of ‘design for testability’ is MOST directly associated with the need to define precisely which interfaces are available in the SUT for test automation at different test levels?
A. Autonomy
B. Architecture transparency
C. Controllability
D. Observability
Which of the following practices can be used to specify the active (i.e., actually available) features for each release of the SUT and determine the corresponding automated tests that must be executed for a given release?
A. Feature-driven development
B. The use of feature files
C. Test-driven development
D. The use of feature toggles
A SUT (SUT1) is a client-server system based on a “thin client”. The client is primarily a display and input interface, while the server provides almost all the resources and functionality of the system. Another SUT (SUT2) is a client-server system based on a “fat client” that relies little on the server and provides most of the resources and functionality of the system. A given TAS is used to implement automated tests on both SUT1 and SUT2. The main objective of the TAS is to cover as many system functionalities as possible through automated tests executed as fast as possible.
Which of the following statements about the automation solution is BEST in this scenario?
A. The TAS should support mainly client-side automation for both SUT1 and SUT2
B. The TAS should support mainly client-side automation for SUT1 and server-side automation for SUT2
C. The TAS should support mainly server-side automation for both SUT1 and SUT2
D. The TAS should support mainly server-side automation for SUT1 and client-side automation for SUT2
Consider a TAS implemented to perform automated testing on native mobile apps at the UI level, where the TAF implements a client-server architecture. The client runs on-premise and allows creation of automated test scripts using TAF libraries to recognize and interact with the app’s UI objects. The server runs in the cloud as part of a PaaS (Platform as a Service) service, receiving commands from the client, translating them into actions for the mobile device, and sending the results to the client. The cloud platform hosts several mobile devices dedicated for use by this TAS. The device on which to run test scripts/test suites is specified at run time. You are currently verifying whether the test automation environment and all other TAS/TAF components work correctly.
Which of the following activities would you perform to achieve your goal?
A. Manage the infrastructure that hosts the server, including hardware, software updates, and security patches
B. Check whether the references to the device on which the given test scripts/test suites will be executed are correctly hard-coded within these test scripts/test suites
C. Check whether the TAF libraries that the test scripts will use to recognize and interact with the app’s UI objects (widgets) function as expected
D. Check whether all test scripts that will be executed by the TAS as part of a given test suite have expected results
A TAS is used to run, in a test environment, a suite of automated regression tests, written at the UI level, on different releases of a web app: all executions complete successfully, always providing correct results (i.e., producing neither false positives nor false negatives). The tests, all independent of each other, consist of executable test scripts based on the flow model pattern, which has been implemented in a three-layer TAF (‘test scripts’, ‘business logic’, ‘core libraries’) by expanding the page object model via the facade pattern. Currently, the suite takes too long to run, and the test scripts are considered too long in terms of LOC (Lines Of Code).
Which of the following recommendations would you provide for improving the TAS (assuming it is possible to perform all of them)?
A. Modify the TAF so that test scripts are based on the page object model, rather than the flow model pattern
B. Implement a mechanism to automatically reboot the entire web app in the event of a crash
C. Split the suite into sub-suites and run each of them concurrently on different test environments
D. Modify the architecture of the SUT to improve its testability and, if necessary, the TAA accordingly
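The page object model mentioned in option A can be sketched as follows: test scripts talk only to page classes, and only the page classes know the UI locators, which keeps scripts short and isolates them from UI changes. The driver is stubbed and all names are hypothetical.

```python
class FakeDriver:
    """Stub standing in for a real browser driver (hypothetical)."""
    def __init__(self):
        self.actions = []

    def click(self, locator):
        self.actions.append(("click", locator))

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

class LoginPage:
    """Page object: encapsulates the locators and interactions of one page,
    so test scripts never reference locators directly."""
    USER = "id=username"
    PASS = "id=password"
    SUBMIT = "id=submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASS, password)
        self.driver.click(self.SUBMIT)

# A test script shrinks to a single business-level call.
driver = FakeDriver()
LoginPage(driver).login("alice", "secret")
```

If a locator changes, only `LoginPage` is updated; every test script that calls `login` stays untouched, which addresses the excessive LOC in the scenario.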
In a first possible implementation, the automated test scripts within a suite locate and interact with elements of a web UI indirectly through the browsers using browser-specific drivers and APIs, provided by an automated test tool used as part of the TAS. In an alternative implementation, these test scripts locate and interact with elements of the same web UI directly at the HTML level by accessing the DOM (Document Object Model) and internal JavaScript code. The first possible implementation:
A. has a lower level of intrusion than the alternative implementation, and therefore its test scripts are less likely to produce false positives
B. has a higher level of intrusion than the alternative implementation, and therefore its test scripts are less likely to produce false positives
C. has a lower level of intrusion than the alternative implementation, and therefore its test scripts are more likely to produce false positives
D. has the same level of intrusion as the alternative implementation, and therefore the risk of test scripts producing false positives is the same in both cases
A release candidate of a SUT, after being fully integrated with all other necessary systems, has successfully passed all required functional tests (90% were automated tests and 10% were manual tests). Now, it is necessary to perform reliability tests aimed at evaluating whether, under certain conditions, that release will be able to guarantee an MTBF (Mean Time Between Failures) in the production environment higher than a certain threshold (expressed in CPU time).
Which of the following test environments is BEST suited to perform these reliability tests?