Network Suite Jobs: Keyword-Test selection
Hi, I use TestComplete and TestExecute on a remote machine. My TestComplete workspace contains many keyword tests. I want to add multiple tests to the jobs in the Network Suite, but I can only select one test at a time, and it takes a while until each test is loaded from the remote TestExecute into the Network Suite. Why does it take so long to choose and load a test for a Network Suite job? I want to quickly add multiple tests to the job list without waiting after each new test is added. It should be possible to drag and drop keyword tests from TestComplete into the Network Suite job list. Furthermore, I don't understand the behavior of the remote test selection window: there is no directory hierarchy like in the project workspace, so I have to scroll through a large list of tests, and worse, the list isn't sorted alphabetically. Finding the right test in a list of over 200 keyword tests without a directory structure or a sorting function is a time-killing job :) I hope you will find a better way to do this. Long live software ergonomics!

QMetry MCP: Bringing Control, Coverage, and Traceability to AI-Driven QA
As AI automates more QA work, it becomes harder to track what it creates and runs across tools. QMetry addresses this by providing the control and observability needed to keep AI-assisted and AI-driven test management aligned with requirements, execution, and release decisions. This foundation becomes increasingly important as QA workflows grow more automated and distributed across tools. Through the QMetry MCP server, users can interact with QMetry, Jira, and GitHub to generate tests, organize execution, and maintain traceability through simple prompts, without jumping between tools or doing manual linking. These workflows can be triggered from MCP-compatible clients such as IDEs or tools like Claude Desktop, while QMetry ensures that all test assets, results, and relationships remain visible, controlled, and auditable.

What the QMetry MCP server enables in practice

The QMetry MCP server exposes core QMetry test management capabilities as MCP tools. This allows AI assistants to interact with QMetry programmatically, using the same permissions and project context as a human user. Instead of AI generating artifacts in isolation, test cases, execution data, and their relationships are created and maintained directly in QMetry, as part of an AI-driven test management platform. In practice, this unlocks a wide range of QA use cases. Here are a few examples of what teams can do with QMetry MCP:

Turn requirements into executable tests

AI assistants can fetch epics or stories with acceptance criteria from Jira and generate test cases directly in QMetry, including detailed steps and expected outcomes. These test cases are automatically linked back to their requirements and can be immediately planned for execution. This reduces manual test authoring while ensuring coverage remains visible and traceable.
Automate test planning, execution, and defect creation

Once test cases exist, the QMetry MCP server can organize them into test suites based on release cycle and platform, track execution results, and update statuses in bulk. When tests fail, defects can be created in Jira and linked back to the failed executions and original requirements, keeping QA and development workflows connected without manual handoffs.

Understand test coverage based on code changes

The QMetry MCP server can also support workflows that start from code. By analyzing GitHub commits or pull requests, MCP can identify the related ticket IDs, validate whether those requirements exist in QMetry, and check whether they already have test coverage. If gaps are found, teams can address missing coverage before testing or release activities begin.

Plan regression testing based on impact

Rather than relying on static or overly broad regression suites, MCP-driven workflows can help identify which areas of the system are affected by recent changes. Existing test cases can be reused where possible, and relevant test suites can be created or recommended, helping teams focus regression testing where risk is highest.

Get end-to-end traceability and release visibility

Because all test cases, executions, and defects are created and linked through the QMetry MCP server, traceability is established as part of the workflow, not as a separate reporting step. Teams can review coverage, execution status, and outstanding defects in context and use this information to assess release readiness without manually stitching data across tools.

As AI accelerates the speed of test creation and execution, QMetry becomes the critical layer that keeps everything connected and controlled. It doesn't just manage tests; it ensures visibility, traceability, and governance across an increasingly distributed, AI-driven QA landscape.
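Under the MCP standard, workflows like the ones above come down to a client invoking named server tools via JSON-RPC 2.0 "tools/call" requests. The tool name and arguments below are hypothetical, purely to illustrate the request shape an MCP client (such as Claude Desktop or an IDE) would send to a server like QMetry's; the actual QMetry MCP tool names may differ:

```python
import json


def make_tools_call(request_id, tool_name, arguments):
    """Build an MCP 'tools/call' JSON-RPC 2.0 request payload."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# Hypothetical tool name and arguments, for illustration only.
request = make_tools_call(
    1,
    "create_test_case",
    {"project": "QA", "summary": "Verify login with valid credentials"},
)
print(json.dumps(request, indent=2))
```

Because every action flows through a request like this, the server can apply the same permissions and project context it would for a human user, which is what makes the audit trail described above possible.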
With the QMetry MCP server, teams can leverage this functionality directly from the tools they already use, without switching contexts, making QMetry not only the system of record but the backbone of modern, AI-powered test management. Learn more and get started with the SmartBear MCP server.

See it in action

See a couple of short demos that showcase what the QMetry MCP server can unlock in real QA workflows. We're rapidly expanding MCP-driven use cases and unlocking new ways to automate more of the testing lifecycle, while keeping everything visible and traceable in QMetry.

Demo 1: Requirements-to-execution traceability

Example: Using the QMetry MCP server to turn Jira requirements into executable tests with end-to-end traceability. A QA team can pull an epic from Jira, generate detailed test cases in QMetry, automatically link them to the synced requirement, and organize them into a release- and platform-specific test suite ready for execution. The output is a complete execution and traceability view, showing test status and linked defects in context across the epic, requirement, test cases, suite, executions, and Jira bugs, without manual linking or cross-tool stitching.

Demo 2: Release health and readiness

Example: Using the QMetry MCP server to generate a release health and readiness report. A QA manager can automatically validate release readiness against predefined quality gates, including planned scope completion, test coverage, test execution status, defect severity, and automation results. The output is a clear release decision, supported by full traceability across Jira, test cases, executions, and defects, all exportable into a shareable report.

Try out QMetry today or sign up for a quick demo to see more MCP use cases.

SmartBear Talks | Meet the ReadyAPI Team - Alianna Inzana
I'm sure you want to know more about ReadyAPI! We continue introducing the members of the ReadyAPI team to you, so you can learn more about the people who are developing the best tool for API testing: ReadyAPI! Today, we will talk with Alianna Inzana, Senior Director of Product Development for API and Virtualization. Ali started her career as an analyst, where she was happy to work with numbers before she dove into the world of APIs. Check out what her passion is these days, along with what we can expect from ReadyAPI in 2021.

Alianna Inzana - Senior Director of Product Development for API and Virtualization at SmartBear

I hope you found this video informative! Feel free to post your questions under this post, in case you have any. More video interviews are available under the Interview label. Don't forget to subscribe to the SmartBear Community Matters blog to get updates about all the great content we post here.

Using context parameter in Request Authorization
Hi, I am currently building a test case in which I use scripts to parse the response body of a few REST API call test steps.

1. I create a user: POST /user/signup.
2. I log in with the created user: POST /user/login (the response body contains a JWT token that I want to use as Authorization in the following API calls).
3. I parse the token with the following script:

```groovy
import groovy.json.JsonSlurper

// Read the raw response of the login step and extract the JWT
responseContent = testRunner.testCase.getTestStepByName("POST User Login").getPropertyValue("response")
jsonParser = new JsonSlurper().parseText(responseContent)
context.JWTToken = jsonParser.token
log.info("Token in context: " + context.JWTToken)
```

The token is correctly written by the log.info call at the end of the script, so it is valid and stored as a context variable.

4. I want to create a product: POST /products. This API call needs a valid JWT to succeed, so I want to pass my stored context.JWTToken as the value of the Access Token. It doesn't work, and I would gladly like some help on how to make it work. I also tried: ${context.JWTToken}; context.JWTToken; JWTToken; ${=JWTToken}; ${JWTToken}

Thank you
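One approach worth trying here (a sketch, not a confirmed fix for this exact setup): store the token as a test case property rather than only on `context`, since property expansions such as `${#TestCase#JWTToken}` can be referenced from other test steps, whereas `context` variables generally are not available for expansion outside the script that set them. Assuming the same step name as above:

```groovy
import groovy.json.JsonSlurper

// Parse the login response as before
def responseContent = testRunner.testCase.getTestStepByName("POST User Login").getPropertyValue("response")
def token = new JsonSlurper().parseText(responseContent).token

// Store the token as a test case property so later steps can
// reference it via property expansion
testRunner.testCase.setPropertyValue("JWTToken", token)
log.info("Stored JWTToken test case property")
```

Then, in the Access Token field (or an Authorization header) of the POST /products step, reference it with the property expansion `${#TestCase#JWTToken}`. This snippet assumes the ReadyAPI/SoapUI Groovy runtime objects (`testRunner`, `log`), so it only runs inside a Groovy script test step.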