QMetry MCP: Bringing Control, Coverage, and Traceability to AI-Driven QA
As AI automates more QA work, it becomes harder to track what it creates and runs across tools. QMetry addresses this by providing the control and observability needed to keep AI-assisted and AI-driven test management aligned with requirements, execution, and release decisions. This foundation becomes increasingly important as QA workflows grow more automated and distributed across tools.

Through the QMetry MCP server, users can interact with QMetry, Jira, and GitHub to generate tests, organize execution, and maintain traceability through simple prompts, without jumping between tools or doing manual linking. These workflows can be triggered from MCP-compatible clients such as IDEs or tools like Claude Desktop, while QMetry ensures that all test assets, results, and relationships remain visible, controlled, and auditable.

What the QMetry MCP server enables in practice

The QMetry MCP server exposes core QMetry test management capabilities as MCP tools. This allows AI assistants to interact with QMetry programmatically, using the same permissions and project context as a human user. Instead of AI generating artifacts in isolation, test cases, execution data, and their relationships are created and maintained directly in QMetry, as part of an AI-driven test management platform.

In practice, this unlocks a wide range of QA use cases. Here are a few examples of what teams can do with QMetry MCP:

Turn requirements into executable tests

AI assistants can fetch epics or stories with acceptance criteria from Jira and generate test cases directly in QMetry, including detailed steps and expected outcomes. These test cases are automatically linked back to their requirements and can be immediately planned for execution. This reduces manual test authoring while ensuring coverage remains visible and traceable.
Automate test planning, execution, and defect creation

Once test cases exist, the QMetry MCP server can organize them into test suites based on release cycle and platform, track execution results, and update statuses in bulk. When tests fail, defects can be created in Jira and linked back to the failed executions and original requirements, keeping QA and development workflows connected without manual handoffs.

Understand test coverage based on code changes

The QMetry MCP server can also support workflows that start from code. By analyzing GitHub commits or pull requests, MCP can identify the related ticket IDs, validate whether those requirements exist in QMetry, and check if they already have test coverage. If gaps are found, teams can address missing coverage before testing or release activities begin.

Plan regression testing based on impact

Rather than relying on static or overly broad regression suites, MCP-driven workflows can help identify which areas of the system are affected by recent changes. Existing test cases can be reused where possible, and relevant test suites can be created or recommended, helping teams focus regression testing where risk is highest.

Get end-to-end traceability and release visibility

Because all test cases, executions, and defects are created and linked through the QMetry MCP server, traceability is established as part of the workflow, not as a separate reporting step. Teams can review coverage, execution status, and outstanding defects in context and use this information to assess release readiness without manually stitching data across tools.

As AI accelerates the speed of test creation and execution, QMetry becomes the critical layer that keeps everything connected and controlled. It doesn’t just manage tests; it ensures visibility, traceability, and governance across an increasingly distributed, AI-driven QA landscape.
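The coverage-from-code-changes workflow described above hinges on one mechanical step: mapping a commit or pull request to the Jira tickets it mentions. A minimal sketch of that extraction step, assuming tickets follow the usual Jira key format (the pattern and the "QM" project key are illustrative assumptions, not QMetry specifics):

```javascript
// Extract Jira-style issue keys (e.g. "QM-123") from commit messages so
// they can be checked for existing test coverage.
// Assumption: keys match the common PROJECT-123 shape.
function extractTicketIds(commitMessages) {
  const keyPattern = /\b[A-Z][A-Z0-9]+-\d+\b/g; // assumed Jira key format
  const ids = new Set(); // de-duplicate keys mentioned in several commits
  for (const msg of commitMessages) {
    for (const match of msg.match(keyPattern) || []) ids.add(match);
  }
  return [...ids];
}

const ids = extractTicketIds([
  "QM-101: handle empty cart in checkout",
  "Refactor payment retry (QM-101, QM-205)",
]);
console.log(ids); // [ 'QM-101', 'QM-205' ]
```

From there, each extracted key can be checked against QMetry for an existing requirement and linked test cases, which is where the gap analysis happens.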
With the QMetry MCP server, teams can leverage this functionality directly from the tools they already use, without switching contexts, making QMetry not only the system of record, but the backbone of modern, AI-powered test management. Learn more and get started with the SmartBear MCP server.

See it in action

See a couple of short demos that showcase what the QMetry MCP server can unlock in real QA workflows. We’re rapidly expanding MCP-driven use cases and unlocking new ways to automate more of the testing lifecycle, while keeping everything visible and traceable in QMetry.

Demo 1: Requirements-to-execution traceability

Example: Using the QMetry MCP server to turn Jira requirements into executable tests with end-to-end traceability. A QA team can pull an epic from Jira, generate detailed test cases in QMetry, automatically link them to the synced requirement, and organize them into a release- and platform-specific test suite ready for execution. The output is a complete execution and traceability view, showing test status and linked defects in context across the epic, requirement, test cases, suite, executions, and Jira bugs, without manual linking or cross-tool stitching.

Demo 2: Release health and readiness

Example: Using the QMetry MCP server to generate a release health and readiness report. A QA manager can automatically validate release readiness against predefined quality gates, including planned scope completion, test coverage, test execution status, defect severity, and automation results. The output is a clear release decision, supported by full traceability across Jira, test cases, executions, and defects, all exportable into a shareable report.

Try out QMetry today or sign up for a quick demo to see more MCP use cases.
Add tests to cycle via API

Hello all, I am sorry to bother you every day. Today I want to add tests to a test cycle via the API. I tried to use the following link: https://zfjcloud.docs.apiary.io/#reference/execution/get-execution/add-tests-to-cycle I created a JWT for the endpoint and used the body shown on that page with adjusted values, but I am getting the error "method field is required." As far as I can see, that field is not included in the documented body. Can you help me with this? As always, I am including some screenshots. Thanks for the tips, Tibor
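For what it's worth, the "method field is required" error usually means the request body needs an extra method field that the Apiary page does not show; in ZAPI Cloud it appears to select how the tests are added to the cycle. A hedged sketch of such a body, where the meaning of "1" (add the listed issues directly) and the placement of the cycle id in the body rather than the URL are assumptions worth verifying against the current docs:

```javascript
// Sketch of an "Add Tests To Cycle" request body for ZAPI Cloud.
// Assumptions: "method" is a required add-mode discriminator, with "1"
// meaning "add the issues listed in `issues` directly"; some endpoint
// variants may take the cycle id in the URL instead of the body.
function buildAddTestsBody(projectId, versionId, cycleId, issueKeys) {
  return {
    issues: issueKeys,   // Jira keys of the Zephyr tests to add
    method: "1",         // assumed: required add-mode field
    projectId: projectId,
    versionId: versionId,
    cycleId: cycleId,
  };
}

const body = buildAddTestsBody(10000, -1, "cycle-id", ["TEST-1", "TEST-2"]);
console.log(JSON.stringify(body));
```

The body is then sent as JSON with the same JWT headers you already generate; note that the qsh inside the token must be computed for the POST method and the exact path of this endpoint.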
SmartBear Talks | Meet the ReadyAPI Team - Alianna Inzana

I’m sure you want to know more about ReadyAPI! We continue introducing the members of the ReadyAPI team to you, so you can learn more about the people who are developing the best tool for API testing – ReadyAPI! Today, we will talk with Alianna Inzana, Senior Director of Product Development for API and Virtualization. Ali started her career as an analyst, where she was happy to work with numbers before she dove into the world of APIs. Check out what her passion is these days, along with what we can expect from ReadyAPI in 2021. Alianna Inzana - Senior Director of Product Development for API and Virtualization at SmartBear. I hope you found this video informative! Feel free to post your questions under this post in case you have any. More video interviews are available under the Interview label. Don't forget to subscribe to the SmartBear Community Matters blog to get updates about all the great content we post here.
Create a new test case via API

Hello all, I would like to ask how I can create a Test, or any other kind of issue, through Zephyr's API. I was able to create a test cycle through the following API: https://zfjcloud.docs.apiary.io/#reference/cycle However, I did not find a way to create an issue, or more specifically a test case, via the API. I would really appreciate a tip on how to configure the headers and body too. Can you please help me with this?
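In Zephyr Squad Cloud, a test case is an ordinary Jira issue whose type is "Test", so it is created through the Jira REST API (POST /rest/api/2/issue) rather than through ZAPI, which only manages cycles, executions, and steps. A sketch of the payload, where the issue type name and project key are assumptions to check in your own instance:

```javascript
// Sketch: create a Zephyr test case by creating a Jira issue of type "Test".
// Assumptions: the project key "PROJ" and the issue type name "Test" are
// placeholders; verify both in your Jira instance.
function buildCreateTestPayload(projectKey, summary) {
  return {
    fields: {
      project: { key: projectKey },
      summary: summary,
      issuetype: { name: "Test" }, // the Zephyr-provided issue type
    },
  };
}

const payload = buildCreateTestPayload("PROJ", "Verify login with valid credentials");
console.log(JSON.stringify(payload));
```

Send it with Content-Type: application/json and your normal Jira authentication (for Jira Cloud, basic auth with email plus API token). Once created, the issue shows up in Zephyr as a test case, and its steps can then be added via ZAPI.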
qsh error when creating test cycle with API

Hello all, I am trying to create a test cycle through the API (currently using Postman). This is what I am doing: I am not using the Authorization tab at all. As a JWT generator I use code written in Java. I tried both the "GET" and "POST" methods, and I still get the qsh error seen in the first image. I am using the following site to get an idea of how the request should be sent: https://zfjcloud.docs.apiary.io/#reference/cycle/creates-new-cycle/creates-new-cycle?console=1 By the way, I am pretty sure that the project ID and version are correct. Am I doing something wrong? Can you please help me with this? Thanks a lot
How to test REST API with Login Authentication

Hi, I am trying to test a REST API. The login authentication has its own URI, and it works when I test it. However, I have another URI that requires the login authentication to execute first. So what I did was create a REST project, with one Resource for the login authentication and another Resource for the other URI. When I run the TestRunner, it does not seem to authenticate. Is this the right way to test this, or is there another way? Thanks
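One common way to model this in ReadyAPI is to keep both requests in a single test case and use a Property Transfer step to copy the token (or session id) from the login response into the second request, so the login always runs first. Stripped of the tooling, the underlying pattern looks like this; the "token" response field and the Bearer scheme are assumptions to match to your API:

```javascript
// Generic login-then-authorized-request pattern, independent of ReadyAPI.
// Assumption: the login endpoint returns JSON like {"token": "..."}.
function extractToken(loginResponseJson) {
  const body = JSON.parse(loginResponseJson);
  if (!body.token) throw new Error("login response did not contain a token");
  return body.token;
}

function authHeaders(token) {
  // Every subsequent request carries the token from the login step.
  return { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };
}

// Simulated login response, for illustration only:
const token = extractToken('{"token":"abc123"}');
console.log(authHeaders(token).Authorization); // prints "Bearer abc123"
```

In ReadyAPI the two requests map to two test steps, with the Property Transfer doing the job of extractToken and a parameterized header doing the job of authHeaders.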
Writing log messages of test to the file

Hi, when TestComplete is being used, we try to write the messages produced by Log.Message() or Log.Error() to a text file, in order to keep them together as a log.txt. However, after creating the file, we tried to use Log.SaveResultsAs() to write the log messages to it; it just created the file and could not write anything. Is there any way to keep the log messages written by the test script in a .txt file in a given directory? Example code:

function creatingAndWritingLogMessagesToTheTxtFile() {
  // create a file
  var filePath = "C:\\....."; // directory
  aqFile.Create(filePath);
  var logName = filePath + ".mht";
  Log.SaveResultsAs(logName, lsMHT);
}

And inside the test functions, how can we write the log messages to this created file?

function testA() {
  // ... code doing something
  Log.Message("Test Successful");
  // How can we also write this message to the text file created above?
}
Use dynamic parameters for TestItem

Hey, I would like to structure my test sequence via test items in TestComplete, but I ran into one problem: I want to start my test items from the script with dynamic/changing parameters. Is this possible? I could not figure out how to hand a variable over to a test item. Thank you for your help.
Swagger in junit

From the Swagger UI (https://petstore.swagger.io/?_ga=2.138463798.444042846.1559674010-1100161644.1559674010#/pet/addPet) I can make a call in the format shown there by clicking the "Try it out" button. It is great! But is it possible to use Swagger to generate HTTP request objects in Java in that same format? For instance, I want to make this call (https://petstore.swagger.io/?_ga=2.138463798.444042846.1559674010-1100161644.1559674010#/pet/findPetsByStatus) from a Java test, providing the status parameter as an argument to some method, which will build the HTTP request, or make the call and return the HTTP response.
Scheduling TestComplete Keyword Tests with Microsoft Task Scheduler

Hello, I am trying to schedule my TestComplete keyword tests to run automatically when I am out of the office. After doing some research, it seems as though Microsoft Task Scheduler can help me achieve this goal. However, I am relatively new to TestComplete and was wondering if anyone could provide me with more clarity on this process. Thank you.
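What Task Scheduler ultimately needs is a single command line that opens the project suite, runs the test, and exits; TestComplete supports command-line arguments for exactly this. The sketch below just assembles such a command. All paths and names are placeholders for your own installation and suite, so double-check the flags against the TestComplete documentation for your version:

```javascript
// Assemble a TestComplete command line suitable for a Task Scheduler action.
// Placeholders (assumptions): install path, suite path, project and test names.
const exe =
  "C:\\Program Files (x86)\\SmartBear\\TestComplete 15\\x64\\Bin\\TestComplete.exe";
const args = [
  "C:\\Work\\MySuite\\MySuite.pjs", // project suite to open
  "/run",                           // start the run automatically
  "/project:MyProject",             // project inside the suite
  "/test:KeywordTests|MyTest",      // keyword test to execute
  "/exit",                          // close TestComplete when the run ends
];

// The full command to schedule:
console.log(`"${exe}" ${args.join(" ")}`);
```

In Task Scheduler, put the executable in the Program field and the remaining arguments in the Arguments field. Note that a scheduled GUI test still needs an open, unlocked desktop session to interact with the application under test.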