QMetry MCP: Bringing Control, Coverage, and Traceability to AI-Driven QA
As AI automates more QA work, it becomes harder to track what it creates and runs across tools. QMetry addresses this by providing the control and observability needed to keep AI-assisted and AI-driven test management aligned with requirements, execution, and release decisions. This foundation becomes increasingly important as QA workflows grow more automated and distributed across tools.

Through the QMetry MCP server, users can interact with QMetry, Jira, and GitHub to generate tests, organize execution, and maintain traceability through simple prompts, without jumping between tools or doing manual linking. These workflows can be triggered from MCP-compatible clients such as IDEs or tools like Claude Desktop, while QMetry ensures that all test assets, results, and relationships remain visible, controlled, and auditable.

What the QMetry MCP server enables in practice

The QMetry MCP server exposes core QMetry test management capabilities as MCP tools. This allows AI assistants to interact with QMetry programmatically, using the same permissions and project context as a human user. Instead of AI generating artifacts in isolation, test cases, execution data, and their relationships are created and maintained directly in QMetry, as part of an AI-driven test management platform.

In practice, this unlocks a wide range of QA use cases. Here are a few examples of what teams can do with QMetry MCP:

Turn requirements into executable tests

AI assistants can fetch epics or stories with acceptance criteria from Jira and generate test cases directly in QMetry, including detailed steps and expected outcomes. These test cases are automatically linked back to their requirements and can be immediately planned for execution. This reduces manual test authoring while ensuring coverage remains visible and traceable.
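The requirements-to-tests flow above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical payload shape for a "create test case" MCP tool; the field names (`linked_requirement`, `step_no`, and so on) are invented for the example and are not the actual QMetry tool schema.

```python
# Sketch of turning a Jira story's acceptance criteria into a test case
# payload. All field names here are illustrative, not the real QMetry
# MCP tool schema.

def acceptance_criteria_to_test_case(story_key: str, summary: str,
                                     criteria: list[str]) -> dict:
    """Build a test-case payload with one step per acceptance criterion,
    linked back to the originating requirement for traceability."""
    steps = [
        {
            "step_no": i + 1,
            "action": criterion,
            "expected_result": f"System satisfies: {criterion}",
        }
        for i, criterion in enumerate(criteria)
    ]
    return {
        "name": f"Verify {summary}",
        "linked_requirement": story_key,   # keeps coverage traceable
        "steps": steps,
    }

tc = acceptance_criteria_to_test_case(
    "PROJ-101",
    "user can reset password",
    ["Reset link is emailed within 60 seconds",
     "Old password stops working after reset"],
)
print(tc["linked_requirement"], len(tc["steps"]))  # PROJ-101 2
```

The key design point, mirrored from the workflow above, is that the requirement link travels with the test case from the moment it is created, rather than being attached in a later reporting step.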
Automate test planning, execution, and defect creation

Once test cases exist, the QMetry MCP server can organize them into test suites based on release cycle and platform, track execution results, and update statuses in bulk. When tests fail, defects can be created in Jira and linked back to the failed executions and original requirements, keeping QA and development workflows connected without manual handoffs.

Understand test coverage based on code changes

The QMetry MCP server can also support workflows that start from code. By analyzing GitHub commits or pull requests, MCP can identify the related ticket IDs, validate whether those requirements exist in QMetry, and check whether they already have test coverage. If gaps are found, teams can address missing coverage before testing or release activities begin.

Plan regression testing based on impact

Rather than relying on static or overly broad regression suites, MCP-driven workflows can help identify which areas of the system are affected by recent changes. Existing test cases can be reused where possible, and relevant test suites can be created or recommended, helping teams focus regression testing where risk is highest.

Get end-to-end traceability and release visibility

Because all test cases, executions, and defects are created and linked through the QMetry MCP server, traceability is established as part of the workflow, not as a separate reporting step. Teams can review coverage, execution status, and outstanding defects in context and use this information to assess release readiness without manually stitching data across tools.

As AI accelerates the speed of test creation and execution, QMetry becomes the critical layer that keeps everything connected and controlled. It doesn't just manage tests; it ensures visibility, traceability, and governance across an increasingly distributed, AI-driven QA landscape.
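The "start from code" step above hinges on recognizing ticket IDs inside commit messages. A minimal sketch of that extraction, assuming Jira-style keys like `PROJ-123`; the commit messages and project keys below are made up for the example.

```python
import re

# Pull Jira-style ticket IDs out of commit messages so their coverage
# can then be checked against the requirements in QMetry. The sample
# commits and project keys are invented for illustration.

TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def tickets_from_commits(messages: list[str]) -> set[str]:
    """Collect the unique ticket IDs referenced by a list of commits."""
    found: set[str] = set()
    for msg in messages:
        found.update(TICKET_RE.findall(msg))
    return found

commits = [
    "PAY-42: add retry to payment webhook",
    "fix flaky test (PAY-42), touch docs for AUTH-7",
    "chore: bump dependencies",   # no ticket reference
]
print(sorted(tickets_from_commits(commits)))  # ['AUTH-7', 'PAY-42']
```

Each ID found this way can then be looked up in QMetry: if the requirement exists but has no linked test cases, that is exactly the kind of coverage gap the workflow surfaces before release.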
With the QMetry MCP server, teams can leverage this functionality directly from the tools they already use, without switching contexts, making QMetry not only the system of record but the backbone of modern, AI-powered test management. Learn more and get started with the SmartBear MCP server.

See it in action

Here are a couple of short demos that showcase what the QMetry MCP server can unlock in real QA workflows. We're rapidly expanding MCP-driven use cases and unlocking new ways to automate more of the testing lifecycle, while keeping everything visible and traceable in QMetry.

Demo 1: Requirements-to-execution traceability

Example: Using the QMetry MCP server to turn Jira requirements into executable tests with end-to-end traceability.

A QA team can pull an epic from Jira, generate detailed test cases in QMetry, automatically link them to the synced requirement, and organize them into a release- and platform-specific test suite ready for execution. The output is a complete execution and traceability view, showing test status and linked defects in context across the epic, requirement, test cases, suite, executions, and Jira bugs, without manual linking or cross-tool stitching.

Demo 2: Release health and readiness

Example: Using the QMetry MCP server to generate a release health and readiness report.

A QA manager can automatically validate release readiness against predefined quality gates, including planned scope completion, test coverage, test execution status, defect severity, and automation results. The output is a clear release decision, supported by full traceability across Jira, test cases, executions, and defects, all exportable into a shareable report.

Try out QMetry today or sign up for a quick demo to see more MCP use cases.

Automating Pact Test Generation with the SmartBear MCP Server and PactFlow
Here is a new video demo showing how developers can use the SmartBear MCP Server together with GitHub Copilot to automatically generate and run Pact tests right inside Visual Studio Code.

In this walkthrough, you'll see how easy it is to:

✅ Connect Copilot to a locally running SmartBear MCP server
✅ Use MCP tools to generate Pact tests from your existing API specs and templates
✅ Filter endpoints and control file naming to fit your workflow
✅ Validate and run the generated tests directly from your terminal

This example highlights how MCP can act as the bridge between your AI assistant and SmartBear's quality ecosystem, turning natural language prompts into real, executable tests with no manual setup required.

Watch the full demo here: Automating Pact Tests with SmartBear MCP Server

If you're experimenting with your own MCP integrations or have ideas for other test automation workflows you'd like to see, drop your thoughts below!

What's Next for SmartBear's MCP Server?
Over the past few months, we've been busy adding tools to the SmartBear MCP server, with updates and releases happening every few weeks. We've seen teams begin exploring how MCP can securely connect SmartBear's API, testing, and observability tools directly into their AI assistants and IDEs, with no extra setup or manual data wrangling required.

Here's what early users are getting excited about:

Full Context in One Place: Developers can instantly access API definitions, test outcomes, and error insights from tools like SwaggerHub, Zephyr, and BugSnag without leaving their workflow.

Agentic Automation: Teams are building AI-driven workflows that trigger tests, surface flaky results, and even connect runtime errors to missing API documentation.

Secure Local Control: All data stays protected under your organization's governance policies. MCP simply makes it easier and safer to access what you already own.

Next up, we're focusing on real-world use cases like error-driven API refinement (BugSnag + Swagger) and automated validation after fixes (BugSnag + Reflect) to show how cross-product workflows can speed up release confidence.

Here is a video from a SmartBear team member who has been exploring how to design APIs and ensure best practices around API standardization: Swagger API Governance with MCP

We'd love to hear how you're experimenting with MCP, whether it's inside your IDE, your assistant, or your CI pipeline. Drop your feedback or share what workflows you'd like to see next!