Quality Lens: Smarter QA, Every Day [SmartBear MCP Hackathon]
We’ve been working on an idea called Quality Lens, a tool that helps QA and engineering teams focus their effort where it matters most. The goal is simple: bring together data from Jira, QMetry, and BugSnag into one daily report that automatically highlights risk, test coverage gaps, and top priorities.
The Problem
If you’ve ever tried to get a clear picture of your project’s quality state, you know how scattered the data is. Jira tells you what’s being built, QMetry shows what’s tested, and BugSnag reports what’s breaking in production. But these tools don’t talk to each other, which means QA teams spend hours every day jumping between systems, trying to piece together the story.
The result? People end up guessing which areas are most at risk, which features need more coverage, or what’s worth testing next. It’s a lot of manual review and context switching that slows down the release process and sometimes lets avoidable issues slip into production.
The Solution
The Quality Lens prototype connects these dots automatically using the SmartBear and Jira MCP integrations. It pulls data from Jira, QMetry, and BugSnag, then analyzes it to understand three key things (there's a rough sketch of the plumbing after this list):
- What’s being built (from Jira)
- What’s tested (from QMetry)
- What’s failing in production (from BugSnag)
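Under the hood, that boils down to a handful of MCP tool calls a day. Here's a minimal sketch using the official MCP Python SDK; to be clear, the server commands, tool names, and arguments below are placeholders we made up for illustration, not the actual SmartBear or Jira MCP interfaces:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def fetch_feed(command: str, args: list[str], tool: str, arguments: dict) -> list:
    """Spawn an MCP server over stdio, call one tool, and return its content blocks."""
    params = StdioServerParameters(command=command, args=args)
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(tool, arguments=arguments)
            return result.content


async def main() -> None:
    # Placeholder server packages, tool names, and arguments; swap in the real ones.
    jira = await fetch_feed(
        "npx", ["-y", "jira-mcp-server"],
        "search_issues", {"jql": "project = SHOP AND updated >= -1d"},
    )
    qmetry = await fetch_feed(
        "npx", ["-y", "qmetry-mcp-server"],
        "list_test_executions", {"project": "SHOP"},
    )
    bugsnag = await fetch_feed(
        "npx", ["-y", "bugsnag-mcp-server"],
        "list_errors", {"project": "shop-web", "since": "1d"},
    )
    print(len(jira), len(qmetry), len(bugsnag))


if __name__ == "__main__":
    asyncio.run(main())
```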
Once it correlates all this information, it generates a daily QA intelligence report: a single, easy-to-read view of where risk lives in your product. The report highlights missing tests, areas with frequent production issues, and new tickets that lack QA coverage.
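The correlation itself is simple once everything shares a key. Here's a rough sketch, assuming each record can be joined on its Jira issue key; the field names and risk weights are illustrative, not the final scoring model:

```python
from dataclasses import dataclass


@dataclass
class FeatureRisk:
    key: str                      # Jira issue key, e.g. "SHOP-142"
    in_development: bool = False  # seen in Jira (what's being built)
    test_count: int = 0           # linked QMetry test cases (what's tested)
    production_errors: int = 0    # BugSnag event count (what's failing)


def correlate(jira: list[dict], qmetry: list[dict], bugsnag: list[dict]) -> dict[str, FeatureRisk]:
    """Join the three feeds on the Jira issue key."""
    features: dict[str, FeatureRisk] = {}

    def get(key: str) -> FeatureRisk:
        return features.setdefault(key, FeatureRisk(key))

    for issue in jira:
        get(issue["key"]).in_development = True
    for case in qmetry:
        get(case["issue_key"]).test_count += 1
    for error in bugsnag:
        if error.get("issue_key"):  # only errors already linked to a ticket
            get(error["issue_key"]).production_errors += error.get("events", 1)
    return features


def risk_score(f: FeatureRisk) -> int:
    """Toy heuristic: production pain dominates, untested new work comes next."""
    score = 2 * f.production_errors
    if f.in_development and f.test_count == 0:
        score += 5  # new ticket with zero QA coverage
    return score
```

Sorting the features by descending risk_score is what turns the raw data into the prioritized list.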
The result is a prioritized action plan that answers the most important questions for any QA team:
Where are we exposed? What needs testing next? And how can we reduce risk today?
The output can be generated in HTML or PDF format, ready to share with QA, engineering, and product stakeholders, so everyone sees the same picture of quality.
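The rendering step is the least magical part. A bare-bones sketch of the HTML path is below; the row fields follow the correlation sketch above, and the PDF comment assumes WeasyPrint, which is just one option we've considered:

```python
from html import escape


def render_html(ranked_rows: list[dict]) -> str:
    """Render ranked features as a minimal standalone HTML report.

    Each row is expected to carry 'key', 'tests', 'errors', and 'risk'
    (field names are assumptions matching the correlation sketch above).
    """
    body = "".join(
        "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            escape(r["key"]), r["tests"], r["errors"], r["risk"]
        )
        for r in ranked_rows
    )
    return (
        "<!doctype html><html><body>"
        "<h1>Daily QA Intelligence Report</h1>"
        "<table><tr><th>Feature</th><th>Tests</th>"
        "<th>Prod errors</th><th>Risk score</th></tr>"
        f"{body}</table></body></html>"
    )


# The PDF variant can be produced from the same HTML, e.g. with WeasyPrint:
#   from weasyprint import HTML
#   HTML(string=render_html(rows)).write_pdf("report.pdf")
```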
Why It Matters
The Quality Lens prototype has the potential to remove the guesswork. Instead of spending half the day collecting data and debating priorities, teams can focus on actual testing and improvement. It reduces time spent on manual reviews, cuts down on tool-hopping, and keeps everyone aligned on what matters most for release confidence.
The best part is that it fits naturally into the daily workflow. The MCP-driven automation ensures the report updates every day, giving teams a fresh, data-driven starting point without any manual input. When QA, engineering, and product teams all share the same “quality lens,” decisions get faster, coverage gets better, and releases get smoother.
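For the daily cadence, any scheduler will do. A toy Python loop purely to show the shape; in practice a cron job or CI schedule is what we'd use, and generate_daily_report is a hypothetical name for the pipeline sketched earlier:

```python
import time
from datetime import datetime, timedelta


def generate_daily_report() -> None:
    # Placeholder: fetch feeds, correlate, score, render (see earlier sketches).
    print(f"report generated at {datetime.now():%Y-%m-%d %H:%M}")


def seconds_until(hour: int) -> float:
    """Seconds from now until the next occurrence of `hour`:00 local time."""
    now = datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    return (target - now).total_seconds()


while True:
    time.sleep(seconds_until(7))  # wake at 07:00, before standup
    generate_daily_report()
```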
This prototype is still a work in progress, and we’d love to get feedback from others who face similar challenges. How do you currently connect your QA insights across tools? What would you want to see in a daily quality report?