Forum Discussion

user-604551
New Member
3 months ago

Rovo AI support for Zephyr Scale

I see that Rovo AI is not able to fetch any information related to Zephyr Scale. Is there a plan to support this anytime soon?

4 Replies

  • Hi Raymond,

    Thanks for recommending Reqase Lite – AI Test Generator for Zephyr.

    I’m part of the Reqase team, and I wanted to share a bit about how teams are using Reqase Lite. Several community members have asked about AI-assisted test generation in Zephyr, so I thought it might be helpful.

    Why teams like it:

    • Instantly generates structured test cases directly from Jira issues — no extra steps.
    • Supports Manual, Generic, and Cucumber test types.
    • Lets you connect your own Azure OpenAI deployment for full control over data and model behavior; all requests go directly to your model, with no intermediary server.
    • Includes a simple review interface, making it easier to validate AI-generated tests before syncing into Zephyr.

    We’ve seen QA teams save significant time while improving test coverage, and we’d be happy to help if you want to explore it in your workflow.
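For anyone wondering what "requests go directly to your model" looks like in practice, here is a minimal, hypothetical sketch of prompting your own Azure OpenAI chat-completions deployment with fields pulled from a Jira issue. The endpoint, deployment name, and prompt wording are illustrative assumptions, not Reqase internals:

```python
import json
import urllib.request


def build_prompt(issue_key: str, summary: str, description: str) -> str:
    """Assemble a test-generation prompt from Jira issue fields (format is illustrative)."""
    return (
        f"Generate structured manual test cases for Jira issue {issue_key}.\n"
        f"Summary: {summary}\n"
        f"Description: {description}\n"
        "Return each test case as: title, preconditions, numbered steps, expected results."
    )


def generate_test_cases(endpoint: str, deployment: str, api_key: str, prompt: str) -> str:
    """POST the prompt straight to your own Azure OpenAI deployment.

    No intermediary server: the only network hop is to the endpoint you control.
    """
    url = (
        f"{endpoint}/openai/deployments/{deployment}"
        "/chat/completions?api-version=2024-02-01"
    )
    body = json.dumps(
        {
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,  # keep generations close to the source requirements
        }
    ).encode()
    req = urllib.request.Request(
        url,
        data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The generated text would then go through a human review step before being synced into Zephyr Scale, as described above.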

  • Hi everyone! I'm Miwako from the UX Research team at Smartbear. We're looking for participants for 40-minute research sessions to explore how Rovo might fit into your Zephyr workflow and hear what you'd expect from it. If you'd like to participate, please choose a time that works for you: https://calendly.com/miwakozosel-smartbear/zephyrrovo  Thank you!

  • Raymond
    New Contributor

    ROVO currently can’t read structured data from Zephyr Scale (test cases, steps, libraries, test cycles, etc.), so it unfortunately can’t generate anything usable inside Scale. We ran into the same limitation a while ago.

    We eventually switched to Reqase Lite for AI-assisted test case generation because it works directly from Jira issues instead of trying to access Scale’s backend. The AI output is cleaner, easier to review, and much closer to real testing practices. You approve the cases first, then sync them—so it fits Scale teams pretty well.

    Another big plus for us is that it supports our company’s own Azure OpenAI endpoint, which solved the security/compliance concerns that prevented us from using other AI tools. Being able to use our own LLM while still getting high-quality test generation has been a huge win.

    If your goal is to generate test cases from requirements rather than manipulate Scale data directly, this workflow has been much more reliable for us. Hope our experience helps.