Status: Accepted for Discussion
Submitted by PepeFocus on 05-07-2015 11:57 AM
The Test Runner screen is one of the layouts that cannot be customized in the app at the moment. In addition, the default view of the pre-set fields does not allocate much room for some fields; the worst case for us is the Title (test case name) column. For manual testing, this field always has to be resized before execution, and the changes are not saved after the window is closed. In a big organization, with a large number of test sets per QA, going through these few clicks over and over is a real time waster. As discussed here (http://community.smartbear.com/t5/QAComplete-Feature-Requests/QAComplete-add-the-Priority-of-Test-Cases-to-the-Test-Runner/idi-p/98369): "We looked into providing a 'screen layout' for the test runner and it proved to be a larger issue than we expected." 1) Is this feature in development, or has it been dropped? I can't find a related ticket to Kudo it. 2) Would it be possible to keep changes to field sizes/lengths after the Test Runner is closed, so that opening a new Test Runner screen displays the exact settings of the previous one?
Currently (QAC 11.0 or older), in order to get a complete Release report, for example, you need to manually link all requirements and test cases that are in scope for that release. Our team assumed, logically, that if you have test sets associated with test cases and you map those sets to a release, the test scenarios will automatically be associated with that release. If we have to run, say, full regression testing of all existing test cases in one release (a few thousand of them), that means we have to manually link all of those test cases to the release. The links between the above work items should be improved and automated. Example below:
Status: Accepted for Discussion
Submitted by PepeFocus on 05-07-2015 12:25 PM
After a test run is completed (all steps marked either pass or fail), its outcome gets locked and can't be altered. While the reason behind this logic is understood and needed, the ability to change a test outcome depending on access level (admin/lead access only) would allow more flexibility and would help in situations like this: several test runs are completed as failed due to an issue that was overlooked and, in the end, turned out not to be an issue. This mistake cannot be amended, and a new run is required, which then means more work when extracting metrics, etc. The test outcome should be editable as long as the End Run button has not been pressed.
Requesting a new Testing Results Report that provides the following:
1. Contains a drop-down list of the Releases for the applicable QAComplete project.
2. Once a Release is selected, the report lists the Requirements linked to the selected Release.
3. Under each linked Requirement, the report lists the Test Cases (Test ID & Title) linked to the applicable Requirement(s).
4. For each Test Case listed, only the Test Results that pertain to the selected Release are included.
Via "Legacy Reports" > "Adhoc-Detail" > "Requirements Report", we can generate the above report, except that it doesn't support item #4. I submitted a support ticket (Case #00235269) but was told it was unlikely any future work would occur in the Legacy Report section. As such, I request the capability of creating this report in the Reports UI, but I need your votes to make it happen. Our organization is deeply committed to the capability of linking Requirements to Releases and linking Test Cases to the applicable Requirements. In fact, we've included the above report as part of our SDLC, making it a required deliverable before a service request can be completed. We purchased QAComplete because this functionality (Requirements Traceability) supported our objective in a new Test Management tool. Overall, QAComplete supports this functionality but, from our perspective, falls short in accurately reporting it. Thank you for your consideration.
Status: Accepted for Discussion
Submitted by kwiegandt on 06-22-2015 07:12 AM
I know that ALMComplete allows us to make custom dashboards and reports. That being said, most of us do not have the security clearance to do that. This is an area where we need a MAJOR improvement. Get rid of Crystal Reports! Tools that have a learning curve or require SQL knowledge leave many people behind. We don’t have that kind of time. A simple way of creating reports and dashboards is needed!
Status: Accepted for Discussion
Submitted by kwiegandt on 06-22-2015 07:07 AM
We need a method by which a step or series of steps can be written once and called from multiple tests. For example, I want to write a login procedure and then call it from other tests. Specifically, I want to have a test step something like “Call ‘Login’”. At execution, this step would be replaced by the steps in the test called “Login”. I do not want to copy the steps (as we do now), because then I have to maintain those steps in many places.
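The requested behavior amounts to macro-style expansion of shared steps at execution time. A minimal sketch of the idea, using an invented plain-dict test library (not QAComplete's actual data model) and an invented "Call '…'" step convention:

```python
# Hypothetical sketch: a step of the form "Call 'X'" is replaced at
# execution time by the steps of the test named "X". The library and
# step syntax here are illustrative, not QAComplete's real schema.

def expand_steps(test_name, library, seen=None):
    """Recursively expand 'Call' steps, guarding against cyclic calls."""
    seen = set() if seen is None else seen
    if test_name in seen:
        raise ValueError(f"Cyclic call detected: {test_name}")
    seen = seen | {test_name}
    expanded = []
    for step in library[test_name]:
        if step.startswith("Call '") and step.endswith("'"):
            callee = step[len("Call '"):-1]
            expanded.extend(expand_steps(callee, library, seen))
        else:
            expanded.append(step)
    return expanded

library = {
    "Login": ["Open the login page", "Enter credentials", "Click Sign In"],
    "Create Order": ["Call 'Login'", "Open Orders", "Click New Order"],
}
```

Because the "Login" steps are stored once and expanded on demand, a change to the login procedure is picked up by every calling test, which is exactly the maintenance benefit the request describes.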
Many organizations rely on requirements documents in Word to communicate requirements to the business for review and approval. In my organization, for example, we do not have (nor is it likely we will ever have) the business log into QAComplete (or any other requirements/testing tool) to review requirements. All such reviews are conducted via Word documents. Currently, my only option in QAComplete is to export project requirements to Excel and then copy/paste them into the Word template used for requirements. This works, but it is awkward at best. Adding the ability to generate a Word-based requirements document from a template would make the requirements part of the tool much more useful to business analysts.
Status: Accepted for Discussion
Submitted by dgossner on 05-05-2016 10:59 AM
This would allow creation of a super Test Set made up of other test sets, without having to add each test to it individually, and would prevent having to update multiple test sets when a single test is added.
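The "super Test Set" idea is essentially a set that can contain other sets and is flattened into a single test list at run time. A small sketch under that assumption, with an invented dict-based model rather than QAComplete's actual schema:

```python
# Hypothetical model: a test set may contain individual tests or the
# names of other test sets; run-time flattening resolves the nesting.
# The names and layout here are illustrative only.

def flatten(set_name, sets):
    """Return the ordered, de-duplicated list of tests in a (super) set."""
    tests = []
    for item in sets[set_name]:
        if item in sets:            # the item is itself a test set
            tests.extend(flatten(item, sets))
        else:                       # the item is an individual test
            tests.append(item)
    # de-duplicate while preserving first-seen order
    return list(dict.fromkeys(tests))

sets = {
    "Smoke": ["TC-1", "TC-2"],
    "Login": ["TC-3"],
    "Super": ["Smoke", "Login", "TC-4"],
}
```

With this shape, adding one test to "Smoke" automatically shows up in every super set that references "Smoke", which is the maintenance win the request asks for.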
At present, developers are not using QAComplete to look up defects; I have to send them the list of defects. I could not find any way to send the screenshots that we added under Actual Results along with the test sets that were run. So we have to manually create a separate Word document containing all the screenshots and send it to the developers along with the list of defects. I noticed that under "Run History" for each test set we can click the "Printer Friendly" option, which can be saved as a PDF for developers, but it is missing the screenshots that were added during the run. Please find a way to include the screenshots in that "Printer Friendly" report, or a way to extract a "Printer Friendly" version of just the defects with their screenshots. That would let us eliminate the creation of another Word document, which is time consuming.
At present we can copy test cases from one project to another, but there is no way of copying a Test Set. Organizations have multiple modules that may need to run through the same standards- and framework-related testing. For 50-plus modules, I have to manually recreate the same Test Sets that I already created for the others. There should be an easier way to copy those Test Sets, as long as the folder structure is the same as in the previous project, or a way to import them.
Status: New Idea
Submitted by jamie_dswinc on 09-13-2016 07:10 AM
We recently began using QAComplete, and are missing a screenshot tool that our former QA app had available right on the test/defect ribbon bar. We could select the icon to launch a feature allowing us to capture our screen, or a portion of our screen, and then automatically attach the picture to the defect, test, or individual test step - all without having to navigate away from the tool or save an image. This is the #1 request we're getting from our users as we roll it out. Do we know if there are any plans to include a screenshot tool in a future release? Thank you! Jamie
Status: Accepted for Discussion
Submitted by MCastle on 05-04-2015 04:53 PM
QAC makes entering Test Steps and Expected Results quick and easy. What is not quick is having to change tabs, upload a limited number of files, and specify the mapping for which file should be associated with a particular step. I would like to see this process streamlined and on the same tab as Test Steps. Perhaps after tabbing past Expected Results, a load-file icon could receive focus; the user could press the space bar, select the file, click OK, and continue tabbing to enter the next Step and Expected Result. Below is a mockup of what it could look like.
I’d like to suggest a potential improvement: a bulk link items feature. For instance, right now, if I want to link tests to a requirement, I need to do it one by one. It would be great to have an option under Fast Edit > Update multiple items > Link multiple items to Requirement/Agile Task, etc.
Let’s say that I have two releases, Alpha and Beta. I run my test sets for the Alpha release and all test sets pass. I have not executed any test sets for the Beta release yet.

In the navigation pane on the left, I select the release Alpha and choose last run status as one of the fields to display in the test set listing. All test sets show as passed. So far so good. Now I switch the release in the navigation pane to Beta, and the last run status column still shows all test sets as passed. This is very confusing, as no test set has been run yet for the Beta release. The last run status should show “not run” or nothing, but not “passed” or “failed”.

Now I execute all test sets for the Beta release and they all fail. In the navigation pane on the left, I select the Alpha release. The last run status for all tests shows as failed, which is not appropriate for the Alpha release. This can be very confusing.

It would be VERY helpful if the software could look at the run history records and determine the last run status that is appropriate for the selected release. This same issue exists in the Test Library dashboard chart called “Tests by Last Run Status (Current Project)”.
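The fix being requested is a straightforward filter: derive "last run status" from the run-history records of the selected release only, instead of from the most recent run overall. A minimal sketch, assuming an invented record layout (not QAComplete's actual run-history schema):

```python
# Hypothetical sketch: last run status is computed per release by
# filtering run history before taking the most recent run. The
# record fields here are invented for illustration.

def last_run_status(history, test_set, release):
    """Return the status of the most recent run of test_set for the
    given release, or 'Not Run' if it was never executed there."""
    runs = [r for r in history
            if r["test_set"] == test_set and r["release"] == release]
    if not runs:
        return "Not Run"
    return max(runs, key=lambda r: r["run_date"])["status"]

history = [
    {"test_set": "TS-1", "release": "Alpha", "run_date": 1, "status": "Passed"},
    {"test_set": "TS-1", "release": "Beta",  "run_date": 2, "status": "Failed"},
]
```

Under this scheme, selecting Alpha would still show "Passed" for TS-1 even after a later failed Beta run, and a release with no runs would show "Not Run", which is the behavior described above.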
We have about 2,500 tests at this time. As the process of linking that many tests, even organized into test sets, is cumbersome (I am creating another suggestion for that issue), it would be extremely helpful to be able to clone a release or iteration to another release or iteration, carrying across the linked items (we only use tests, not sprints or requirements). That way, even though I would have to manually link all the tests to the first release, for the next release I could clone and be done, just adding any new tests (or test sets).
Status: Accepted for Discussion
Submitted by johnc_1 on 05-05-2015 05:42 AM
There are only two ways to link a test to a configuration:
1. Select the test (or test set), open Link to Items, select the configuration folder, and check each sub-item (e.g. Windows8 > IE, Chrome, and Firefox), repeating for each configuration folder.
2. Select the configuration, open Link to Items, specify the type and folder, and individually check every item (test) in the folder, repeating for every configuration.
Even with just a few Windows environments, each with 3 or 4 different browsers, multiplied by our 2,500 tests, this is a monumental task. While we could have done it as we created the tests, our focus was on creating the tests, not on linking; that was a mistake. Now we need to make these links, yet we are looking at nearly 10,000 clicks/steps. If nothing else, add a "Select All" option to the tests (or sets) drop-down to check all tests in that folder. Then we would just need to repeat for each configuration and each folder, a much more manageable task.
The last run status on a Test Set can be confusing, especially for a newbie like me 🙂
- I started a test run, some tests failed, and I ended the run.
- Run History shows status = Failed (good and accurate).
- The test set list in the right nav shows Failed (good and accurate).
- I started the test run again, did not End Run, just Save & Close.
- Run History shows status = In Progress (good and accurate).
- The test set list in the right nav shows Failed (not current, not accurate, and confusing).
Rename Pause Run and Start Run in the Test Runner to Pause Timer and Start Timer, since the test run can still be edited despite what the button names suggest.
Anytime a folder structure is displayed within QAC, I would like to be able to collapse folders. Additionally, I would like the initial view of the folder structure to default to a collapsed state. Navigating through a directory tree where all folders are expanded is time consuming and at times frustrating.
Status: Accepted for Discussion
Submitted by chrisb on 03-10-2015 09:06 AM
Update / modernize the UI.