Many of our tests have more than six lines per step. We can't see these in the Step frame of the right panel of the Test Runner without scrolling, which is annoying. It's more important to us to see everything in the Step frame than to see what's in the Expected and Actual Results panels.
It's not practical for us to hover over the step in the left nav because sometimes we have to copy/paste text from the Step box.
Also, lines that were entered single-spaced in the Test Editor are displayed double-spaced in the Test Runner.
I suggest adding a draggable divider in the center of this panel, so the user can dynamically resize the top of the frame that contains the Step box. This would let the user see more of the Step box and less of the Expected and Actual Results boxes, or the reverse if they choose.
Linking requirements to tests is a time-consuming task - there is no quick and easy way to do it. It would be great if you could include the path/name of the requirement(s) a test should be linked to in the CSV file that is used to upload tests into QA Complete. The prerequisite would be that the requirements are loaded prior to loading the tests.
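As an illustrative sketch only (the column name and the path syntax are assumptions on my part, not the actual QAComplete import format), the upload CSV could carry the requirement links alongside each test:

```
Title,Folder,LinkedRequirements
"Verify login","Regression/Auth","Requirements/Security/REQ-101;Requirements/Security/REQ-102"
"Verify logout","Regression/Auth","Requirements/Security/REQ-103"
```

During upload, the importer would resolve each path against the already-loaded requirements and create the links automatically.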
Currently (QAC 11.0 and older), to get a complete Release report, for example, you need to manually link every requirement and test case that is in scope for the release.
Our team logically assumed that if you have test sets associated with test cases, and you map those sets to a release, the test scenarios would automatically be associated with that release.
If we have to run, say, full regression testing of all existing test cases in one release (a few thousand of them), that means we have to manually link all of those test cases to the release.
The linking between these work items should be improved and automated.
The Test Runner screen is one of the layouts that can't be customized in the app at the moment.
In addition, the default view of pre-set fields does not allocate much room to some fields; the worst case for us is the Title (test case name) column.
For manual testing, we always have to resize this field before execution, and the change isn't saved after the window is closed.
In a big organization, with a large number of test sets per QA, going through these extra clicks every time adds up to a real time waster.
As discussed here:
We looked into providing a "screen layout" for the test runner and it proved to be a larger issue than we expected.
1) Is this feature in development or dropped? I can't find a related ticket to Kudo it.
2) Would it be possible for changes to field sizes/lengths to be kept after the Test Runner is closed, so that opening a new Test Runner screen displays the exact settings of the previous one?
I know that ALMComplete allows us to make custom dashboards and reports. That being said, most of us do not have the security clearance to do that. This is an area where we need a MAJOR improvement. Get rid of Crystal Reports! Tools that have a learning curve or require SQL knowledge leave many people behind. We don’t have that kind of time. A simple way of creating reports and dashboards is needed!
After a test run is completed (all steps marked either pass or fail) its outcome gets locked and can't be altered.
While the reasoning behind this logic is understood and needed,
adding the ability to change a test outcome depending on access level (admin/lead access only) would allow more flexibility and would help in situations like this:
Several test runs are completed as failed due to an issue that was overlooked and, in the end, turned out not to be an issue.
This mistake cannot be amended, and a new run is required, which then means more work when extracting metrics, etc.
Test outcome should remain editable as long as the End Run button has not been pressed.
This would allow creation of a super Test Set, made up of other test sets without having to add each test to it individually, and prevent having to update multiple test sets when one test is added.
Many organizations rely on requirements documents in Word to communicate requirements to the business for review and approval. In my organization, for example, we do not have (nor is it likely we will ever have) the business log into QAComplete (or any other requirements/testing tool) to review requirements. All such reviews are conducted via Word documents.
Currently, my only option via QAComplete is to export project requirements to Excel and then copy/paste these into a Word template used for requirements. This works, but is awkward at best.
Adding the ability to generate a Word-based requirements document from a template would make the requirements part of the tool much more useful to business analysts.
When you copy tests from one project to another in QAC, all related items are copied as well, including history, notes, and attachments from the original items. We reuse test cases frequently and are often asked to copy previously executed test cases over to new projects. It is inconvenient to go through the new tests and delete all history, notes, and attachments; I believe the alternative is to upload fresh tests every time. Since we deal with thousands of tests, the ability to copy tests from one project to another while creating a brand-new instance of each test is important.
Suggestion: a Fast Edit option to copy a test, or multiple tests, without pulling over any history, notes, or attachments from the copy source.
We have thousands of tests in our testing events that must be organized in folders by application, and often sub-folders for Day 1, Day 2, and so on, due to the hospital workflows we test. Right now all folders must be created manually in QAC. When we were on HP ALM, we were able to build our folder structure into the upload path and ALM would create the folders, so we never had to do that manually. This manual effort can take quite a while, and in a resource/time-constrained environment every minute/hour counts.
Suggestion: Make it possible to define the folder names/structure in the upload *.csv file and enhance tool to create the folders automatically during the upload processing.
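The steps the upload process would need are straightforward: gather every folder path referenced in the CSV, add the implied parent folders, and create them parent-first before inserting the tests. A minimal sketch in Python (the "FolderPath" column name and the sample data are made up for illustration):

```python
import csv
import io

# Sample upload rows with a hypothetical "FolderPath" column.
sample_csv = """Title,FolderPath
Check admissions order,Epic/Day 1
Check discharge summary,Epic/Day 2/Pharmacy
Check lab results,Epic/Day 1
"""

def folders_to_create(rows):
    """Return every folder (including implied parents) referenced by
    the upload, ordered so each parent precedes its children."""
    needed = set()
    for row in rows:
        parts = row["FolderPath"].split("/")
        for depth in range(1, len(parts) + 1):
            needed.add("/".join(parts[:depth]))
    # A parent path is a prefix of its children, so it sorts first.
    return sorted(needed)

rows = list(csv.DictReader(io.StringIO(sample_csv)))
for folder in folders_to_create(rows):
    print(folder)  # create-if-missing, parents before children
```

The tool would call its own create-folder routine in that order, skipping folders that already exist, so re-uploading the same file stays safe.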
We currently have a number of teams using our on-prem instance of QAComplete. We have concurrent licenses, however some teams end up using all the licenses, stopping other teams from logging in to work on their projects.
It would be useful if we could set the maximum concurrent logins per project.
QAC makes entering Test Steps and Expected Results quick and easy. What is not quick is having to change tabs, upload a limited number of files, and specify the mapping for which file should be associated with a particular step. I would like to see this process streamlined and on the same tab as Test Steps.
Perhaps after tabbing past Expected Results, you could have a load file icon which would have the focus, the user could press the space bar, select the file, click Okay, and continue tabbing to enter the next Step and Expected Result. Below is a mockup of what it could look like.
Some feature requests for the dashboard:
1) A "Refresh" button for each individual dashboard widget. Sometimes I'm not sure whether a widget is up to date, and refreshing the whole page is pretty silly. So a "refresh all widgets" option, or the ability to refresh an individual widget, would be great.
2) Each dashboard widget should indicate how it is being filtered, via some kind of header. In the attached image I can't tell that the widget is filtered to the single release I am targeting (unless there is another widget I should be using?)
We need a method by which a step or series of steps can be written once and called from multiple tests. For example, I want to write a login procedure and then call it from other tests. Specifically, I want to have a test step something like “Call ‘Login’”. At execution, this step would be replaced by the steps in the test called “Login”. I do not want to copy the steps (as we do now), because then I have to maintain those steps in many places.
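To make the requested behavior concrete, here is a minimal sketch of how that expansion could work at execution time: a step written as Call 'Name' is replaced by the steps of the test named Name, recursively, with a guard against circular calls. The test names, step text, and Call syntax are all hypothetical, not existing QAComplete features:

```python
import re

# Hypothetical test library: test name -> list of step texts.
tests = {
    "Login": [
        "Open the application",
        "Enter username and password",
        "Click Sign In",
    ],
    "Create Order": [
        "Call 'Login'",          # shared-step reference
        "Open the Orders page",
        "Click New Order",
    ],
}

CALL = re.compile(r"^Call '(.+)'$")

def expand(test_name, seen=()):
    """Return the fully expanded step list for a test, inlining
    any Call 'Name' references."""
    steps = []
    for step in tests[test_name]:
        match = CALL.match(step)
        if match:
            called = match.group(1)
            if called in seen:  # guard against circular calls
                raise ValueError(f"circular call to {called!r}")
            steps.extend(expand(called, seen + (test_name,)))
        else:
            steps.append(step)
    return steps

print(expand("Create Order"))
```

The key property is that the "Login" steps live in exactly one place; every test that calls them picks up edits automatically.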
There are only two ways to link a test to a configuration:
1. select the test (or test set), open the link to items, select the configuration folder, and check each sub item (e.g. Windows8>IE, and Chrome and Firefox), repeating for each config folder.
2. select the configuration, open the link to items, specify type and folder, and individually check every item (test) in the folder. Repeat for every configuration.
Even with just a few Windows environments, each with 3 or 4 different browsers, multiplied by our 2,500 tests, this is a monumental task. While we could have done the linking as we created the tests, our focus was on creating the tests, not linking... that was a mistake.
However now, we need to make these links, yet we are looking at nearly 10,000 clicks/steps.
If nothing else, add a "Select All" option to the test (or set) selection drop-down to check all tests in that folder. Then we would just need to repeat for each configuration and each folder... a much more manageable task.
We have about 2,500 tests at this time. As the process of connecting that many tests, even organized into test sets, is cumbersome (I'm creating another suggestion for that issue), it would be extremely helpful to be able to clone a release or iteration to another release or iteration and carry across the linked items (we only use tests, not sprints or requirements). That way, even though I would have to manually link all the tests to the first release, for the next release I could clone, be done, and just add any new tests (or test sets).
Anytime a folder structure is displayed within QAC, I would like to be able to collapse folders. Additionally, I would like the initial view of the folder structure to default to a collapsed state. Navigating through a directory tree where all folders are expanded is time consuming and at times frustrating.
I would like a report similar to the 'Test Coverage Run by Test Set' report; however, I would like it to display only the last result.
So, if I have a release with multiple builds this report currently consolidates all results. For example, if the release has 2 builds:
- The first build might result in 10 Passes and 10 Fails
- The second build might result in 10 passes – we may or may not re-run the already passed tests.
In this scenario the report for the release will display a total of 20 passes and 10 fails – I would like it to show just the last recorded status – irrespective of the build it was run in.
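The "last recorded status" logic amounts to keeping, for each test, only the result from its most recent build before counting. A small sketch with made-up run data (test IDs, build numbers, and statuses are all illustrative):

```python
from collections import Counter

# (test, build, status) run records across a release with 2 builds.
runs = [
    ("T1", 1, "Pass"),
    ("T2", 1, "Fail"),
    ("T3", 1, "Fail"),
    ("T2", 2, "Pass"),  # T2 re-run in build 2; T1 and T3 not re-run
]

def last_statuses(runs):
    """Map each test to the status from its highest-numbered build."""
    latest = {}
    for test, build, status in runs:
        if test not in latest or build > latest[test][0]:
            latest[test] = (build, status)
    return {test: status for test, (build, status) in latest.items()}

# Report totals count each test once, using only its last result.
print(Counter(last_statuses(runs).values()))
```

With this data the consolidated view would show 3 passes and 2 fails (5 results), while the last-status view shows 2 passes and 1 fail: one row per test, regardless of build.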