QAComplete and ALMComplete

New Idea

We are not used to using the am/pm notation, and I keep getting confused about whether my scheduled tests will run in the morning or the afternoon.

So I would like an option to use 24-hour notation everywhere.

There are so many different screens, buttons, and choices when entering a Test Set Suite, a Test Case, or a Test Step that it can be confusing to know where you are at times.

 

  1. To add a new step, I select the test and go to Steps. The button only says Edit Steps, and only after you select it do you get the option to add a step. It would be nice if the button said Add/Edit Steps, like it does when it says Add/Edit Tests.
  2. When in a screen, the indication of what the user is doing is displayed at the very top left of the screen; it would be nice if it were a bit more visible.

Test Set --- Last run status is confusing

Status: New Idea
by sheba0907 on 05-16-2017 10:57 AM

The last run status on a Test Set can be confusing, especially for a newbie like me.

 

I started a test run, some tests failed, and I ended the run:

- Run History shows status = failed (good and accurate)

- Test set list in the right nav shows failed (good and accurate)

I started the test run again, did not End Run, just Save & Close:

- Run History shows status = In Progress (good and accurate)

- Test set list in the right nav shows failed (not current, not accurate, and confusing)


Currently (QAC 11.0 or older), in order to get a complete Release report, for example, you need to manually link all requirements and test cases that are in scope for that release.

In our team, we logically assumed that if you have test sets associated with test cases and you map those sets to a release, the test scenarios will automatically be associated with that release.

If we have to run, say, full regression testing of all existing test cases in one release (a few thousand of them), that means we have to manually link all of those test cases to the release.

The link between the above work items should be improved and automated.

 

Example below:

 

Releases_mapping.png
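A rough sketch of the transitive linking we have in mind, using hypothetical release/test set/test/requirement data rather than QAComplete's actual data model: mapping a test set to a release would automatically pull its tests, and the requirements those tests cover, into scope.

```python
# Hypothetical sketch of the proposed transitive linking; the data below is
# illustrative only, not QAComplete's actual schema.
release_test_sets = {"Release 5.2": ["Regression Set A", "Regression Set B"]}
test_set_tests = {"Regression Set A": ["TC-1", "TC-2"], "Regression Set B": ["TC-3"]}
test_requirements = {"TC-1": ["REQ-10"], "TC-2": ["REQ-10", "REQ-11"], "TC-3": ["REQ-12"]}

def release_scope(release):
    """Resolve every test and requirement that is in scope for a release."""
    tests, requirements = set(), set()
    for test_set in release_test_sets.get(release, []):
        for test in test_set_tests.get(test_set, []):
            tests.add(test)
            requirements.update(test_requirements.get(test, []))
    return sorted(tests), sorted(requirements)

print(release_scope("Release 5.2"))
# (['TC-1', 'TC-2', 'TC-3'], ['REQ-10', 'REQ-11', 'REQ-12'])
```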

The Test Runner screen is one of the layouts that cannot be customized in the app at the moment.

In addition, the default view of pre-set fields does not allocate much room for some fields; the worst case for us is the Title (test case name) column.

 

For manual testing, this means always having to resize this field prior to execution, and the changes aren't saved after the window is closed.

In a big organization, with a large number of test sets per QA, going through these few clicks is a real time waster.

 

As discussed here:

http://community.smartbear.com/t5/QAComplete-Feature-Requests/QAComplete-add-the-Priority-of-Test-Ca...

 

 We looked into providing a "screen layout" for the test runner and it proved to be a larger issue than we expected.

 

1) Is this feature in development or dropped? I can't find a related ticket to Kudo it.

 

2) Would it be possible for changes to the size/length of fields to be kept after the test runner is closed, so that opening a new test runner screen displays the exact settings of the previous one?

Linking requirements to tests is a time-consuming task - there is no quick and easy way to do it. It would be great if you could include the path/name of the requirement(s) a test should be linked to in the CSV file that is used to upload tests into QA Complete.  The prerequisite would be that the requirements are loaded prior to loading the tests.
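As a rough illustration of what such an upload file could look like (the "Linked Requirement Path" column name and folder paths here are hypothetical, not the actual QAComplete import schema), a small sketch that writes the CSV:

```python
import csv

# Hypothetical upload file: the "Linked Requirement Path" column is the
# suggested addition; column names and paths are illustrative only.
rows = [
    {"Title": "Verify login with valid credentials",
     "Folder": "Authentication/Smoke",
     "Linked Requirement Path": "Release 5.2/Security/REQ-101 User login"},
    {"Title": "Verify password reset email",
     "Folder": "Authentication/Regression",
     "Linked Requirement Path": "Release 5.2/Security/REQ-114 Password reset"},
]

with open("tests_upload.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```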

Many of our tests have more than six lines per step.  We can't see these in the Step frame of the right panel of the Test Runner without scrolling, which is annoying.  It's more important to us to see everything in the Step frame than see what's in the Expected and Actual Results panels. 

It's not practical for us to hover over the step in the left nav because sometimes we have to copy/paste text from the Step box.

Also, the lines in the test step were entered single-spaced in the Test Editor, but the Test Runner shows them double-spaced.

 

I suggest changing this panel so the user can drag the divider line in the center of the frame to dynamically resize the top portion that contains the Step box. This would let the user see more in the Step box and less of the Expected and Actual Results boxes, or the reverse if they choose.

This would allow creation of a super Test Set, made up of other test sets, without having to add each test to it individually, and would prevent having to update multiple test sets when one test is added.
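A minimal sketch of the idea, with hypothetical TestSet objects: a "super" set keeps references to its child sets and resolves their tests at run time, so adding a test to a child set is automatically reflected in the parent.

```python
# Hypothetical composite test set: nested sets are expanded on demand, so a
# test added to a child set automatically appears in the parent's resolution.
class TestSet:
    def __init__(self, name, tests=None, child_sets=None):
        self.name = name
        self.tests = list(tests or [])
        self.child_sets = list(child_sets or [])

    def resolve_tests(self):
        """Return this set's own tests plus those of all nested test sets."""
        resolved = list(self.tests)
        for child in self.child_sets:
            resolved.extend(child.resolve_tests())
        return resolved

smoke = TestSet("Smoke", tests=["Login", "Search"])
payments = TestSet("Payments", tests=["Checkout"])
regression = TestSet("Full Regression", child_sets=[smoke, payments])
print(regression.resolve_tests())  # ['Login', 'Search', 'Checkout']
```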

Add option to edit test outcome for a completed test run

Status: Accepted for Discussion
by PepeFocus 05-07-2015 12:25 PM - edited 05-09-2015 06:13 AM

After a test run is completed (all steps marked either pass or fail), its outcome gets locked and can't be altered.

 

While the reasoning behind this is understood and needed, adding the ability to change a test outcome depending on access level (admin/lead access only) would allow more flexibility and would help in situations like this:

 

Several test runs are completed as failed due to an issue that was overlooked and, in the end, was not an issue.

This mistake cannot be amended, and a new run is required, which then means more work when extracting metrics, etc.

 

The test outcome should be editable as long as the End Run button has not been pressed.

Re: Update / modernize the UI: Reports and Dashboards

Status: Accepted for Discussion
by kwiegandt on 06-22-2015 07:12 AM - last edited on 06-22-2015 11:40 AM by Staff

I know that ALMComplete allows us to make custom dashboards and reports. That being said, most of us do not have the security clearance to do that. This is an area where we need a MAJOR improvement. Get rid of Crystal Reports! Tools that have a learning curve or require SQL knowledge leave many people behind. We don’t have that kind of time. A simple way of creating reports and dashboards is needed!

Many organizations rely on requirements documents in Word to communicate requirements to the business for review and approval. In my organization, for example, we do not have (nor is it likely we will ever have) the business log into QAComplete (or any other requirements/testing tool) to review requirements. All such reviews are conducted via Word documents.

 

Currently, my only option via QAComplete is to export project requirements to Excel and then copy/paste these into a Word template used for requirements. This works, but is awkward at best.

 

Adding the ability to generate a Word-based requirements document from a template would make the requirements part of the tool much more useful to business analysts.
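A minimal sketch of what such generation could look like, assuming the requirements have already been exported to a simple list; this uses the python-docx library, and the field names are illustrative, not an actual QAComplete export format.

```python
# Minimal sketch: build a Word requirements document from an exported list of
# requirements. Uses python-docx; field names below are illustrative only.
from docx import Document

requirements = [
    {"id": "REQ-101", "title": "User login",
     "description": "Users can sign in with email and password."},
    {"id": "REQ-114", "title": "Password reset",
     "description": "Users can request a password reset email."},
]

doc = Document()
doc.add_heading("Project Requirements", level=0)
for req in requirements:
    doc.add_heading(f"{req['id']}: {req['title']}", level=1)
    doc.add_paragraph(req["description"])
doc.save("requirements_review.docx")
```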

We need a method by which a step or series of steps can be written once and called from multiple tests. For example, I want to write a login procedure and then call it from other tests. Specifically, I want to have a test step something like “Call ‘Login’”. At execution, this step would be replaced by the steps in the test called “Login”. I do not want to copy the steps (as we do now), because then I have to maintain those steps in many places.
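A rough sketch of how the expansion could behave at execution time, with hypothetical test data: a step written as Call 'Login' is replaced by the steps of the referenced test, so the login procedure lives in exactly one place.

```python
# Hypothetical illustration of "called" steps: a step of the form
# Call 'TestName' is replaced at execution time by that test's own steps.
import re

library = {
    "Login": ["Open the application", "Enter username", "Enter password", "Click Sign In"],
    "Create Order": ["Call 'Login'", "Open Orders", "Click New Order", "Save"],
}

CALL = re.compile(r"^Call '(.+)'$")

def expand(test_name):
    """Return the test's steps with every Call 'X' step expanded recursively."""
    steps = []
    for step in library[test_name]:
        match = CALL.match(step)
        steps.extend(expand(match.group(1)) if match else [step])
    return steps

print(expand("Create Order"))
```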

QAC makes entering Test Steps and Expected Results quick and easy.  What is not quick is having to change tabs, upload a limited number of files, and specify the mapping for which file should be associated with a particular step.  I would like to see this process streamlined and on the same tab as Test Steps.

 

Perhaps after tabbing past Expected Results, you could have a load-file icon that would receive focus; the user could press the space bar, select the file, click Okay, and continue tabbing to enter the next Step and Expected Result.  Below is a mockup of what it could look like.

 

QAC - Edit Test Steps - Add File.png

When starting tests from test sets, the only way to view a test's description is to open the print preview.

 

However, since our tests usually have a lot of different preconditions (e.g., which database to use or which steps to prepare), we need the description quite often.

 

It would be very helpful if a test's description and notes were accessible more quickly in the test runner.

There are only two ways to link a test to a configuration: either

1. select the test (or test set), open the link to items, select the configuration folder, and check each sub-item (e.g., Windows 8 > IE, Chrome, and Firefox), repeating for each config folder, or

2. select the configuration, open the link to items, specify type and folder, and individually check every item (test) in the folder.  Repeat for every configuration.

 

Even with just a few Windows environments, each with 3 or 4 different browsers, multiplied by our 2500 tests, this is a monumental task.  While we could have done it as we created the tests, our focus was on creating the tests, not on linking them... that was a mistake.

Now, however, we need to make these links, and we are looking at nearly 10,000 clicks/steps.

 

If nothing else, in the tests (or sets) selection drop-down, add a "Select All" to check all tests in that folder.  Then we would just need to repeat for each Configuration and each folder... a much more manageable task.

QAC-SelectAll.bmp

 

 

Collapsible Folders/Directory Trees

Status: Implemented
by MCastle on 05-04-2015 04:15 PM

Anytime a folder structure is displayed within QAC, I would like to be able to collapse folders.  Additionally, I would like the initial view of the folder structure to default to a collapsed state.  Navigating through a directory tree where all folders are expanded is time-consuming and at times frustrating.

 

QAC - Folders - Link to Item.png

 

QAC - Folders - Add Test Steps from Another Test.png

 

QAC - Folders - Parent Folder.png

 

 


Update / modernize the UI

Status: Accepted for Discussion
by 03-10-2015 09:06 AM - edited 03-10-2015 09:38 AM

Update / modernize the UI. 

Add ability to clone release

Status: New Idea
by johnc_1 on 05-05-2015 05:25 AM

We have about 2500 tests at this time.  As the process of connecting that many tests, even organized into test sets, is cumbersome (I am creating another suggestion for that issue), it would be extremely helpful to be able to clone a release or iteration to another release or iteration and carry across the linked items (we only use tests, not sprints or requirements).  That way, even though I would have to manually link all the tests to a release, at least for the next release I could clone and be done, and just add any new tests (or test sets).

A key metric I am commonly asked for is "what is our test progress against plan?"

Right now, I have a custom field "planned test date" that I have to manually populate test by test (or fast edit).

The only way I can report on this is to run a test library export (from legacy reports, as the new reports module's .csv includes excessive blank columns) into an MS Excel dashboard, where I then have a formula similar to this:

=COUNTIFS('QAC_Extract'![RANGE],[LOOKUP],'QAC_Extract'![RANGE],"<="&[TODAY]) (where LOOKUP is the Planned Test Date)

Used in conjunction with the total number of tests and the count of tests that have a Last Run Status, I can determine how many were planned by today and how many were attempted.


This gives me stats as shown in the attachment.
It would be great if QAComplete could do all this tracking for me, using a dashboard line chart.

QACMetrics.png
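For comparison, a hedged Python equivalent of that spreadsheet calculation, assuming a hypothetical CSV export with "Planned Test Date" and "Last Run Status" columns (the file name, column names, and date format are illustrative only):

```python
# Rough Python equivalent of the spreadsheet calculation, assuming a
# hypothetical CSV export with "Planned Test Date" and "Last Run Status"
# columns (names and date format are illustrative, not the actual export).
import csv
from datetime import date, datetime

with open("qac_extract.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

today = date.today()
planned_by_today = sum(
    1 for r in rows
    if r["Planned Test Date"]
    and datetime.strptime(r["Planned Test Date"], "%m/%d/%Y").date() <= today
)
attempted = sum(1 for r in rows if r["Last Run Status"].strip())

print(f"Planned by today: {planned_by_today} of {len(rows)} tests")
print(f"Attempted so far: {attempted}")
```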

 

Reporting

Status: New Idea
by rachel_hughes on 07-04-2016 12:33 PM

I would like a report similar to the 'Test Coverage Run by Test Set'; however, I would like the report to display only the last result.

 

So, if I have a release with multiple builds, this report currently consolidates all results.  For example, if the release has 2 builds:

- The first build might result in 10 Passes and 10 Fails

- The second build might result in 10 passes – we may or may not re-run the already passed tests.

 

In this scenario, the report for the release will display a total of 20 passes and 10 fails. I would like it to show just the last recorded status, irrespective of the build it was run in.
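A small sketch of the roll-up being asked for, using hypothetical run-history records: only each test's most recent result is counted, irrespective of the build it was run in.

```python
# Sketch of the requested "last result only" roll-up, using hypothetical
# run-history records; later builds overwrite earlier results per test.
from collections import Counter

run_history = [
    {"test": "TC-1", "build": 1, "result": "Failed"},
    {"test": "TC-1", "build": 2, "result": "Passed"},
    {"test": "TC-2", "build": 1, "result": "Passed"},
]

latest = {}
for run in sorted(run_history, key=lambda r: r["build"]):
    latest[run["test"]] = run["result"]   # keep only the most recent result

print(Counter(latest.values()))  # Counter({'Passed': 2}) - no double counting
```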