Showing ideas with status Accepted for Discussion.
Status: Accepted for Discussion
Submitted by PepeFocus on 05-07-2015 11:57 AM
The Test Runner screen is one of the layouts that can't be customized in the app at the moment. In addition, the default view of the pre-set fields does not allocate much room for some columns; the worst case for us is the Title (test case name) column. For manual testing this means always resizing the field prior to execution, and the changes aren't saved after the window is closed. In a big organization, with a large number of test sets per QA engineer, these few extra clicks add up to a real time waster. As discussed here: http://community.smartbear.com/t5/QAComplete-Feature-Requests/QAComplete-add-the-Priority-of-Test-Cases-to-the-Test-Runner/idi-p/98369 "We looked into providing a 'screen layout' for the test runner and it proved to be a larger issue than we expected." 1) Is this feature in development, or has it been dropped? I can't find a related ticket to Kudo it. 2) Would it be possible to keep changes to field size/length after the Test Runner is closed, so that opening a new Test Runner screen displays the exact settings of the previous one?
Status: Accepted for Discussion
Submitted by PepeFocus on 05-07-2015 12:25 PM
After a test run is completed (all steps marked either pass or fail), its outcome gets locked and can't be altered. While the reasoning behind this behavior is sound and needed, the ability to change a test outcome depending on access level (admin/lead access only) would allow more flexibility and help in situations like this: several test runs are completed as failed due to an issue that was overlooked and in the end turned out not to be an issue. That mistake cannot be amended, so a new run is required, which then means more work when extracting metrics, etc. The test outcome should remain editable as long as the End Run button has not been pressed.
Status: Accepted for Discussion
Submitted by kwiegandt on 06-22-2015 07:12 AM
I know that ALMComplete allows us to create custom dashboards and reports. That said, most of us do not have the security clearance to do so. This is an area where we need a MAJOR improvement. Get rid of Crystal Reports! Tools that have a steep learning curve or require SQL knowledge leave many people behind, and we don't have that kind of time. A simple way of creating reports and dashboards is needed!
Status: Accepted for Discussion
Submitted by kwiegandt on 06-22-2015 07:07 AM
We need a method by which a step or series of steps can be written once and called from multiple tests. For example, I want to write a login procedure and then call it from other tests. Specifically, I want a test step something like "Call 'Login'"; at execution, this step would be replaced by the steps of the test called "Login". I do not want to copy the steps (as we do now), because then I have to maintain those steps in many places.
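To make the request concrete, here is a minimal sketch (not the QAComplete data model; all names invented) of how a "Call 'X'" step could be expanded into the steps of test X at execution time, so a shared procedure like Login is authored and maintained in one place:

```python
# Hypothetical sketch: a test library where "Call 'X'" steps are
# substituted at execution time with the steps of the referenced test.

TEST_LIBRARY = {
    "Login": ["Open the application", "Enter credentials", "Press Sign In"],
    "Checkout": ["Call 'Login'", "Add an item to the cart", "Pay"],
}

def expand_steps(test_name, library=TEST_LIBRARY, _seen=frozenset()):
    """Recursively replace Call 'X' steps with the steps of test X."""
    if test_name in _seen:  # guard against a test calling itself
        raise ValueError(f"Cyclic call to {test_name!r}")
    expanded = []
    for step in library[test_name]:
        if step.startswith("Call '") and step.endswith("'"):
            callee = step[len("Call '"):-1]
            expanded.extend(expand_steps(callee, library, _seen | {test_name}))
        else:
            expanded.append(step)
    return expanded
```

With this scheme, `expand_steps("Checkout")` yields the three Login steps followed by the two Checkout-specific steps, and editing the Login test updates every test that calls it.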
Status: Accepted for Discussion
Submitted by dgossner on 05-05-2016 10:59 AM
This would allow creation of a super Test Set made up of other test sets, without having to add each test to it individually, and would prevent having to update multiple test sets when a single test is added.
Status: Accepted for Discussion
Submitted by MCastle on 05-04-2015 04:53 PM
QAC makes entering Test Steps and Expected Results quick and easy. What is not quick is having to change tabs, upload a limited number of files, and specify the mapping for which file should be associated with a particular step. I would like to see this process streamlined and placed on the same tab as Test Steps. Perhaps, after tabbing past Expected Results, a load-file icon could receive focus; the user could press the space bar, select the file, click OK, and continue tabbing to enter the next Step and Expected Result. Below is a mockup of what it could look like.
Status: Accepted for Discussion
Submitted by johnc_1 on 05-05-2015 05:42 AM
There are only two ways to link a test to a configuration: 1) select the test (or test set), open Link to Items, select the configuration folder, and check each sub-item (e.g. Windows 8 > IE, Chrome, and Firefox), repeating for each config folder; or 2) select the configuration, open Link to Items, specify the type and folder, and individually check every item (test) in the folder, repeating for every configuration. Even with just a few Windows environments, each with 3 or 4 different browsers, multiplied by our 2,500 tests, this is a monumental task. While we could have done it as we created the tests, our focus was on creating the tests, not on linking; that was a mistake. Now we need to make these links, yet we are looking at nearly 10,000 clicks/steps. If nothing else, add a "Select All" to the test (or set) selection drop-down to check all tests in that folder. Then we would just need to repeat for each configuration and each folder, a much more manageable task.
Status: Accepted for Discussion
Submitted by chrisb on 03-10-2015 09:06 AM
Update / modernize the UI.
Status: Accepted for Discussion
Submitted by chrisb on 03-11-2015 05:24 AM
I would like to be able to select an option in QA Complete that assigns test runs to the next available host from my pool of test hosts, rather than limiting test runs to one specific host; the current behavior is not an efficient use of resources.
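The requested behavior amounts to simple pool-based dispatch. A minimal sketch (host and run names invented, not QA Complete's scheduler) of pairing queued runs with the next free host:

```python
# Illustrative sketch: assign each queued test run to the next available
# host from a shared pool (FIFO); runs left over wait for a free host.

from collections import deque

def assign_runs(queued_runs, free_hosts):
    """Return (run -> host assignments, runs still pending)."""
    pool = deque(free_hosts)
    assignments = {}
    pending = []
    for run in queued_runs:
        if pool:
            assignments[run] = pool.popleft()
        else:
            pending.append(run)
    return assignments, pending
```

For example, three queued runs against a two-host pool would dispatch two runs immediately and leave one pending until a host frees up, instead of serializing everything on a single pinned host.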
Status: Accepted for Discussion
Submitted by kwiegandt on 06-22-2015 09:56 AM
It would help us cut down on duplication if we could link to an item in another project.
Status: Accepted for Discussion
Submitted by GrumpyGit on 05-26-2015 07:20 AM
Can you please reduce the SmartBear banner at the top of the page? This would enable us to see more of what we purchased the product for. Can you also reduce the size of the customer company logo box (bottom left-hand corner)? It's nice to have our logo on screen, but being able to configure the size of the box, or turn it off, would make the product more usable. Thanks
Status: Accepted for Discussion
Submitted by vpaka on 10-30-2015 07:18 AM
Color-code alerts on all Linked Items when there is a change to an existing requirement (originally filed as support case 00140954). Most of the time Business Analysts modify requirements or add additional information to them. It would be really nice to have a feature that highlights the change and the affected items for the benefit of users and the project. Recommendation: highlight all linked items in YELLOW so that all related users, once logged into QAComplete, can see and acknowledge the change.
Status: Accepted for Discussion
Submitted by vpaka on 08-17-2015 08:15 AM
It would be a really good feature to have the ability to link an existing active defect that is causing a test to fail during test execution from the Test Set. Currently all we can see is the "Auto Create Defect" window. This would be really helpful for failing a test case and continuing to execute the rest of the test cases from the test set, by simply attaching an existing defect to it.
Status: Accepted for Discussion
Submitted by kwiegandt on 06-22-2015 06:30 AM
In the Test Library, I can display all tests linked (or not linked) to a release by using a filter, but this is not intuitive to many of our testers. To be consistent with the other entities in the system, it would be helpful to have a Releases tab in the navigation pane on the left, before the Folders tab, as is the case for Test Sets.
Status: Accepted for Discussion
Submitted by maximojo on 05-09-2016 10:16 PM
Some feature requests for the dashboard: 1) A refresh button for each individual dashboard widget. Sometimes I'm not sure whether a widget is up to date, and refreshing the whole page is pretty silly. A "refresh all widgets" option, or the ability to refresh an individual widget, would be great. 2) Each dashboard widget should indicate how it is filtered, e.g. in some kind of header. In the attached image I can't tell that the widget is filtered to the single release I am targeting (unless there is another widget I should be using?). Thanks, m
Status: Accepted for Discussion
Submitted by ghuff2 on 05-06-2016 11:03 AM
The issue I'm facing is in Test Runner, with selecting the Test Host on which an automated test executes. I don't set a Default Test Host for my tests because we use a set of different Test Hosts, usually depending on which environment we want to run the tests against, or just whichever is available. So when launching a Test or Test Set we simply select the Test Host for each one on the Test Runner screen using the < Select Host > drop-down. This works fine for running single tests, or test sets with only a couple of automated tests. But we have some Test Sets with a larger number of tests, and we'd like to have even larger ones, and it's very tedious to select the Test Host for each test in a test set at run time. What I'd like is the ability to select the Test Host for all of the automated tests in a Test Set at the same time.
Status: Accepted for Discussion
Submitted by ekiza23 on 11-16-2015 09:24 AM
When associating a requirement with a release (Release A), keep track of the version of the requirement and of the test cases related to it. If the requirement or a test case changes in a later release (Release B), Release B will be associated with the new version of the requirement and/or test case, but Release A will not be affected. Optionally, offer a dialog asking whether any of the linked (and still active) releases should be updated to link to the new requirement/test case version.
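The idea above is essentially version pinning per release. A hypothetical data-model sketch (names invented, not QAComplete's schema) where each release records the specific requirement version it was linked against:

```python
# Hypothetical sketch: each release pins a specific version of a
# requirement, so editing the requirement for a later release does not
# silently change what an earlier release was associated with.

requirement_versions = {}   # req_id -> latest version number
release_links = {}          # release -> {req_id: pinned version}

def link(release, req_id):
    """Link a requirement to a release at the requirement's current version."""
    version = requirement_versions.setdefault(req_id, 1)
    release_links.setdefault(release, {})[req_id] = version

def edit_requirement(req_id, update_releases=()):
    """Create a new version; only the releases listed opt in to it."""
    requirement_versions[req_id] = requirement_versions.get(req_id, 0) + 1
    for release in update_releases:
        if req_id in release_links.get(release, {}):
            release_links[release][req_id] = requirement_versions[req_id]
    return requirement_versions[req_id]
```

With this model, editing REQ-1 while opting in only Release B leaves Release A pinned to version 1 while Release B moves to version 2, matching the behavior the request describes.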
Status: Accepted for Discussion
Submitted by chrisb on 10-14-2015 08:34 AM
Building a library of automated tests in QA Complete is a time-consuming process. The user must do a lot of manual data entry to add each test, and a single typo in a file path can stop the test from executing. It would be much simpler if the user could open QA Complete in a browser on the machine that contains their tests, browse to the tests directory, and drag and drop them in.
Status: Accepted for Discussion
Submitted by TBValdez on 07-06-2015 09:48 AM
We recommend that the reporting feature within QA Complete be updated for more flexibility and user-friendliness. One thing our QA team does for reporting is generate test metrics reports that list the Test Set Name or Test Name, the # of tests (not runs), # passed, # failed, and # blocked. While we understand the logic behind reporting values based on the # of runs, our customers and project teams do not want to know that we executed a test set 3 times and it failed twice. They want the total # of tests (a baseline number), # passed, # failed, and # blocked, independent of execution counts. They do not want to see that we ran Test A, iteration #30, 6 times. For example, if Test Set A has 5 tests but, with tokenization, 300 test iterations, the report needs to display 5 tests for this Test Set along with the # failed, # passed, and # blocked. Right now the canned reports in QA Complete display this based on the tokenization iterations, which is too detailed. We have spent hours working with these reports, but with reporting built around Crystal Reports, our ability to build the report we need is limited.
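The aggregation being asked for can be stated precisely: collapse per-run (or per-iteration) records down to one current status per test, then count tests. A small illustrative sketch (field names and statuses invented):

```python
# Illustrative sketch: keep only the latest run per test, then count
# statuses, so a report shows "3 tests: 2 passed, 1 blocked" rather than
# one row per run or per tokenized iteration.

from collections import Counter

runs = [  # (test_name, run_number, status)
    ("Test A", 1, "Failed"), ("Test A", 2, "Failed"), ("Test A", 3, "Passed"),
    ("Test B", 1, "Passed"),
    ("Test C", 1, "Blocked"),
]

def summarize(run_records):
    """Collapse run records to the latest status per test, then tally."""
    latest = {}
    for test, run_no, status in run_records:
        if test not in latest or run_no > latest[test][0]:
            latest[test] = (run_no, status)
    counts = Counter(status for _, status in latest.values())
    return {"tests": len(latest), **dict(counts)}
```

Here Test A was run three times and passed on the final attempt, so it counts once as passed; the summary reports 3 tests rather than 5 run records.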
Status: Accepted for Discussion
Submitted by tlabl on 07-01-2015 09:12 PM
There needs to be support for linking Agile Tasks to Test Sets. It is much more efficient to create test sets, assign them out, and link to them than to link hundreds or thousands of individual tests to various agile tasks.