Add ability to customize Test Runner layout
The Test Runner screen is one of the layouts that can't be customized in the app at the moment. In addition, the default view of pre-set fields does not allocate much room for some fields; the worst case for us is the Title (test case name) column. For manual testing it always has to be resized before execution, and the change isn't saved after the window is closed. In a big organization, with a large number of test sets per QA, going through these few clicks every time adds up to a real time waster. As discussed here: http://community.smartbear.com/t5/QAComplete-Feature-Requests/QAComplete-add-the-Priority-of-Test-Cases-to-the-Test-Runner/idi-p/98369, "We looked into providing a 'screen layout' for the test runner and it proved to be a larger issue than we expected." 1) Is this feature in development or has it been dropped? I can't find a related ticket to kudo it. 2) Would it be possible to keep changes to the size/length of fields after the Test Runner is closed, so that opening a new Test Runner screen displays the exact settings of the previous one?
PepeFocus · 10 years ago · Regular Visitor · Accepted for Discussion · 13K Views · 22 likes · 4 Comments

Improve mapping between Release-TestCase-TestSet-Requirements
Currently (QAC 11.0 or older), in order to get a complete Release report, for example, you need to manually link all requirements and test cases that are in scope for the release. Our team logically assumed that if test sets are associated with test cases and you map those sets to a release, the test scenarios would automatically be associated with that release. If we have to run, say, full regression testing of all existing test cases in one release (a few thousand), that means we have to link all of those test cases to the release manually. The links between the above work items should be improved and automated. Example below:
datanasova · 9 years ago · Senior Member · New Idea · 10K Views · 16 likes · 3 Comments

Add option to edit test outcome for a completed test run
After a test run is completed (all steps marked either pass or fail), its outcome gets locked and can't be altered. While the reason behind this is logical and needed, adding the ability to change the test outcome depending on access level (admin/lead access only) would allow more flexibility and would help in situations like this: several test runs are completed as failed due to an issue that was overlooked and in the end turned out not to be an issue. This mistake cannot be amended, and a new run is required, which then means more work when extracting metrics, etc. The test outcome should be editable as long as the End Run button has not been pressed.
PepeFocus · 4 years ago · Regular Visitor · Accepted for Discussion · 23K Views · 17 likes · 11 Comments

New/Improved Testing Results Report based on a Release
Requesting a new Testing Results Report that provides the following:
1. Contains a drop-down list of the Releases for the applicable QAComplete project.
2. Once a Release is selected, the report lists the Requirements linked to the selected Release.
3. Under each linked Requirement, the report lists the Test Cases (Test ID & Title) linked to the applicable Requirement(s).
4. For each Test Case listed, the Test Results are included, but only those that pertain to the Release selected.
Via "Legacy Reports", "Adhoc-Detail", "Requirements Report", we can generate the above report, except it doesn't support item #4. I submitted a support ticket (Case #00235269) but was told it was unlikely any future work will occur in the Legacy Report section. As such, I request the capability of creating this report in the Reports UI, but I need your votes to make it happen. Our organization is deeply committed to the capability of linking Requirements to Releases and linking Test Cases to the applicable Requirements. In fact, we've included the above report as part of our SDLC, making it a required deliverable before a service request can be completed. We purchased QAComplete because this functionality (Requirements Traceability) supported our objective in a new Test Management tool. Overall, QAComplete supports the functionality but, from our perspective, falls short in accurately reporting it. Thank you for your consideration.
BillW · 4 years ago · Occasional Contributor · New Idea · 3.3K Views · 9 likes · 3 Comments

Re: Update / modernize the UI: Reports and Dashboards
I know that ALMComplete allows us to make custom dashboards and reports. That being said, most of us do not have the security clearance to do that. This is an area where we need a MAJOR improvement. Get rid of Crystal Reports! Tools that have a learning curve or require SQL knowledge leave many people behind. We don't have that kind of time. A simple way of creating reports and dashboards is needed!
kwiegandt · 9 years ago · Occasional Contributor · Accepted for Discussion · 18K Views · 14 likes · 7 Comments

Call shared steps within a test and at execution, include the detail of the shared steps.
We need a method by which a step or series of steps can be written once and called from multiple tests. For example, I want to write a login procedure and then call it from other tests. Specifically, I want to have a test step something like "Call 'Login'". At execution, this step would be replaced by the steps in the test called "Login". I do not want to copy the steps (as we do now), because then I have to maintain those steps in many places.
kwiegandt · 3 years ago · Occasional Contributor · Accepted for Discussion · 17K Views · 14 likes · 10 Comments
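As a rough illustration of the behaviour this idea asks for, here is a minimal Python sketch of expanding a "Call '<test name>'" step into the steps of the referenced library test at execution time. The test/step dictionaries are illustrative assumptions, not QAComplete's actual data model.

```python
# Hypothetical sketch: expand "Call '<test name>'" steps at execution time.
# The test/step dictionaries below are illustrative, not QAComplete's real schema.

def expand_steps(test, library):
    """Recursively replace Call '<name>' steps with the steps of the named test."""
    expanded = []
    for step in test["steps"]:
        action = step["action"]
        if action.startswith("Call '") and action.endswith("'"):
            called = action[len("Call '"):-1]
            expanded.extend(expand_steps(library[called], library))
        else:
            expanded.append(step)
    return expanded

library = {
    "Login": {"steps": [
        {"action": "Open the login page", "expected": "Login form is shown"},
        {"action": "Enter valid credentials and submit", "expected": "Home page opens"},
    ]},
    "Create order": {"steps": [
        {"action": "Call 'Login'"},
        {"action": "Open the Orders screen", "expected": "Orders grid is shown"},
    ]},
}

for step in expand_steps(library["Create order"], library):
    print(step["action"])
```

The "Login" steps live in one place, so every calling test picks up edits automatically, which is exactly the maintenance benefit the request describes.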
Add Ability to Generate Requirements Documents in Word
Many organizations rely on requirements documents in Word to communicate requirements to the business for review and approval. In my organization, for example, we do not have (nor is it likely we will ever have) the business log into QAComplete (or any other requirements/testing tool) to review requirements. All such reviews are conducted via Word documents. Currently, my only option in QAComplete is to export project requirements to Excel and then copy/paste them into a Word template used for requirements. This works, but it is awkward at best. Adding the ability to generate a Word-based requirements document from a template would make the requirements part of the tool much more useful to business analysts.
kmryner · 10 years ago · Occasional Visitor · New Idea · 4.7K Views · 13 likes · 0 Comments

Allow adding Test Sets to Test Sets
This would allow creation of a super Test Set, made up of other test sets, without having to add each test to it individually, and would prevent having to update multiple test sets when one test is added.
dgossner · 4 years ago · New Contributor · Accepted for Discussion · 8.7K Views · 10 likes · 2 Comments

Ability to copy a TestSet from one project to another
At present we can copy a test case from one project to another, but there is no way of copying a test set. Organizations have multiple modules which might need to be run against the same standards- and framework-related testing. Each time, for 50-plus modules, I have to manually create the same test sets I already created for the others. There should be an easier way to copy those test sets and their test cases, as long as the folder structure is the same as in the previous project, or there should be a way to import them.
rkoshy · 4 years ago · New Contributor · New Idea · 3.2K Views · 5 likes · 3 Comments

Ability to include screenshots in the TestSet print preview PDF, to send it to developers or anyone else
At present, developers are not using QAComplete to look up defects; I have to send them the list of defects. I couldn't find any way to send screenshots along with the test sets that were run, for which we added screenshots under Actual Results. So we have to manually create another Word document containing all the defects and screenshots to send to the developers. I noticed that under "Run History" for each test set we can click the "Printer Friendly" option, which can be saved as a PDF for developers, but it is missing the screenshots that were added during the run. Please find a way to include the screenshots in that "Printer Friendly" report, or a way to extract a "Printer Friendly" version of only the defects, with screenshots. That would eliminate the creation of another Word document, which is time consuming.
rkoshy · 6 years ago · New Contributor · New Idea · 4K Views · 5 likes · 1 Comment

Screenshot / Screen capture tool for tests within the Test Library/Test Sets/Defects
We recently began using QAComplete, and are missing a screenshot tool that our former QA app had available right on the test/defect ribbon bar. We could select the icon to launch a feature allowing us to capture our screen, or a portion of our screen, and then automatically attach the picture to the defect, test, or individual test step, all without having to navigate away from the tool or save an image. This is the #1 request we're getting from our users as we roll it out. Do we know if there are any plans to include a screenshot tool in a future release? Thank you! Jamie
jamie_dswinc · 4 years ago · New Contributor · New Idea · 10K Views · 8 likes · 7 Comments

Add Photos or Files to Test Steps While Editing Steps
QAC makes entering Test Steps and Expected Results quick and easy. What is not quick is having to change tabs, upload a limited number of files, and specify the mapping for which file should be associated with a particular step. I would like to see this process streamlined and on the same tab as Test Steps. Perhaps after tabbing past Expected Results, you could have a load-file icon which would have the focus; the user could press the space bar, select the file, click OK, and continue tabbing to enter the next Step and Expected Result. Below is a mockup of what it could look like.
MCastle · 7 years ago · Established Member · Accepted for Discussion · 8.5K Views · 10 likes · 2 Comments

Last run status on the test set listing should be appropriate for the release selected.
Let's say that I have two releases, Alpha and Beta. I run my test sets for the Alpha release and all test sets pass. I have not executed any test sets for the Beta release yet. In the navigation pane on the left, I select the release Alpha and choose last run status as one of the fields to display on the test set listing. All test sets show as passed. So far so good. Now I switch the release in the navigation pane to Beta, and the last run status column still shows all test sets as passed. This is very confusing, as no test set has been run yet for the Beta release. The last run status should show "not run" or nothing, but not "passed" or "failed". Now I execute all test sets for the Beta release and they all fail. In the navigation pane on the left, I select the Alpha release. The last run status for all test sets shows as failed, which is not appropriate for the Alpha release. This can be very confusing. It would be VERY helpful if the software could look at the run history records and determine the last run status that is appropriate for the release selected. The same issue exists on the Test Library dashboard chart called "Tests by Last Run Status (Current Project)".
kwiegandt · 3 years ago · Occasional Contributor · New Idea · 3.1K Views · 9 likes · 2 Comments
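As a sketch of the release-aware logic being requested, the snippet below derives a last-run status per test set and per release from run-history records, falling back to "Not Run" when no history exists for that release. The field names (test_set, release, status, run_date) are assumptions, not QAComplete's actual schema.

```python
# Hypothetical sketch: derive a release-aware "last run status" from run history.
# Field names are assumptions, not QAComplete's actual schema.
from datetime import date

run_history = [
    {"test_set": "Checkout", "release": "Alpha", "status": "Passed", "run_date": date(2023, 3, 1)},
    {"test_set": "Checkout", "release": "Beta",  "status": "Failed", "run_date": date(2023, 4, 2)},
    {"test_set": "Search",   "release": "Alpha", "status": "Passed", "run_date": date(2023, 3, 1)},
]

def last_run_status(test_set, release, history):
    """Return the most recent status for this test set within the selected release."""
    runs = [r for r in history if r["test_set"] == test_set and r["release"] == release]
    if not runs:
        return "Not Run"
    return max(runs, key=lambda r: r["run_date"])["status"]

print(last_run_status("Checkout", "Alpha", run_history))  # Passed
print(last_run_status("Search", "Beta", run_history))     # Not Run
```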
Bulk link items
I'd like to suggest a potential improvement: introduce a bulk link items feature. For instance, right now if I want to link tests to a requirement, I need to do it one by one. It would be great to have an option under Fast Edit > Update multiple items > Link multiple items to a Requirement/Agile Task, etc.
msternak · 4 years ago · Occasional Contributor · New Idea · 2.9K Views · 5 likes · 1 Comment

Add ability to clone release
We have about 2500 tests at this time. As the process of connecting that many tests, even organized into test sets, is cumbersome (I'm creating another suggestion for that issue), it would be extremely helpful to be able to clone a release or iteration to another release or iteration, carrying across the linked items (we only use tests, not sprints or requirements). That way, even though I would have to manually link all the tests to a release once, for the next release I could clone and be done, and just add any new tests (or test sets).
johnc_1 · 3 years ago · New Contributor · New Idea · 6.6K Views · 9 likes · 2 Comments

Speed up linking of tests to configurations
There are only two ways to link a test to a configuration:
1. Select the test (or test set), open Link to Items, select the configuration folder, and check each sub-item (e.g. under Windows 8: IE, Chrome, and Firefox), repeating for each config folder.
2. Select the configuration, open Link to Items, specify the type and folder, and individually check every item (test) in the folder. Repeat for every configuration.
Even with just a few Windows environments, each with 3 or 4 different browsers, multiplied by our 2500 tests, this is a monumental task. While we could have done it as we created the tests, our focus was on creating the tests, not linking; that was a mistake. Now, however, we need to make these links, and we are looking at nearly 10,000 clicks/steps. If nothing else, in the test (or set) selection drop-down, add a "Select All" to check all tests in that folder. Then we would just need to repeat for each configuration and each folder, a much more manageable task.
johnc_1 · 8 years ago · New Contributor · Accepted for Discussion · 14K Views · 8 likes · 6 Comments

Test Set --- Last run status is confusing
The last run status on a Test Set can be confusing, especially for a newbie like me :)
- I started a test run, some tests failed, and I ended the run.
- Run History shows status = Failed (good and accurate).
- The test set list in the right nav shows Failed (good and accurate).
- I started the test run again, did not End Run, just Save & Close.
- Run History shows status = In Progress (good and accurate).
- The test set list in the right nav shows Failed (not current, not accurate, and confusing).
sheba0907 · 6 years ago · Occasional Contributor · New Idea · 6.2K Views · 5 likes · 2 Comments

Collapsible Folders/Directory Trees
Anytime a folder structure is displayed within QAC, I would like to be able to collapse folders. Additionally, I would like the initial view of the folder structure to default to a collapsed state. Navigating through a directory tree where all folders are expanded is time consuming and at times frustrating.
MCastle · 9 years ago · Established Member · Implemented · 13K Views · 7 likes · 4 Comments
Rename Pause Run and Start Run
Rename Pause Run and Start Run in the Test Runner to Pause Timer and Start Timer, as the test run can be edited despite the buttons' names.
mijex71989 · 3 years ago · Occasional Contributor · New Idea · 4.2K Views · 4 likes · 3 Comments

Inactivate Requirements
We have the option to inactivate Releases and Test Cases; why not provide that capability for Requirements?
BillW · 6 years ago · Occasional Contributor · New Idea · 4.2K Views · 3 likes · 0 Comments

Reporting
I would like a report similar to the 'Test Coverage Run by Test Set' report; however, I would like it to display only the last result. If I have a release with multiple builds, this report currently consolidates all results. For example, if the release has 2 builds:
- The first build might result in 10 passes and 10 fails.
- The second build might result in 10 passes (we may or may not re-run the already passed tests).
In this scenario the report for the release displays a total of 20 passes and 10 fails. I would like it to show just the last recorded status for each test, irrespective of the build it was run in.
rachel_hughes · 9 years ago · Contributor · New Idea · 2.1K Views · 5 likes · 0 Comments

Give the option to use 24-hour time format in QAComplete
We are not used to the am/pm notation, and I keep getting confused about whether my scheduled tests will run in the morning or the afternoon. So I would like an option to use the 24-hour notation everywhere.
vvg · 4 years ago · Occasional Contributor · New Idea · 2.9K Views · 4 likes · 1 Comment

Allow users to specify whether tests should only run on a specific host or the next available host.
I would like to be able to select an option in QAComplete for it to assign test runs to the next available host from my pool of test hosts, rather than limiting test runs to one specific host; that is not an efficient use of resources.
chrisb · 8 years ago · Regular Contributor · Accepted for Discussion · 6K Views · 6 likes · 5 Comments

Don't make the 'Time Out' value mandatory when configuring an automated test run.
Don't make the 'Time Out' value mandatory in the 'Add Automation' window. I have tests that vary widely in the time they take to run; it is not possible to pick a magic value that works for all of them. Also, I have wait and auto-timeout settings configured in TestComplete which are more than adequate for quitting a test if it doesn't respond after a certain amount of time. This setting at best seems pointless; at worst it seems to duplicate functionality that is already in TestComplete.
chrisb · 3 years ago · Regular Contributor · New Idea · 4K Views · 6 likes · 3 Comments

Allow horizontal dynamic resizing of the Step area in the Test Runner right panel
Many of our tests have more than six lines per step. We can't see these in the Step frame of the right panel of the Test Runner without scrolling, which is annoying. It's more important to us to see everything in the Step frame than to see what's in the Expected and Actual Results panels. It's not practical for us to hover over the step in the left nav, because sometimes we have to copy/paste text from the Step box. Also, the lines in the test step were entered single-spaced in the Test Editor, but the Test Runner shows them double-spaced. I suggest that you change this panel to let the user pull the divider line in the center of the frame to dynamically resize the top of the frame that contains the Step box. This would let the user see more of the Step box and less of the Expected and Actual Results boxes, or the reverse if they choose.
kimh3r3 · 4 years ago · Senior Member · New Idea · 5.2K Views · 4 likes · 3 Comments

Ability to Link Tests to Requirements Automatically when Importing Tests into QAComplete
Linking requirements to tests is a time-consuming task; there is no quick and easy way to do it. It would be great if you could include the path/name of the requirement(s) a test should be linked to in the CSV file that is used to upload tests into QAComplete. The prerequisite would be that the requirements are loaded prior to loading the tests.
lrayTalen · 4 years ago · Contributor · New Idea · 3.2K Views · 4 likes · 1 Comment
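As an illustration only, the upload CSV could carry a column naming the requirement(s) each test should be linked to. The "Requirement Path" column and the ';' separator below are hypothetical, not something the current import supports.

```python
# Hypothetical sketch: an import CSV carrying the requirement(s) each test should
# be linked to. The "Requirement Path" column is an assumed extension, not a
# column the current QAComplete import supports.
import csv, io

csv_text = """Title,Folder,Requirement Path
Verify login,Authentication,Project A/Security/REQ-101
Verify password reset,Authentication,Project A/Security/REQ-102;Project A/Email/REQ-210
"""

for row in csv.DictReader(io.StringIO(csv_text)):
    paths = [p.strip() for p in row["Requirement Path"].split(";") if p.strip()]
    print(f'{row["Title"]!r} would be linked to: {paths}')
```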
Allow linking to an item in another project
It would help us cut down on duplication if we could link to an item in another project.
kwiegandt · 10 years ago · Occasional Contributor · Accepted for Discussion · 3.4K Views · 5 likes · 1 Comment

Reduce the size of the Smartbear banner and Customer Company Logo box
Can you please reduce the SmartBear banner at the top of the page? This would enable us to see more of what we purchased the product for. Can you also reduce the size of the Customer Company logo box (bottom left-hand corner)? It's nice to have our logo on screen, but being able to configure the size of the box, or turn it off, would make the product more usable. Thanks
GrumpyGit · 8 years ago · New Contributor · Accepted for Discussion · 7.2K Views · 5 likes · 4 Comments

Remove Unnecessary "Required" Field
QAC has predetermined required fields, denoted (Required). These cannot be made optional. While I understand the intention behind this, I do not want Work Phone to be a (Required) field when creating a user.
MCastle · 9 years ago · Established Member · New Idea · 3.9K Views · 5 likes · 1 Comment

Ability to assign tokenized values from test set
With our project, we have a very dynamic system. That being said, in order to simulate proper users with realistic fields, we have to build around 50+ users for manual and automated testing. QAComplete does not offer a way for us to reuse a single test case, let's say for the example of logging into a system with a username and password. The username and password can be tokenized; however, they are tokenized once. We want a method by which we can define the parameters of a test case from a test set. The test set would simulate User1's experience but would keep reusing the same test case for logging in. Perhaps TestComplete offers something like this? We essentially want to treat specific test cases like functions with parameters.
Novari-QA · 4 years ago · Frequent Contributor · New Idea · 5.7K Views · 3 likes · 5 Comments
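A minimal sketch of "test cases as functions with parameters": one shared Login test case whose username and password come from the test set, one row per simulated user. The function and data below are purely illustrative.

```python
# Hypothetical sketch: one "Login" test case reused as a parameterized function,
# with the parameter values supplied per test-set row (one row per simulated user).

def login_test(username, password):
    """Single shared test case; would normally drive the application under test."""
    print(f"Logging in as {username} with password {'*' * len(password)}")
    return True  # placeholder for a real pass/fail outcome

test_set = [
    {"username": "user1", "password": "secret1"},
    {"username": "user2", "password": "secret2"},
    # ... ~50 users in the scenario described above
]

results = [login_test(**params) for params in test_set]
print(f"{sum(results)}/{len(results)} parameterized runs passed")
```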
Alert on all Linked Items when there is change in Requirement
"Color Code alert on all Linked Items when there is a change in existing Requirements" has been created and assigned with a Standard priority. We will respond to you as soon as possible. [Case - 00140954] Most of the time Business Analysts modify requirements or add additional information to them. It would be really nice to have a feature that would highlight the change and the affected items, for the benefit of users and the project. Recommendation: highlight all the linked items in YELLOW so that all related users, once logged into QAComplete, can see and acknowledge the change.
vpaka · 9 years ago · New Contributor · Accepted for Discussion · 3.8K Views · 4 likes · 1 Comment

Would be nice to re-label some buttons for clarification
There are so many different screens, buttons, and choices when entering a Test Set Suite, a Test Case, or a Test Step that it can be confusing at times where you are. To add a new step, I select the test and go to Steps. The button only says Edit Steps, and only after you select it do you get an option to add a step. It would be nice if the button said Add/Edit Steps, like it does where it says Add/Edit Tests. Also, when in a screen, what the user is doing is displayed at the very top left of the screen; it would be nice if it were a bit more visible.
sheba0907 · 3 years ago · Occasional Contributor · New Idea · 3.2K Views · 3 likes · 2 Comments

Linking an Existing defect during Test Execution from the Test Set itself
It would be a really good feature to have the ability to link an existing active defect that is causing a test to fail during test execution from the Test Set. Currently, all we can see is the "Auto Create Defect" window. This would be really helpful for failing a test case and continuing to execute the rest of the test cases from the test set by simply adding an existing defect to it.
vpaka · 7 years ago · New Contributor · Accepted for Discussion · 4.1K Views · 4 likes · 2 Comments

Add Releases tab in the Test Library
In the Test Library, I can display all tests linked to a release, or not linked to a release, by using a filter. This is not intuitive to many of our testers. To be consistent with the other entities in the system, it would be helpful to have a Releases tab in the navigation pane on the left, before the Folders tab, as is the case for Test Sets.
kwiegandt · 10 years ago · Occasional Contributor · Accepted for Discussion · 3.5K Views · 4 likes · 1 Comment

More active paid community support for QAComplete
While working with TestComplete, when I come across an issue I have no problem getting answers from the community, from people like tristaanogre. However, with QAComplete I sometimes have to wait weeks to get any response. I would like to see some dedicated, paid QAComplete members helping out in the forums on a more regular basis. We are all in software development, so we all have deadlines. I am new to SmartBear products, and most of my issues are blocking me from continuing, thus pushing back deadlines. I would like more support in the forums; if that means paying someone, I'd find that extremely beneficial.
Novari-QA · 7 years ago · Frequent Contributor · New Idea · 3.3K Views · 2 likes · 2 Comments

Visually Identify Token Values in Test Steps During Test Run
Currently, when using tokens during a test run, a person cannot tell which are the token values and which aren't. It would be useful to have token values visually identified in some way in the test step during a test run. They could be bolded, highlighted, underlined, shown in a different color font, or something similar.
tlabl · 10 years ago · Contributor · New Idea · 1.9K Views · 3 likes · 0 Comments

Test Metrics Reporting
Recommending that the reporting feature within QAComplete be updated for more flexibility and user friendliness. One thing our QA team does for reporting is generate test metrics reports which list the Test Set Name or Test Name, the # of tests (not runs), # passed, # failed, and # blocked. While we understand the logic behind basing the reported values on the # of runs, our customers and project teams do not want to know that we executed a test set 3 times and it failed twice. They want to know the total # of tests (a baseline #), # passed, # failed, and # blocked, not figures based on execution. They do not want to see that we ran Test A, iteration #30, 6 times. For example, if Test Set A has 5 tests but, with tokenization, 300 test iterations, the report needs to display 5 tests for this Test Set and note the # failed, # passed, and # blocked. Right now the canned reports in QAComplete display this based on the tokenization iterations, which is too detailed. We have spent hours working with these reports, but with reporting being built around Crystal Reports, this limits our ability to build the report we need.
TBValdez · 7 months ago · Member · Accepted for Discussion · 4.2K Views · 3 likes · 4 Comments
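As a sketch of the requested aggregation, the snippet below collapses repeat runs and tokenization iterations to one row per test, counted by its most recent outcome. The record fields are assumptions, not the actual report data model.

```python
# Hypothetical sketch: report test counts (not run counts), taking only the most
# recent outcome per test so tokenized iterations and re-runs are not double-counted.
from collections import Counter

runs = [  # illustrative run records, newest last
    {"test": "Test A", "outcome": "Failed"},
    {"test": "Test A", "outcome": "Passed"},   # re-run of Test A
    {"test": "Test B", "outcome": "Blocked"},
    {"test": "Test C", "outcome": "Passed"},
]

latest = {}
for run in runs:                 # later records overwrite earlier ones
    latest[run["test"]] = run["outcome"]

counts = Counter(latest.values())
print(f"Total tests: {len(latest)}")
print(f"Passed: {counts['Passed']}, Failed: {counts['Failed']}, Blocked: {counts['Blocked']}")
```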
Ability to Delete User Completely
QAC does not allow any user to be fully deleted from the system. Currently, deleting a user only results in the "Active User?" field not having a green check mark. The reason behind this is maintaining a historical record if someone created, ran, or otherwise modified anything within the system. But what if you have an account that never executed tests, created tests, etc., i.e. is not associated with the historical record of any object? In my case, I created a test account to try out the security configuration after making changes, but prior to exposing end users to them. If I no longer need this account, I would like to remove it entirely.
MCastle · 10 years ago · Established Member · New Idea · 2.6K Views · 3 likes · 0 Comments

QAComplete - add the Priority of Test Cases to the Test Runner window
I would like the Test Case Priority parameter to be displayed in the Test Runner window. It is not displayed now, and it cannot be enabled through the application administrator settings. We need to see it when working with test cases while running a test set, because we have test cases of different priorities in one test set and, ideally, when the run starts we pick the test cases one by one by their priority. We don't want to split test cases by priority into separate test sets. Please join this proposal for a change. Thanks, Jana
jana_horackova · 3 years ago · New Contributor · Selected for Development · 9.5K Views · 3 likes · 4 Comments

Field Type: Tag
Many other systems offer a "tags" field that allows end users to quickly categorize our items by related ideas. The closest functionality I can find in QAComplete is a "Choice List (multiselect)" field type, but that does not allow end users to quickly add new items to the list of available options; it requires an administrator to do so. I'd like to create (or have built in) a "Tags" field for my test cases that allows rapid addition of new options, guides the user to quickly find pre-existing values instead of creating near duplicates, and allows API clients to search for test cases that have one or more of the tags assigned.
sdwire · 4 years ago · New Member · New Idea · 2.2K Views · 2 likes · 1 Comment
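For illustration, here is how an API client might filter test cases by such a "Tags" field if it existed. The endpoint path, response shape, and the "Tags" property are all assumptions, not the current QAComplete API; today the closest stand-in would be a multiselect choice-list custom field.

```python
# Hypothetical sketch: filtering test cases by a proposed "Tags" field on the
# client side. The endpoint path, response shape, and "Tags" property are
# assumptions, not documented QAComplete API.
import requests

BASE = "https://yourserver/rest-api/service"          # assumed base URL
PROJECT_ID = 123                                      # assumed project id

def tests_with_any_tag(session, wanted):
    """Yield titles of test cases carrying at least one of the wanted tags."""
    wanted = {t.lower() for t in wanted}
    resp = session.get(f"{BASE}/api/v2/projects/{PROJECT_ID}/tests")   # assumed path
    resp.raise_for_status()
    for test in resp.json():                          # assumed: a list of test objects
        tags = {t.lower() for t in test.get("Tags", [])}               # proposed field
        if tags & wanted:
            yield test["Title"]

# Usage (credentials and server are placeholders):
# with requests.Session() as s:
#     s.auth = ("user@example.com", "password")
#     print(list(tests_with_any_tag(s, ["smoke", "regression"])))
```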
How to make an 11.0 report public..
I have created an 11.0 report and would like to make it public to the project team.
oloudos · 9 years ago · Regular Visitor · 5.9K Views · 2 likes · 1 Comment

Add nbr_files attribute to each item in the test_run_results array for better performance
For future updates, it might be helpful for users if you added an nbr_files attribute to each item in the test_run_results array that results from calling "/api/v2/projects/{ProjectId}/testruns/{Id}/items/". Then it would be possible to call any version of "get list of files" (SOAP or REST) only on items that actually contain files, which would be a good performance enhancement. See https://community.smartbear.com/t5/QAComplete-and-ALMComplete/Use-API-to-see-if-TestRunResult-has-file-s-attached/m-p/186349#M2333 for a discussion of this and why it would be a valuable addition for developers. Nastya_Khovrina
krian · 6 years ago · New Contributor · New Idea · 798 Views · 1 like · 0 Comments
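A sketch of the performance win being described, assuming the proposed nbr_files attribute existed on each item returned by the endpoint quoted above. The attribute, the per-item files path, and the field names are placeholders, not documented API.

```python
# Hypothetical sketch: skip per-item "get list of files" calls when the proposed
# nbr_files attribute says there is nothing attached. nbr_files does not exist
# today, and the files endpoint below is a placeholder, not a documented path.
import requests

BASE = "https://yourserver/rest-api/service"   # assumed base URL
PROJECT_ID, RUN_ID = 123, 456                  # assumed ids

def attached_files(session):
    items = session.get(
        f"{BASE}/api/v2/projects/{PROJECT_ID}/testruns/{RUN_ID}/items").json()
    for item in items:
        if item.get("nbr_files", 0) == 0:      # proposed attribute: nothing to fetch
            continue
        files = session.get(                   # placeholder files endpoint
            f"{BASE}/api/v2/projects/{PROJECT_ID}/testruns/{RUN_ID}"
            f"/items/{item['id']}/files").json()
        yield item["id"], files

# Usage (credentials and server are placeholders):
# with requests.Session() as s:
#     s.auth = ("user@example.com", "password")
#     for item_id, files in attached_files(s):
#         print(item_id, len(files))
```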
Dashboard improvements - refresh widget(s) button, display filter title per widget
Some feature requests for the dashboard:
1) A "Refresh" button for each individual dashboard widget. Sometimes I'm not sure whether it's up to date, and refreshing the whole page is pretty silly. So maybe a "refresh all widgets" button, or the ability to refresh an individual widget, would be great.
2) Each dashboard widget should indicate, via some kind of header, how it is being filtered. In the attached image I can't tell that the widget is filtered by the single release I am targeting (unless there is another widget I should be using?). Thanks
mmaximojo · 8 years ago · Frequent Contributor · Accepted for Discussion · 11K Views · 2 likes · 5 Comments

The ability to apply Token files to a Test before running a Test set.
I have a product that requires testing for multiple clients and will use tokenized tests to pass different data (usernames, field names, etc.). Rather than creating multiple tests that contain the same steps but different token files (one token file per client), it would be more efficient for me to have a single test set and select which token file to use. For example, consider a test that requires all fields on a form to be completed. Client A could have fields 1, 2 and 3, whereas Client B could have fields A, B and C. Each client would require different validation and/or have different character limits. Currently, to use the same test I would need to edit the token file and re-upload it, or make a copy of the test (which would make it harder to keep everything updated across the different copies if something changes). I was thinking this process could work in a similar way to attaching configurations/releases to a Test Set: the token files would be attached to the test set and chosen upon running said test set. If anyone has any methods or workarounds for me, it would be great to hear them. Thanks.
ReeceMonkey · 5 years ago · New Contributor · New Idea · 2.3K Views · 2 likes · 1 Comment

Test Runner: Select Test Host for all Automated Tests
The issue I'm facing is in the Test Runner, regarding selecting the Test Host on which an Automated Test executes. I don't set a Default Test Host for my tests because we use a set of different Test Hosts, usually depending on which environment we want to run the tests on, or just whichever is available. So when launching a Test or Test Set, we simply select the Test Host for each one on the Test Runner screen using the < Select Host > drop-down. This works fine for running single tests, or test sets with only a couple of automated tests. But we have some Test Sets with a larger number of tests, and we'd like to have even larger test sets. It's very tedious to have to select the Test Host for each test in a test set when running the tests. What I'd like is the ability to select the Test Host for all of the Automated Tests in a Test Set at the same time.
ghuff2 · 9 years ago · Contributor · Accepted for Discussion · 7.3K Views · 2 likes · 3 Comments

Link a specific version of a requirement or test case to a Release
When associating a requirement with a Release (Release A), keep track of the version of the requirement and of the test cases that relate to it. If in a later release (Release B) a requirement or a test case changes, Release B will be associated with the new version of the Requirement and/or Test Case, but Release A will not be affected by that. Optionally, offer through a dialog the choice of whether any of the linked (and still active) releases should be updated to link to the new Requirement/Test Case version.
ekiza23 · 9 years ago · Contributor · Accepted for Discussion · 3.5K Views · 2 likes · 1 Comment
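One way the data could be modeled, sketched below: the release-to-requirement link stores the requirement version it was made against, so later edits create a new version without disturbing earlier releases. The structures are illustrative only, not QAComplete's schema.

```python
# Hypothetical sketch: pin a release's link to the requirement version that was
# current when the link was made, so later edits create a new version without
# disturbing earlier releases. Purely an illustrative data model.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: int
    versions: list = field(default_factory=list)   # version texts, index 0 = v1

    def latest_version(self):
        return len(self.versions)

@dataclass
class ReleaseLink:
    release: str
    req_id: int
    req_version: int    # the version this release is pinned to

req = Requirement(req_id=42, versions=["Initial wording"])
link_a = ReleaseLink("Release A", req.req_id, req.latest_version())  # pinned to v1

req.versions.append("Revised wording for the next release")          # creates v2
link_b = ReleaseLink("Release B", req.req_id, req.latest_version())  # pinned to v2

print(link_a, "->", req.versions[link_a.req_version - 1])
print(link_b, "->", req.versions[link_b.req_version - 1])
```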
Simplify method for adding multiple automated tests to the test library.
Building a library of automated tests in QAComplete is a time-consuming process. The user must do a lot of manual data entry to add each test, and a single typo in a file path can stop the test from executing. It would be much simpler if the user could open QAComplete in a browser on a machine that contains their tests, then browse to their tests directory and drag and drop them in.
chrisb · 9 years ago · Regular Contributor · Accepted for Discussion · 4.9K Views · 2 likes · 3 Comments