When I perform a code review I mostly navigate from file to file without going back to the main view. It would be nice to show somewhere in the file view which file number I am on out of the total number of files. I personally would like it to the left or right of the filename, aligned to the edge. Just something as simple as "File 4 of 27" would be fine.
When I go to the "Display" screen, it defaults to "Page Width". I want it to default to "Full Page", and I have to change it every time I go into the document. Is there a way to change the default value? Another example: when you go to the details page of a review, the output defaults to "Hide Comments". I want to change it to "Show All Comments". It would be nice if there were an admin capability to change defaults based on my preferences, not SmartBear's.
When you generate a report in the WebUI, you can print it, and you can copy the link to include in your bookmarks, email etc.
It would be nice if I could save these reports within the WebUI itself. They would then be accessible from any location where you access the WebUI, and could possibly be available for other users to run.
On the basic side, users could have a ‘my reports’ section where saved reports could live. On the complex side, reports could be public/private (checkbox) and maybe restricted by group or public.
Reports are such a powerful tool for data extraction and analysis, yet managing them within the UI could be so much more efficient.
(I thought I had submitted this previously but I can’t find it in the forum…)
I would like to request that a freeform text field be added to the role-assignment portion of the review screen, allowing you to document the specific role each person has on the review. This is different from the moderator, reviewer, author, etc. roles; it describes the role the person plays representing a particular effort. In our case, we have specific work products that require program involvement from specific roles, such as a SW Lead, a Project Engineer, or a Program Manager. When I identify someone in a review as a reviewer, I also want to be able to identify them as the Program Manager, Project Engineer, etc. We don't want a separate template for every work product we review; we want to keep the template generic enough to reuse, but we want to be able to identify who each person is. We could create a custom field, but we want this information tied directly to each review participant, and a custom field would not have that tie.
I want to be able to show that all of the required reviewers for this particular work product were participants. Unless I tie them to the actual participants that have time recorded, I can't prove that they were involved. See below for an example.
Review Role Assignment    Program Role Assignment
Author                    System Engineer
Moderator                 Lead Systems Engineer
Reviewer                  Quality Assurance
Reviewer                  Program Manager
I need to be able to show that the relevant stakeholders participated in the review. A custom field does not tie a program role with the time spent in the review, but if the review role could have a place to identify the program role, it could be on the same line in the review summary screen.
While attempting to add the path to a script as the first parameter of a trigger, followed by all the needed arguments, I noticed that all input boxes are limited to 255 characters. This limit prevents me from creating complex triggers and offering robust solutions to practical issues for my users.
Please remove the 255 character limitation on all input boxes within the "Triggers" page.
Here is an HTML code fragment showing the limitation as seen with Internet Explorer Developer Tools.
<input name="triggerArgs1" class="PlainText x-form-text x-form-field" id="triggerArgs1" onchange="wizardConfirmNavigate = true;" type="text" size="60" maxlength="255" value="">
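Until the limit is lifted, a common workaround is to point the trigger at a short wrapper script and keep the long argument list inside it. Below is a minimal sketch, assuming the server can execute local scripts; every path and argument name in it is hypothetical:

```python
#!/usr/bin/env python3
# Hypothetical wrapper, e.g. /opt/collab/trigger_wrapper.py. The trigger's
# 255-character input box then only needs this short path plus a few
# ${...} substitution variables; the long argument list lives here instead.

# Long, baked-in arguments (all example values).
LONG_ARGS = [
    "--server", "https://collab.example.com",
    "--token-file", "/etc/collab/api-token",
    "--log-dir", "/var/log/collab-triggers",
]

def build_command(trigger_args):
    """Combine the baked-in arguments with whatever Collaborator passes in."""
    return ["python3", "/opt/collab/on_phase_change.py", *LONG_ARGS, *trigger_args]

# Collaborator would invoke this file directly; the wrapper would then run
# the real command, e.g. subprocess.run(build_command(sys.argv[1:])).
```

This keeps the trigger configuration short regardless of how complex the underlying command gets.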
I guess there is a way to pull it from the API, but that is messy. We have Participant Custom Fields that we use, and want to get the information filled in those fields (or not) per participant in the reviews.
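To illustrate the "messy" API route, here is a rough sketch of the client side: a helper that flattens participant custom fields out of a review record. The JSON key names used here ("participants", "user", "customFields", etc.) are assumptions about the response shape, not a documented contract – check your server's JSON API documentation.

```python
def participant_custom_fields(review):
    """Map each participant's login to a dict of their custom-field values.

    The key names below are illustrative assumptions about the review
    payload returned by the JSON API, not verified field names.
    """
    return {
        p["user"]: {f["name"]: f.get("value") for f in p.get("customFields", [])}
        for p in review.get("participants", [])
    }

# Example payload in the assumed shape:
review = {
    "participants": [
        {"user": "alice", "role": "REVIEWER",
         "customFields": [{"name": "Program Role", "value": "Quality Assurance"}]},
        {"user": "bob", "role": "AUTHOR", "customFields": []},
    ]
}
fields = participant_custom_fields(review)
# fields == {"alice": {"Program Role": "Quality Assurance"}, "bob": {}}
```

Even with a helper like this, it is still an extra round trip per review, which is why first-class support would be preferable.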
Is it possible to enable and support automated install and configuration of the Collaborator server? There are some features with potential (-c, -q, -varfile), but they are very basic (database and LDAP config only) and do not work when attempting an unattended install on a new server. It would be helpful if the server could be stood up without a license, or at least ship with an "admin only" (no login) mode, so that the server could be installed with LDAP/AD and database integration configured from a response file, and potentially configured further via the API / REST interface. In our case the servers do not have internet access, so this feature would need to accommodate that; in particular, I would expect requesting and installing the license to be a manual step conducted some time after the install, database, and LDAP/AD configuration has completed.
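For reference, the -q/-varfile options follow the install4j convention of a key=value response file. A sketch of the kind of file we would like to be sufficient for a full unattended setup (the key names below are illustrative guesses, not documented installer variables):

```
# response.varfile -- all key names are illustrative, not verified
sys.installationDir=/opt/collaborator
collab.database.type=mysql
collab.database.host=db.example.com
collab.ldap.url=ldap://ad.example.com:389
collab.admin.mode=no-login-admin
# invoked as: ./collaborator-server-linux-x64.sh -q -varfile response.varfile
```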
When I want to go from one page to the next in review material, I have to press the Next buttons: going from page 1 to page 2 means pressing the Next button once for the left-hand document and then again for the right-hand document. I would like the page switching to be synchronized for both documents, with the synchronization activated by a checkbox ("tick").
Currently, if there are multiple individuals in a role (such as reviewer, which is extremely common), the participant in that role shows up as "(multiple users)". This list needs to be expanded to show the actual users.
Currently, when a reviewer creates issues and then sends the review to rework, that participant shows as "Approved" in the Participants module, which is highly confusing to our team.
Obviously the reviewer did not approve the review, since s/he submitted issues and sent it to rework. The participant status should be "Waiting", or some new status such as "Reviewed"; "Approved" doesn't make sense.
I find it VERY HARD to find information on your site. For example, I have spent the last 15 minutes looking for the page that lists all the versions and what was fixed in each. I can't find that, but I ran into this: the link to the forum where release notifications are supposed to be posted – the last entry there was 1.5 years ago. I had the link, but had to copy/paste something else and lost it. Yet it was referenced in a number of places.
So I want to submit a case on this. Can't figure out how to do that. There was a chat thingy that was bugging me to talk to a live salesperson, and there was an option there to submit a case, but as soon as I clicked it, it went away.
Gosh, I hope you guys are not obfuscating customer service to make us go away.
Maybe you need to get someone not familiar with this site to walk it for different scenarios.
Any change that I make in the display settings when reviewing (e.g. disable “Text: Show Markers” for document review or select “Orientation: Side by Side” for code review) will be gone when I go back to the overview screen. Please allow me to save my settings.
This enhancement applies to Collaborator, version 9.4
In document reviews, the new pushpins are being rendered with opaque background. The white background is covering the text under the pin. This makes it difficult to read the document when multiple pins are present.
Example: => "wi??n"
You can click "Hide Pins", but this hides only the non-active pins, not the one covering the text you are trying to read at the time.
Please make the white background of the pushpin transparent or allow me to update the pushpin graphic file on the server to include transparent background.
When reviewing a document, I often have to leave the review and come back later. MS Word has a feature for this: when you open a document, it opens to page one but puts up a little flag saying "you were on this other page when you last opened this document – click here to snap to where you were". This would be a very handy feature in Collaborator. Right now I have to write down what page I left off on – a bit of a low-tech solution.
The coloring scheme of the defect statuses does not help in getting a clear overview of where you are in processing defects. This is due to the use of only bright colors, which draw attention to everything, even defects that have already been processed (green). The defect list looks like a Christmas tree.
I have seen a lot of complaints from users about the coloring scheme Collaborator uses for a long time now, yet it remains unchanged, so apparently there is a philosophy behind it. Please allow users to configure their own coloring scheme (and save it permanently).
Many of the users of the Collaborator application in my company want to use it to review changes made to Microsoft Word documents. However, the document Diff Viewer provided with Collaborator does not have the ability to "understand" Word documents in their native format – it first converts them to text files and then displays them as PDF files. Thus, the "context" of the changes is lost.
For instance, users would like to be able to ignore differences between versions of Word documents caused by mere changes to header and footer page numbers. This type of filtering is not provided by Collaborator's diff viewer, since it treats all differences between two Microsoft Word documents the same (as basic text), whether they come from differences in body text, header text, footer text, table-of-contents text, etc.
Also, is there a way for Collaborator to see or create something similar to the "Document Map" provided in Microsoft Word? This would simplify document navigation, because section numbers of the document could then be navigated to directly. Furthermore, if the section numbering, headers, footers, body text, and TOC of a document in the Open XML format (namely a .docx file) were parsed by Collaborator's Diff Viewer, functionality could be added to let the Diff Viewer optionally ignore changes caused by updates to the table of contents, header text, footer text, etc. Currently, this type of filtering of Word document changes is not possible in Collaborator's file Diff Viewer. It would be nice if add-ins could be provided to deliver this type of functionality.
I've been tasked with creating "catch-up" reviews while transitioning from an old process to a new one. I have been building reviews by adding one or two revisions at a time, checking the effect on the total file count, and then possibly removing one of those revisions. I end up getting panicked e-mails from authors wondering why their code is going into review when they are not ready for it yet – usually after I've already removed the offending revision, having realized it included files that weren't slated for review yet. I'd like to delay notification of authors until I indicate that the review is ready to go.
We have developed an integration from a document-management system that uses one review custom field to store foreign document id and version information. When the review is created this information is provided from the other system through the JSON API.
Obviously we do not want the GUI client users to change this field, but currently Collaborator does not allow restricting it. To make life easier in use cases like this, I propose that the Collaborator admin section offer extra boolean settings for review custom fields – for example, a "read-only in the GUI" flag.
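For context, here is a sketch of how such an integration builds its review-creation request. The command names ("SessionService.authenticate", "ReviewService.createReview") and field keys are placeholders in the style of the batched JSON API, not verified method names; the point is only that the custom field is set once, at creation time, and should then be locked against GUI edits:

```python
import json

def create_review_commands(title, doc_id, doc_version):
    """Build a batched JSON API request that creates a review and fills a
    custom field with the foreign document id/version. All command and key
    names here are illustrative placeholders, not verified API names."""
    return [
        {"command": "SessionService.authenticate",
         "args": {"login": "integration-bot", "ticket": "<api-ticket>"}},
        {"command": "ReviewService.createReview",
         "args": {"title": title,
                  "customFields": [{"name": "Foreign Document Id",
                                    "value": f"{doc_id} v{doc_version}"}]}},
    ]

payload = json.dumps(create_review_commands("Spec review", "DOC-42", "3"))
```

A read-only flag on the custom field would guarantee that the value written here stays authoritative.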
Additional notification schemes that can be set by the user are requested. Even "minimal" is too much for some people; they would like "minimal minimal". What has specifically been requested is to receive email only when the review changes state. Others may have different ideas about when they want to receive emails.
This idea is raised as advised in Case #00185718.
Make search times much lower for big datasets. Either make the DB representation more efficiently searchable, or give users the ability to limit the scope of the search.
In our Collaborator instance, searches submitted in the web UI take minutes (three to five minutes is typical) to complete. SmartBear suggested adding RAM to the Collaborator server. After our own investigation, we concluded that Collaborator's DB search is very inefficient -- it boils down to a case-insensitive regex match against every row in two different tables/relations. Our version of the table MTDTVLSTRNG has over one million rows and will only grow. Collaborator's SQL queries take minutes to finish. (Warnings are output in collab.log with stats.) So this problem is bad and will only get worse.
Also, our database inquiry accounts for only part of the total amount of time it takes search results to be presented in the UI. We suspect that Collaborator is also searching the 'collaborator-content-cache' in a similar way but haven't bothered to prove it.
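As a sketch of the "more efficiently searchable representation" idea, compare a full-text index to the current per-row regex scan. This uses SQLite's FTS5 purely as a stand-in; the table and column names are illustrative, not Collaborator's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for a metadata table like MTDTVLSTRNG, but with a full-text index.
conn.execute("CREATE VIRTUAL TABLE metadata USING fts5(review_id, value)")
conn.executemany("INSERT INTO metadata VALUES (?, ?)", [
    ("1001", "fix null pointer in parser"),
    ("1002", "update header comments"),
    ("1003", "parser refactor for speed"),
])

# An FTS MATCH is answered from the index, instead of running a
# case-insensitive regex over every one of a million-plus rows.
hits = [row[0] for row in conn.execute(
    "SELECT review_id FROM metadata WHERE metadata MATCH ? ORDER BY review_id",
    ("parser",))]
# hits == ["1001", "1003"]
```

Whether it is a full-text index, a plain column index, or a user-facing scope filter, anything that avoids scanning every row should bring those multi-minute searches down dramatically.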