This idea is raised as advised in Case #00185718.
The ask:
Significantly reduce search times on large datasets: either make the database representation more efficiently searchable, or give users the ability to limit the scope of a search.
The background:
In our Collaborator instance, searches submitted in the web UI typically take three to five minutes to complete. SmartBear suggested adding RAM to the Collaborator server. After our own investigation, we concluded that Collaborator's database search is very inefficient: it boils down to a case-insensitive regex match against every row in two different tables. Our MTDTVLSTRNG table has over one million rows and will only grow, and Collaborator's SQL queries against it take minutes to finish (collab.log records warnings with timing statistics for these queries). The problem is already severe and will only get worse as the table grows.
Note also that the database query accounts for only part of the total time it takes for search results to appear in the UI. We suspect that Collaborator searches the 'collaborator-content-cache' in a similar way, but we have not confirmed this.
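To illustrate why the database-side pattern described above scales poorly, here is a minimal, self-contained sketch. It uses Python with SQLite purely as a stand-in: the table and column names are invented, this is not Collaborator's actual schema or database engine, and a leading-wildcard LIKE is used to approximate the case-insensitive per-row match we inferred. It contrasts that full scan with the same lookup against a full-text index:

```python
import sqlite3
import time

# Illustration only: an invented stand-in table loosely modeled on
# MTDTVLSTRNG (the real schema is unknown to us), using SQLite purely
# to show the scaling behavior; Collaborator's backend may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metadata_values (id INTEGER PRIMARY KEY, value TEXT)")
conn.executemany(
    "INSERT INTO metadata_values (value) VALUES (?)",
    ((f"review artifact {i}",) for i in range(1_000_000)),
)

# Leading-wildcard, case-insensitive match (SQLite's LIKE is
# case-insensitive for ASCII): no ordinary index can serve this,
# so every one of the million rows must be examined.
start = time.perf_counter()
hits = conn.execute(
    "SELECT id FROM metadata_values WHERE value LIKE ?",
    ("%artifact 999999%",),
).fetchall()
print(f"full table scan: {time.perf_counter() - start:.3f}s, {len(hits)} hit(s)")

# The same lookup against a full-text index (FTS5 here) touches only
# the postings for the query terms instead of every row.
conn.execute("CREATE VIRTUAL TABLE metadata_fts USING fts5(value)")
conn.execute("INSERT INTO metadata_fts SELECT value FROM metadata_values")
start = time.perf_counter()
hits = conn.execute(
    "SELECT rowid FROM metadata_fts WHERE metadata_fts MATCH ?",
    ('"artifact 999999"',),
).fetchall()
print(f"full-text index: {time.perf_counter() - start:.3f}s, {len(hits)} hit(s)")
```

The per-row scan is what grows linearly with table size and what our users experience as multi-minute searches; either a database-side index of this kind or a user-facing scope limit would close the gap.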