Forum Discussion

googleid_114358
Contributor
11 years ago

TestComplete quality center

Hello.



We developed what I would call a Quality Center for TestComplete.



Basically, we can follow our test execution in real time within a web application by simply plugging a few script extensions into the TestComplete engine.

It generates graphs (using a custom XML output file) and contains the TestComplete MHT files.

Since we run our automated tests against multiple configurations, we can also search our executions for a specific configuration (for instance, the production version and the browser used for the execution). We also attach the production installation logs and database structures to each TestExecution instance.
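For illustration, the configuration search boils down to something like this (a minimal Python sketch; the record shape and field names are just assumptions for the example, not our actual schema):

```python
import json

# Hypothetical record shape for one test execution; field names are
# illustrative only.
executions = [
    {"test": "Login", "status": "passed",
     "config": {"production_version": "4.2.1", "browser": "Chrome"}},
    {"test": "Checkout", "status": "failed",
     "config": {"production_version": "4.2.1", "browser": "Firefox"}},
    {"test": "Login", "status": "passed",
     "config": {"production_version": "4.3.0", "browser": "Chrome"}},
]

def find_by_config(docs, production_version, browser):
    """Return the executions matching a given configuration."""
    return [d for d in docs
            if d["config"]["production_version"] == production_version
            and d["config"]["browser"] == browser]

matches = find_by_config(executions, "4.2.1", "Chrome")
print(json.dumps(matches, indent=2))
```

This is also the kind of query a store like Elasticsearch would handle for us once the logs are indexed as structured documents.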



We have been using this version for 2 years now and it works fine.

We are planning to expose it to the community.

However, it would definitely require some optimization to meet the TestComplete community's requirements (we are looking at Elasticsearch, Logstash, and MongoDB to store our log files).



So we are interested to know what you guys use to store and access your TestComplete execution logs.

Depending on your answers, we will decide whether to work on a community edition or build on an existing solution.



I have been reading about the HP Quality Center connector, Squash TA, and other quality-center solutions, but could not find anything that matches what we are using and are willing to share with you.



Hope you can help us.



Regards, 



Guillaume Théraud.

1 Reply

  • hlalumiere
    Regular Contributor
    We use a SQL database to store test info, machine configs, and logs. The TestLog table contains a field with the path to the log for each run. A "dashboard" application we developed here shows graphs for the performance of the test pool and its general status; test logs separated by status (failed, success, currently running, awaiting development, fixed, and test stack), shown in a groupable and sortable grid with other details like priority and statistical weight; test configuration and documentation; and test machine configuration and status (through the VMware VIM API). A client application that runs on startup on each test VM uses the database to fetch the next test to run by priority. Basically, everything is centered on the database.
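    To sketch the idea, here is roughly what the client VM's "fetch next test by priority" step looks like against such a table (a minimal Python/SQLite example; the table and column names are assumptions, not our actual schema):

```python
import sqlite3

# Minimal sketch of the database-centered design; uses an in-memory
# SQLite database so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE TestLog (
        id INTEGER PRIMARY KEY,
        test_name TEXT,
        status TEXT,          -- e.g. 'failed', 'success', 'awaiting development'
        priority INTEGER,     -- lower number = run first
        log_path TEXT         -- path to the TestComplete log for this run
    )
""")
conn.executemany(
    "INSERT INTO TestLog (test_name, status, priority, log_path) VALUES (?, ?, ?, ?)",
    [
        ("Checkout", "awaiting development", 2, r"\\share\logs\checkout.mht"),
        ("Login", "awaiting development", 1, r"\\share\logs\login.mht"),
        ("Search", "success", 3, r"\\share\logs\search.mht"),
    ],
)

# What a client VM does on startup: fetch the next pending test by priority.
row = conn.execute(
    "SELECT test_name FROM TestLog WHERE status = 'awaiting development' "
    "ORDER BY priority LIMIT 1"
).fetchone()
print(row[0])  # → Login
```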