Forum Discussion
Hi,
If I am correct, Visual Studio Online does not support the "xml" and "mht" formats. So, I would say it's impossible to add the test results to the VSO dashboard.
However, I could be wrong. Maybe someone else will suggest an approach.
Hey Yuri,
Just to clarify this a bit more.
We're looking to have a quick visual of the TestComplete scripts that ran during the build & release process. The exact detail of the scripts (the mht file) can be found in the release log, which is nice - it just takes some navigating around to find the mht file, then downloading and viewing it.
What we really want is a widget, on the VSO dashboard, that would display something like:
# of tests run
# of tests (pass)
# of tests (fail)
Something our developers / managers can quickly view to see how their application looks on a day-to-day basis. Any additional level of detail would be great as well, like separating tests into groups so that they could see, for example, that the "log in" screen had 100% success while the "order" screen had 50% failure.
Right now, when using Visual Studio to kick off TestComplete scripts during build & release, we only see that "1 test ran" - the 1 test refers to the tc12test file within Visual Studio. This file contains all of the tests that will be run, but VSO only sees the tc12test file as a single test instead of everything it contains.
I'll post an update once we hear more from the product owner, who we are in contact with. Having this functionality would be incredibly helpful for application development and maintenance.
- Colin_McCrae (Community Hero), 8 years ago
It can be done. But not natively by TC.
You can use the TFS API. It works with both the on-premises and online versions.
https://www.visualstudio.com/en-us/docs/integrate/api/test/overview
But you'll need to build your own code to do it. I have a set of script extensions I use for exactly this. All my TC tests that are linked to TFS connect using these and update a specific test case with pass/fail/notes etc.
But I'll warn you in advance, the API is a bit confusing when it comes to test items, due to the way a test "case" can be a test "item" in multiple test "suites", which get moved into a "run" when performing the actual tests. There are still a few bugs in it (the RESTful API is relatively new) - one of which involves updating "results" from a "run". And how you structure it will depend how you use TFS. My script extensions may or may not work for you. They were written to hook into TFS based on how my company uses it, rather than being truly generic. (And I would also need company permission to upload them here. IP and all that ...)
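To give a feel for it, here's a rough Python sketch of that kind of call - this is not my actual script extensions, the account, project and token values are placeholders, and the endpoint shape follows the v1.0 REST API linked above, so details may differ on your setup:

```python
# Sketch: authenticate to the TFS/VSTS REST API with a Personal Access
# Token and list the test plans in a project.
import requests

ACCOUNT = "youraccount"               # placeholder VSO account name
PROJECT = "YourProject"               # placeholder team project
PAT = "your-personal-access-token"    # placeholder credential

BASE = f"https://{ACCOUNT}.visualstudio.com/DefaultCollection/{PROJECT}/_apis/test"

# PATs go over Basic auth with an empty username.
resp = requests.get(f"{BASE}/plans",
                    params={"api-version": "1.0"},
                    auth=("", PAT))
resp.raise_for_status()

for plan in resp.json()["value"]:
    print(plan["id"], plan["name"])
```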
- Colin_McCrae (Community Hero), 8 years ago
I can add a little more to this.
Obviously, TFS is not static. As your project evolves, things advance in TFS.
But at the same time, you don't want to be constantly updating your tests with hundreds of new identifiers every time you do a run.
In our use case, the regression test suite is copied forward each time a new sprint is started. A sprint is a "plan" in TFS land.
So our structure is:
Project > Plan > Suite > Test
My config file contains the following configuration options and is read once at the start of the run (see the sketch after this list):
- TFS Active - Boolean flag to say whether the run is using TFS. We don't always want to.
- Authorisation - Authorisation token allowing the connection via the API. TFS has a couple of options for how you authenticate.
- Project - The ID (or name) of the project to use.
- Plan - The ID (or name) of the plan to use.
- API version - The currently installed version of the API. This gets bolted to the end of requests to the API. Not required, but TFS states it's best practice to do it. So I do.
These bits do require updating, but infrequently. And they are single entries in config. So it's manageable.
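For illustration only, that one-time read might look something like this in Python - the file name, section and key names here are my invention; only the five options above are real:

```python
# Sketch of reading the five options once at the start of a run.
# Example tfs.ini contents (names are hypothetical):
#   [tfs]
#   active      = true
#   auth_token  = <personal access token>
#   project     = MyProject
#   plan        = Sprint 42
#   api_version = 1.0
import configparser

cfg = configparser.ConfigParser()
cfg.read("tfs.ini")

TFS_ACTIVE = cfg.getboolean("tfs", "active")   # skip all TFS calls if False
PAT = cfg.get("tfs", "auth_token")             # token for the API connection
PROJECT = cfg.get("tfs", "project")            # project ID (or name)
PLAN = cfg.get("tfs", "plan")                  # plan ID (or name)
API_VERSION = cfg.get("tfs", "api_version")    # bolted onto every request
```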
Then within my test "packs" (I use a data- and keyword-driven framework), there are references to the following (illustrated in the sketch after this list):
- Suite - The Suite NAME to use. The name is used rather than the ID because it doesn't change when the whole lot is copied forward in the next sprint (a Plan in TFS land - see above). Given when a new suite comes into use. It can change multiple times throughout one of my test "packs" if need be, but it's usually only set once at the start, as tests within a "pack" tend to be logically grouped in much the same way as they are within TFS.
- Test - The ID of the test within the Suite (the original test case ID - see below). Given along with a "start" flag and ended with an "end" flag, as multiple steps within a TC test "pack" tend to form a single TFS test; they are seldom single-step entities in my setup.
These entries should not require updating. The only time they do is if someone (annoyingly) changes the name of a test Suite.
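Purely as an illustration (the row layout here is invented; only the Suite-name reference and the start/end flags come from the description above), the relevant entries in one of my "packs" might look like:

```python
# Hypothetical pack rows. A suite is referenced by NAME; each TFS test
# is opened with a "start" flag and closed with an "end" flag, wrapping
# however many TC steps make up that one test.
pack = [
    ("SetSuite",  "Login screen regression"),
    ("StartTest", 101234),                     # original TFS test ID
    ("Step",      "Open login page"),
    ("Step",      "Submit valid credentials"),
    ("EndTest",   101234),
    ("StartTest", 101235),
    ("Step",      "Submit invalid credentials"),
    ("EndTest",   101235),
]
```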
When my framework starts a suite, it searches the current Plan for a suite of that name. It then finds all the test Items (as TFS refers to them) within the suite. Each test Item contains a reference to the original test ID. You need to scan through them to link each Item to its original Test, and hence to the entry in my test "pack". You can't use the Item ID because it changes every time you copy the Suite forward, and you would have to update every single individual test. Not practical. Obviously. Suites are referenced by name for similar reasons: the name doesn't change when you copy a suite forward, but the ID does.
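Roughly, the lookup goes like this - a sketch reusing BASE, PAT and API_VERSION from the earlier snippets; the endpoints follow the v1.0 REST API and may differ on your TFS version:

```python
import requests

def find_suite_id(plan_id, suite_name):
    # Resolve a Suite by NAME within the current Plan. The name survives
    # the copy-forward into a new sprint; the Suite ID does not.
    r = requests.get(f"{BASE}/plans/{plan_id}/suites",
                     params={"api-version": API_VERSION}, auth=("", PAT))
    r.raise_for_status()
    for suite in r.json()["value"]:
        if suite["name"] == suite_name:
            return suite["id"]
    raise LookupError(f"no suite named {suite_name!r} in plan {plan_id}")

def map_items_to_tests(plan_id, suite_id):
    # Each test Item (a "point" in the API) references the original test
    # case ID, which is the stable key the pack entries carry.
    r = requests.get(f"{BASE}/plans/{plan_id}/suites/{suite_id}/points",
                     params={"api-version": API_VERSION}, auth=("", PAT))
    r.raise_for_status()
    return {int(p["testCase"]["id"]): p["id"] for p in r.json()["value"]}
```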
Technically, you should then create a Run in TFS with test Items in it. You should then pass/fail the Items in the Run. When you complete the Run, the associated Suite should update ITS Items in line with the pass/fails in the Run. But that's where the RESTful API is (or was, last time I checked) broken. That link doesn't work, and Suites do not update in line with Runs as they should.
So, instead, I update each test Item in the Suite directly as it completes, as a stopgap. It works fine. It just makes grouping for reporting a little less intuitive, as you don't have everything gathered in a single run. It's basically the same as a manual tester right-clicking on a test within a Suite and setting its pass/fail status.
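The stopgap update itself is a single PATCH per Item, something like this - again a sketch against the v1.0 "update test points" call, and field names may vary by TFS version:

```python
import requests

def set_item_outcome(plan_id, suite_id, point_id, passed):
    # Directly set the pass/fail status of one test Item in a Suite,
    # equivalent to a manual tester right-clicking it and marking it.
    body = {"outcome": "Passed" if passed else "Failed"}
    r = requests.patch(
        f"{BASE}/plans/{plan_id}/suites/{suite_id}/points/{point_id}",
        params={"api-version": API_VERSION},
        json=body, auth=("", PAT))
    r.raise_for_status()
```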
Hope this lot helps! Understanding the data structures used, and how to use them in such a way that I don't have to update hundreds of identifiers every time I want to do a run, was probably the trickiest bit of the API to figure out.