Who does your test automation - manual testers or programmers?
I am asking if there are a lot of folks out there for whom this is considered an engineering function vs a manual tester's function. Thanks.
It's just me using TC (there is 1 dedicated tester on our team, yours truly).
I'm a manual tester, not a programmer.
I stumble all over the place in TC and am learning the hard way, but I'm trying.
I've actually recently been looking at changing jobs and had 3 prospective offers for similar automated QA roles.
Most of them wanted a strong programming background AND extensive manual regression testing experience.
None of the companies I interviewed with considered this a role for a programmer.
The reasoning is rather simple, in my opinion: QA personalities and programmer personalities are drastically different.
The programmer: Assumes everything is working until proven otherwise.
The tester: Assumes everything is broken until proven otherwise.
Smart employers will embrace those differences and hire appropriately.
''-Praise the sun and Give Kudos.''
I would say both the typical manual tester and the typical programmer would struggle to build a good automation suite. The manual tester lacks programming skills, and the programmer lacks the tester's discipline and perception. But either could become an automation tester by acquiring those skills; how difficult that is depends on the person.
This is an interesting thread. I'm a developer turned automated test designer/creator myself. As others have expressed, there have been struggles along the way, and I'm not sure I'm out of the woods just yet.
My homemade approach has been to start with no records in the database except for a single admin account. The tests then 1) add record(s) via the UI in order of dependencies, 2) run the transactional tests, and 3) remove record(s) according to dependencies. The end-to-end run always leaves the database the way it started. If the delete tests aren't run, the add tests don't need to be run either, and we can "rattle around" in the transaction tests throughout the week. Our tests currently run within a 24-hour window, and we run the end-to-end every day. At times I have questioned this approach.
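The add-in-dependency-order / delete-in-reverse pattern described above can be sketched in a few lines. This is a minimal illustration, not the poster's actual suite: the record names, the fake in-memory "database", and the helper functions are all hypothetical stand-ins for real UI-driven steps.

```python
# Hypothetical sketch of the "leave the database the way it started" pattern:
# add records parents-first, run the transactional tests, then delete
# children-first so the database ends in its initial state.

db = set()  # stands in for the real database's record set

DEPENDENCY_ORDER = ["company", "user", "order"]  # parents before children

def add_records():
    for record in DEPENDENCY_ORDER:          # 1) add via the UI, in order
        db.add(record)

def run_transactional_tests():
    # 2) transactional tests assume the added records exist
    return all(record in db for record in DEPENDENCY_ORDER)

def delete_records():
    for record in reversed(DEPENDENCY_ORDER):  # 3) delete children first
        db.discard(record)

def end_to_end():
    add_records()
    passed = run_transactional_tests()
    delete_records()                           # database is back to empty
    return passed
```

Skipping `delete_records` (and therefore `add_records` on the next run) is what lets the transactional tests "rattle around" against the same data all week.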
Our biggest struggle right now is cross-browser compatibility. Some weeks/months tests run fine in Chrome and not Firefox. Then either SmartBear or Chrome/Firefox makes a change, and tests work in Firefox and not Chrome. There have only ever been brief periods where both work reliably (less than 5% of the time). IE has never worked. The devs say it's the testing software; SmartBear says it's our app. Our technology stack is quite large, but that shouldn't matter given it's all HTML in the end. I use "WaitAliasChild" in lieu of random "Delays", always use "alias.browser...wait" when loading a page, and call Sys.Refresh when a grid reloads due to filtering. Individual tests are small and reusable where possible. And I ALWAYS map objects through the Object Browser.
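The wait-over-delay principle behind WaitAliasChild (poll for a condition with a timeout, instead of sleeping a fixed interval) is worth spelling out, since it's what makes tests survive timing differences between browsers. A minimal, language-agnostic sketch in Python; this is an illustration of the idea, not TestComplete's API:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.25):
    """Poll condition() until it returns a truthy value or timeout elapses.

    Returns the truthy result as soon as it appears, so a fast page costs
    almost nothing, unlike a fixed Delay sized for the slowest case.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

A fixed delay is always wrong twice: too long when the page is fast, too short when it's slow. Polling pays only for the time actually needed, up to an explicit ceiling.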
As a tool, it's a love / hate relationship with TC and TE. Some days I give praise - others I curse them and vow to drag their name through the mud. I've almost lost my job because of them a time or two. Other times I want to give up and go back to development -- I don't do both (well) at the same time. YMMV (Your Mileage May Vary).
Both in my case.
I developed the framework (all script extensions and a single driver script) which all our test projects run from.
I then build functions to manipulate the particular application or site under test (with as much re-use from project to project as is feasible). These then form what I call "test lego" which the manual testers then use to build tests from (everything is keyword + data driven from Excel sheets). The bulk of the stuff they build is existing manual regression tests which they map onto automated tests using the "test lego" I provide them.
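The "test lego" idea, a keyword column plus data columns driving pre-built functions, can be sketched briefly. Everything here is hypothetical: the keyword names, the functions, and the in-memory rows standing in for an Excel sheet.

```python
# Hypothetical keyword-driven runner: each row pairs a keyword (the name of
# a "test lego" function the framework author provides) with its data, as if
# read from a row of an Excel sheet.

def login(user, password):
    return f"logged in as {user}"

def open_page(name):
    return f"opened {name}"

# The keyword table maps what the manual tester writes to real functions.
KEYWORDS = {"Login": login, "OpenPage": open_page}

def run_rows(rows):
    results = []
    for keyword, args in rows:
        func = KEYWORDS[keyword]      # look up the lego brick by name
        results.append(func(*args))   # execute it with the row's data
    return results

rows = [("Login", ("alice", "secret")), ("OpenPage", ("Dashboard",))]
```

The payoff of this split is exactly what the post describes: manual testers compose rows without writing code, while the framework author maintains only the keyword implementations.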
I tend to only get involved after this stage for maintenance, questions and enhancements.
I did study programming at university ... but that was a long time ago! When I was at uni, C was just C. Delphi was still Pascal. And I did a lot of COBOL. Object orientation was a big new concept but not widely used at that stage.
I ended up in automation while working in a manual test role. They wanted someone to try out some automation; I was the best immediately available tester to do it, and it just spiralled from there. I've been automating for about 8 or 9 years now, and I also do load and performance work.
In my company, I'm technically part of the dev department rather than the test one, but I work closely with both. I'm the only one actually writing automated test code with TestComplete. We also use SpecFlow/Selenium, SoapUI and some CodedUI stuff as well. Oh, and I do some Python based stuff using cloud computing (AWS) for real browser load tests.
Currently I am the only person doing automated testing for our company. I am not a developer. Our company just started using TestComplete about 2 months ago. Our IT Department is over 200 people, but our testing team is about 10 people. Up until now we were an all manual shop. I am the only one developing our automated tests. I have no development experience, nor does anybody on our team.
We chose SmartBear because the plan was that the non-technical QA group would be creating the tests, but the group wasn't technical enough to use the product. There were other factors at play as well. I came from the programmer world and am currently the only one creating tests. I would like to become more educated about testing in general in hopes it would improve our test automation, but I feel we may be too far down the road to change course now.
Good discussion all.