Testing Mobile Apps on a Broad Device Line: Cost-Effective and Headache-Free

How many devices shall we support?

We pose this eternal question to the customer whenever we develop a mobile app. And we nearly always get the same answer: “Let’s do as many as possible!”

It goes without saying: high quality across a wide line of cellphones and tablets is a MUST for the customer. The key question is: how can we achieve it, and at what cost? The customer wants reliable software regardless of the device model.

Costs skyrocket if we aim to provide full functionality on most Android devices available on the market.


How to handle this?

A very efficient way to address this problem is to make automated testing, used partially or to the fullest extent, a pillar of quality assurance. The experience of instinctools EE Labs shows that this approach brings significant economic value while ensuring a consistently high level of quality. How is this achieved?

We apply two types of automated tests in our projects:

  • unit tests that verify individual functional modules
  • UI automation tests for end-to-end (“through”) scenarios
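For the unit-test layer, here is a minimal sketch in plain Java. The `PriceCalculator` module and its numbers are hypothetical; in a real project this would be a JUnit test class executed by the build rather than a `main()` method:

```java
// Sketch of a unit test for a single functional module.
// PriceCalculator is a hypothetical module: it computes an order
// total with a percentage discount applied.
public class PriceCalculatorTest {

    static class PriceCalculator {
        int totalCents(int unitCents, int quantity, int discountPercent) {
            int gross = unitCents * quantity;
            return gross - gross * discountPercent / 100;
        }
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();

        // Typical case: 3 items at $2.00 with a 10% discount -> $5.40.
        if (calc.totalCents(200, 3, 10) != 540) {
            throw new AssertionError("discount case failed");
        }
        // Edge case: no discount -> plain multiplication, $6.00.
        if (calc.totalCents(200, 3, 0) != 600) {
            throw new AssertionError("no-discount case failed");
        }
        System.out.println("All unit tests passed");
    }
}
```

Such tests run in seconds on any workstation or CI server, which is what makes them the cheapest layer of the pyramid.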

UI automation tests replay the typical scenarios of app usage. They can be launched either from a developer’s or tester’s workstation or remotely on hardware testing farms like AppThwack.

There are several approaches for developing automated tests for iOS and Android platforms.

For the Android platform we analysed the available testing techniques and chose the Robotium framework. Robotium tests are based on JUnit and provide ample opportunities to interact with the UI. They also allow tests to be integrated directly into the project, which gives us the opportunity to run automated tests on a developer workstation as well as several ways to use remote hardware farms like AppThwack.
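A Robotium test is an ordinary JUnit instrumentation test driven through the `Solo` helper. The sketch below is illustrative only: `LoginActivity`, the field indexes, and the on-screen texts are hypothetical placeholders, and compiling it requires the Android SDK plus the Robotium dependency:

```java
import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

// Sketch of a "through" scenario test with Robotium.
public class LoginScenarioTest extends ActivityInstrumentationTestCase2<LoginActivity> {

    private Solo solo;

    public LoginScenarioTest() {
        super(LoginActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Solo drives the UI: typing, clicking, scrolling, waiting.
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testLoginFlow() {
        solo.enterText(0, "demo@example.com"); // first EditText on screen
        solo.enterText(1, "secret");           // second EditText
        solo.clickOnButton("Sign in");
        // The scenario passes if the next screen shows the expected text.
        assertTrue(solo.waitForText("Welcome"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}
```

Because such a test is a plain JUnit class inside the project, the same scenario runs unchanged on a local emulator, an attached device, or a remote hardware farm.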

For the iOS platform, we consider the standard automated-testing facility provided by Xcode the optimal solution. It is based on test scenarios written in JavaScript. AppThwack supports this kind of tests as well.

What does the development/quality assurance process look like?

You bet it ain’t rocket science. Preparing for automated testing can be divided into the following stages:

  • Preparing the infrastructure to develop and run automated tests

  • Formalizing the application requirements

  • Writing manual test cases and scenarios for functional testing

  • Qualifying test scenarios for automation

  • Programming and debugging test scripts

  • Running automated tests (on development or QA environments, or on hardware farms)

  • Processing the test results

And the bottom line…

After a test run, the services mentioned above provide in-depth reporting: which tests failed, the hardware on which they failed, and, when needed, screenshots from different devices and screen resolutions.


Lost in Approaches

There are no advantages without disadvantages; this is not a perfect world, after all. So what are the disadvantages in our case?

  • No 100% automated test coverage. Unfortunately, not all test cases can be automated. Moreover, some automated tests require additional test-related infrastructure to be developed (test mocks, test harnesses, and so on). This means that a certain set of test cases will remain manual.

  • Increased cost of initial development. Enabling an automation-driven approach requires extra investment in test development, typically 30–70% of the effort spent on the initial application. The good news is that this investment pays for itself (through savings on manual QA) by the second or third application release.

  • Automated tests become obsolete and have to be maintained. For every new feature, we need to plan 30–50% additional development effort (writing new tests and maintaining the old ones). But again, this is repaid generously by consistent software quality, guaranteed wide hardware coverage, and happy customers.

Economic impact

In the experience of instinctools EE Labs, despite the increased costs at the initial stage of development, this approach has proven to pay off as early as the second application release.

The following economic benefits are also worth mentioning here:

  • a cost decrease in manual testing and re-testing (regression testing)

  • a decreased impact of the “human factor” on project quality

  • supporting a broader line of hardware variants becomes feasible (especially on Android)

  • stable reliability and quality levels appreciated by the customer

That, in short, is how we play this ball game. And how do you help your customers support their mobile apps on a broader device line?

 


Gary Weaver
E-mail: gary.weaver@instinctools.ru
Skype ID: gary.weaver360
Senior Business Development Manager
*instinctools EE Labs
