
Adventures in Test Automation – Real Device Testing vs. Virtualization


There’s a clear difference of opinion on how to test mobile apps with automation. The common ground is how to drive the tests: there is solid consensus that tests should be written with the popular Selenium WebDriver library and then pushed to devices through one of several interfaces.
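To make the WebDriver-driven workflow concrete, here is a minimal sketch of building the desired capabilities that an Appium session takes. The capability keys follow the Appium convention, but the device names, app paths, and the helper function itself are illustrative, not code from the original article.

```python
# Sketch: building Appium desired capabilities for a test session.
# Device names and app paths below are placeholders.

def desired_caps(platform, device_name, app_path):
    """Build a desired-capabilities dict for an Appium session."""
    caps = {
        "platformName": platform,   # "Android" or "iOS"
        "deviceName": device_name,  # emulator/simulator or real-device name
        "app": app_path,            # path to the .apk or .app under test
    }
    # Pick the platform's automation backend.
    if platform == "Android":
        caps["automationName"] = "UiAutomator2"
    else:
        caps["automationName"] = "XCUITest"
    return caps

# In a real test, these capabilities would be passed to
# webdriver.Remote("http://localhost:4723/wd/hub", caps)
# from the Appium Python client.
android = desired_caps("Android", "Nexus 7", "/builds/app-debug.apk")
print(android["automationName"])
```

The same capability dict shape drives emulators, simulators, and real devices alike; only the values change, which is what makes the emulator-vs-real-device choice a deployment question rather than a test-authoring one.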

The argument begins over whether it is good enough to focus testing on simulators and emulators, or whether it is essential to test on real devices. A couple of services out there champion each side of this discussion.

Virtual Devices for Mobile App Testing

Both Apple and Google provide virtual devices for testing apps on their mobile platforms, and at least one third party provides optimized Android virtual device images. The immediate benefit of virtual devices is clear: free is a price that appeals to any manager’s pocketbook, and the tools that Apple and Google provide are exactly that, free.

The catch comes when you boot them up and take a closer look. Let us begin with Apple.

Test Automation on an iOS Simulator

Note the word Simulator. That is very important. What Apple provides is something that gives the impression of being iOS, but it is a pure simulation. You will notice there are no apps, no App Store, and most of the standard settings are trimmed way back. It does not even emulate the ARM processor; code is compiled for x86.

In my mind, this does not make for an ideal solution. While Apple heavily sandboxes apps inside iOS, we have seen manual test results vary by carrier on what seem to be identical devices. This implies that the simulators should only be relied on for very basic build verification testing.

Test Automation on an Android Emulator

Google bundles an emulator with their SDK. This is a much better approach, but as you will soon discover, it has its hiccups as well.

The biggest positive for the emulator is that it is a real Android OS spinning up on top of a hardware emulator. This means you have an OS that will more closely match the software on a real device. That is about where the good points end.

When you spin up an emulator for the first time on an ARM system image, you will see what I mean. Phrases like “watching paint dry” and “a watched pot never boils” start trotting through your brain while you watch the cool Android logo. Boot times can run longer than five minutes, and some I simply gave up on after more than 15 minutes, never finding out just how long they would have taken. This is hardly efficient and certainly not practical for test automation. Once things are booted up, the situation does not improve.

I wrote a short test script that deployed the open-source F-Droid package manager app onto the Android device and then simply waited for everything to load in the UI. I ran the test on an ARM-based Android emulator configured as a Nexus 7 with 768 MB of RAM, then ran the same scenario on a Samsung Galaxy Tab S 10.5. On the emulator the test ran for 420 seconds before everything loaded. On the real device it took between 6 and 8 seconds.
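The “wait for everything to load” step in a test like this is typically a timed polling wait, which is also what gives you the elapsed-time comparison. Here is a self-contained sketch of that pattern; the `wait_until` helper and the toy `ui_loaded` condition are illustrative stand-ins for what an Appium explicit wait (WebDriverWait) does against a real app.

```python
import time

def wait_until(condition, timeout=420, poll_interval=0.5):
    """Poll `condition` until it returns truthy or `timeout` seconds pass.

    Returns the elapsed time on success; raises TimeoutError otherwise.
    This mirrors what an explicit wait does while a test waits for
    the app's UI to finish loading.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if condition():
            return time.monotonic() - start
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Toy stand-in for "the app's UI has loaded": true on the third poll.
calls = {"n": 0}
def ui_loaded():
    calls["n"] += 1
    return calls["n"] >= 3

elapsed = wait_until(ui_loaded, timeout=10, poll_interval=0.1)
print(f"UI loaded after {elapsed:.1f}s")
```

Because the wait returns as soon as the condition holds, the same test naturally reports 420 seconds on a slow emulator and single-digit seconds on real hardware.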

The other alternative is to run the emulator on top of an x86 Intel Atom image with Intel HAXM enabled. This is pretty impressive from a performance standpoint. On my quad-core Core i7 laptop with 16 GB of RAM and an SSD, times came in around 20 seconds. That is definitely usable for test automation, though we are back to the problem of the code being compiled for x86 rather than the ARM architecture that most real devices use.
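For reference, creating and launching an x86 emulator image looks roughly like the following, assuming a current Android SDK command-line setup. The API level, image name, and AVD name are examples, not prescriptions.

```shell
# Download an x86 system image, create an AVD from it, and boot it
# with hardware acceleration (HAXM on Intel hosts).
sdkmanager "system-images;android-25;google_apis;x86"
avdmanager create avd -n test-x86 -k "system-images;android-25;google_apis;x86"
emulator -avd test-x86 -accel on -no-snapshot
```

With acceleration available, `-accel on` fails fast if HAXM (or the platform's equivalent hypervisor) is missing, which is preferable in CI to silently falling back to slow full emulation.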

Mobile App Testing on Real Devices

As my Android emulator example shows, real devices offer a massive saving in execution time, but it comes at a price. Staying current with all of the latest devices from both major operating systems is an expensive prospect. There are simply thousands of Android devices, and it seems like there is a new set of Apple devices every year.

Automated Apple…

The desire to use real devices is sensible. Real devices are what your customers have, and the best QA tries to match them. I’ve already mentioned this is an expensive prospect, but there is another caveat to running automation on iOS.

The principal means we are using to drive our test cases is Appium, but there is a major limitation that seems to be imposed by Apple. Due to the way Apple instrumentation is configured, only one iOS device can be used by a single Mac at a time. This means we are literally talking about a 1:1 hardware ratio: one device dedicated to one Mac. If you want to support two or three versions of iOS on the variety of supported hardware, you are probably looking at reserving a dozen Mac systems. This hardly seems cost effective!
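The “dozen Macs” figure falls straight out of the 1:1 ratio; the numbers below are illustrative, not from a specific lab inventory.

```python
# With one iOS device per Mac, the Mac count scales linearly with the
# device matrix. Numbers here are illustrative.
ios_versions = 3         # e.g. the three most recent iOS releases
hardware_models = 4      # e.g. two iPhone and two iPad models
devices = ios_versions * hardware_models
macs_needed = devices    # 1:1 device-to-Mac ratio
print(macs_needed)       # → 12
```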

Unrestrained Android…

The story is quite different with Android. I don’t know if there is a limit to the number of Android devices that can be connected to one system, but it is entirely possible to spin up separate Appium instances and run concurrent tests on a single PC with multiple connected devices. Performance is good and mostly stable, though there are reports that Samsung devices occasionally lose their USB connectivity for no apparent reason. It did not come up in our testing, but from the noise on the web it seems like it will happen sooner or later.
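Running concurrent tests mostly comes down to giving each Appium server its own port and pinning it to one device by UDID. Here is a sketch of generating those launch commands; the UDIDs are placeholders (in practice they come from `adb devices`), and the port-stepping scheme is one convention, not the only one.

```python
# Assign each connected Android device its own Appium server port so
# tests can run concurrently on one PC. UDIDs below are placeholders.

def appium_commands(udids, base_port=4723):
    """Return one `appium` launch command per device, each on its own port.

    Ports step by 2 to leave room for each server's Android bootstrap port.
    """
    commands = []
    for i, udid in enumerate(udids):
        port = base_port + 2 * i
        commands.append(f"appium -p {port} -U {udid}")
    return commands

for cmd in appium_commands(["emulator-5554", "0a1b2c3d"]):
    print(cmd)
```

Each test process then points its WebDriver session at its own server's port, so the concurrent runs never contend for a device.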

Who Will Win the Battle?

The reality of the situation is hitting home. Companies that use emulators and simulators for the bulk of their testing are looking at their bug reports and turning to real devices, while others that focus on cost-prohibitive hardware are looking to save a little by using emulators. In our opinion, a middle ground has to be sought.

With Compromise Comes Peace

This conundrum can be solved, however. We recommend a test approach driven by analytics and practicality. Device emulators and simulators should be incorporated into the build process to catch major programming errors. Testing should then be performed on real devices to get a clear picture of just how well the app will do once it is in the real world.

Once the app is launched publicly, analytics should be gathered to determine what devices are most commonly used with the app and then the devices should be incorporated into the test automation framework.
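The analytics-driven selection can be sketched as picking the most-used devices until a target share of users is covered. The device names, usage percentages, and the 80% target below are all made up for illustration.

```python
# Pick the smallest set of devices covering a target share of users,
# based on analytics data. Names and shares below are invented.

def devices_to_cover(usage_share, target=0.80):
    """Return devices (most-used first) until cumulative share >= target."""
    chosen, covered = [], 0.0
    for device, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        chosen.append(device)
        covered += share
        if covered >= target:
            break
    return chosen, covered

analytics = {
    "Samsung Galaxy S": 0.35,
    "Nexus 7": 0.25,
    "iPhone": 0.20,
    "iPad": 0.12,
    "Other": 0.08,
}
devices, covered = devices_to_cover(analytics)
print(devices, round(covered, 2))
```

Re-running this as analytics shift keeps the real-device pool small while tracking what customers actually use.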

This blended approach will allow excellent coverage and give a high confidence that future releases will be stable and ready for the mobile world.

Let Us Help with Your Test Automation Project

Setting up test automation can be a daunting project, and it can be even harder to staff. This is an area where QualityLogic can help take the burden off your developers. We have a large library of devices that we continually update as new devices hit the marketplace. Our dedicated test automation engineers are ready to help implement the test automation best suited to your environment.

Contact Us