I am working on my thesis on testing mobile applications. I am trying to identify the failure modes of mobile applications running over 2G, 3G, the 802.11 family, Bluetooth, and so on.
I would be obliged if you could supply some pointers on testing these applications, not only with respect to security but also to other quality attributes such as usability, functionality, performance, and reliability.
I am also trying to identify failure modes in individual components -- for example, mobile database failures, synchronization issues, WAP gateway failures, and so on.
Any help or suggestions in this regard would be highly appreciated.
In some cases, testbeds can easily be isolated -- for example, testing Microsoft file sharing between PDAs and laptops over 802.11b to a Windows 2000 server located on an Ethernet LAN directly connected to a wireless AP. Testbeds that require use of subscriber networks and the public Internet are much more difficult, because tests tend not to be nearly as repeatable and results are influenced by other (non-test) activity. Ideally, you want to keep conditions constant and vary only one or two parameters at a time so that you can compare results.
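To make "vary only one or two parameters at a time" concrete, a test matrix can be generated mechanically so every combination of the variable parameters is exercised while everything else stays fixed. The parameter names, values, and the `run_transfer` stub below are hypothetical placeholders, not part of any real harness:

```python
import itertools

# Hypothetical test matrix: only file size and VPN on/off vary between
# runs; repeat count and everything else stay constant so results
# remain comparable across cells.
FILE_SIZES_KB = [64, 512, 4096]   # parameter under test
VPN_ENABLED = [False, True]       # second parameter under test
REPEATS = 10                      # fixed for every cell of the matrix

def run_transfer(size_kb, vpn):
    """Placeholder for the actual transfer over the testbed;
    would return elapsed seconds in a real harness."""
    return 0.0

def build_matrix():
    # Cartesian product of the two variable parameters.
    return list(itertools.product(FILE_SIZES_KB, VPN_ENABLED))

for size_kb, vpn in build_matrix():
    for trial in range(REPEATS):
        run_transfer(size_kb, vpn)
```

The point of generating the matrix rather than hand-picking runs is that every cell gets the same repeat count, so no combination is silently under-sampled.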
For example, I have attempted to measure throughput and latency of file transfers conducted over a VPN tunnel from a Palm PDA using GSM, accessing an FTP server located on my perimeter firewall's DMZ. To be statistically meaningful, I needed to repeat the same test many times for each representative file, averaging the results after discarding the min and max values. I could use my results to compare throughput/latency with and without VPN -- but I couldn't know whether my results were typical of GSM without having tested multiple service provider networks from a variety of locations. Even then, I should probably test with several PDAs with different CPU speeds and RAM to separate the platform's effect on throughput from GSM's.
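The averaging step described above -- repeat the run, drop the single min and max readings, average the rest -- is a simple trimmed mean. A minimal sketch (the sample readings are invented for illustration):

```python
def trimmed_mean(samples):
    """Average after dropping the single lowest and highest readings,
    as described for the repeated throughput/latency runs."""
    if len(samples) < 3:
        raise ValueError("need at least 3 samples to trim min and max")
    trimmed = sorted(samples)[1:-1]   # discard min and max
    return sum(trimmed) / len(trimmed)

# Hypothetical throughput readings in KB/s from repeated FTP transfers;
# the outliers (12.7 and 55.0) are dropped before averaging.
readings = [41.2, 39.8, 44.1, 40.3, 12.7, 40.9, 42.0, 39.5, 41.6, 55.0]
print(trimmed_mean(readings))
```

Dropping the extremes guards against a single stalled or unusually fast transfer skewing the average, which matters when the subscriber network introduces non-test traffic you can't control.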
Testing functionality is typically a matter of enumerating the functions that an application should support, then defining a set of tests that exercise those functions, with pass/fail results. Problems that you encounter running functional tests can serve as input to usability evaluation. For usability, you want several users with different skill levels attempting to accomplish a given set of business objectives, producing subjective ratings that indicate how easy or hard each task was.

Performance test results may be easier to quantify, but can be very difficult to interpret. For example, wireless throughput is always higher in the lab under ideal conditions than in real life, so be very careful about the conclusions you draw from performance tests.

To test failure modes in components, you'd want to enumerate a number of possible failure conditions and simulate them. You must also identify exactly what you're measuring -- for example, when measuring time to re-establish the connection after network loss-of-signal, do you measure network connection resumption or mobile application connection resumption?
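The distinction between network resumption and application resumption can be captured by timing the two events separately after a simulated loss of signal. The probe functions below are hypothetical stand-ins -- in a real harness they would query the OS network interface and the application's session state:

```python
import time

def wait_for(condition, timeout=30.0, poll=0.1):
    """Poll a condition until it returns True; return elapsed seconds."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if condition():
            return time.monotonic() - start
        time.sleep(poll)
    raise TimeoutError("condition not met within timeout")

# Hypothetical probes -- replace with real checks of the network
# interface and the application session in an actual testbed.
def network_is_up():
    return True

def app_session_restored():
    return True

def measure_recovery(drop_signal):
    drop_signal()                           # simulate loss of signal
    t_net = wait_for(network_is_up)         # network connection resumption
    t_app = wait_for(app_session_restored)  # application-level resumption
    return t_net, t_net + t_app             # report both, measured separately
```

Reporting the two times separately lets you tell whether a slow recovery is the network's fault or the application's.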
Depending upon the mobile application, device, and network you want to test, you can probably find test tools on the Internet to help you. For example, you can find a Netperf WLAN benchmarking tool at Atheros. If you want to simulate web traffic, you might consider using a tool like Web Polygraph. You can find general guidance on benchmarking network protocols at the IETF's Benchmarking Methodology workgroup page. These are just a few starting points for your research into this wide-open topic.