This post looks at another way to gain greater insight into what is happening between users and eGov services: using analytics to supplement our tests, focusing on specific issues or on specific areas where analytical results indicate that something might be amiss.
In our previous post we discussed some of the things to consider when using web logs, what questions to ask, and how to apply the answers to our tests. In the same way we can now take a look at analytics, specifically Google Analytics, since it is the tool used by the local eGov services here, to understand some of the pros and cons of analytics and what questions we can ask ourselves to make sure that eGov services work reliably when using FOSS. This is by no means a guide to Google Analytics, but rather a supplement: used in conjunction with web log analysis, it gives us a finer granularity of information and helps us become more efficient, smarter testers of eGov services. The knowledge gained here can also be applied to other web-based services, and as such should be seen as another component of the “Testing Toolkit”.
Google Analytics is a popular tool for analysing what is going on with websites, but a closer look reveals some immediate disadvantages:
Some disadvantages to Google Analytics
- Can stop working or slow down for brief periods of time
- Spider and bot traffic does not get tracked
- Server response codes are not stored. As seen with web log analysis, the response codes can contain very valuable information.
- Log files provide greater consistency
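Since Analytics does not store server response codes, the web logs remain the place to check them. As a minimal sketch, assuming an access log in the common Apache/nginx “combined” format (adjust the regex to your server's actual format), we can tally status codes to spot error spikes:

```python
import re
from collections import Counter

# Match the request and the three-digit status code in a combined-format log line.
LINE_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_statuses(lines):
    """Count HTTP status codes across access-log lines."""
    counts = Counter()
    for line in lines:
        match = LINE_RE.search(line)
        if match:
            counts[match.group("status")] += 1
    return counts

# Two illustrative log lines (the paths are hypothetical examples).
sample = [
    '1.2.3.4 - - [10/Oct/2013:13:55:36 +0200] "GET /pay-garbage HTTP/1.1" 200 2326',
    '1.2.3.4 - - [10/Oct/2013:13:55:40 +0200] "POST /pay-garbage/step2 HTTP/1.1" 500 512',
]
print(count_statuses(sample))  # Counter({'200': 1, '500': 1})
```

A sudden rise in 5xx counts for a particular service would be exactly the kind of signal Analytics alone cannot give us.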
Some advantages to Google Analytics
- Reports are customizable, which can save time when focusing on a particular service
- The ability to measure internal site search. This might help improve eGov services if, for example, many people are searching for the same thing. Whatever testing we do to check whether a service has problems, if the service is difficult to find some users may simply conclude that the “eGov services don’t work”. If so, we can save time and effort by identifying these findability problems first, before chasing false positives.
- Using profiles for long-term segmentation analysis can also be very useful when testing a service that consists of a multi-stage process. These are, above all, the crucial services. If an inconsistency is detected somewhere, we can pinpoint with a great deal of accuracy whether the failure is caused by the browser, the OS, or other factors such as updates. This lets testers quickly focus on areas tailor-made for these “alarm” zones rather than taking a more traditional approach like testing all services. Bear in mind that we are discussing sites with a very large number of services; testing every one of them would not be feasible, which is exactly why we are looking for a more efficient means of testing.
- Monitoring the amount of mobile and tablet traffic. This could also save time and effort by showing how much should be invested in testing on mobile platforms. Is the user base large enough to warrant testing? More importantly, do users on mobile devices actually complete transactions, or are they just browsing for information like the current weather? Knowing this could save us time: if, for example, mobile users account for only 2% of the traffic, it makes more sense to invest time and resources in fixing and improving services used through the traditional browsers found on laptops and desktops. It is common sense, but without the analysis we would base too much of this purely on assumption.
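The segmentation idea above can be sketched in a few lines. Assuming a hypothetical export of per-visit records (the field names are illustrative, not a real Analytics schema), we can compute the completion rate of a five-step service per browser, and the mobile share of traffic:

```python
from collections import defaultdict

# Hypothetical per-visit records for a five-step service; "last_step" is the
# furthest step the visitor reached. These values are made up for illustration.
visits = [
    {"browser": "Firefox", "device": "desktop", "last_step": 5},
    {"browser": "Firefox", "device": "desktop", "last_step": 5},
    {"browser": "IE8",     "device": "desktop", "last_step": 2},
    {"browser": "IE8",     "device": "desktop", "last_step": 2},
    {"browser": "Safari",  "device": "mobile",  "last_step": 5},
]

def completion_rate_by_browser(visits, final_step=5):
    """Fraction of visits per browser that reached the final step."""
    totals, completed = defaultdict(int), defaultdict(int)
    for v in visits:
        totals[v["browser"]] += 1
        if v["last_step"] >= final_step:
            completed[v["browser"]] += 1
    return {b: completed[b] / totals[b] for b in totals}

def mobile_share(visits):
    """Fraction of all visits made from mobile devices."""
    return sum(1 for v in visits if v["device"] == "mobile") / len(visits)

print(completion_rate_by_browser(visits))  # {'Firefox': 1.0, 'IE8': 0.0, 'Safari': 1.0}
print(mobile_share(visits))                # 0.2
```

A browser whose completion rate drops far below the others is exactly the kind of “alarm” zone the text describes, and the mobile share answers the “is it worth testing on mobile?” question with numbers rather than assumption.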
We as a team can certainly perform a list of heuristic tests, since we work with testing on a daily basis. However, another problem remains: when a task is completed, for example a five-step process to pay for your garbage collection, it gets sent to the back office and becomes official. In other words, we can simulate the user experience only up to a certain point, let’s say step four, and then we cannot continue, because if we do, the process is tagged as an official transaction.
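One way around this is a staged test harness that runs every step except the one that commits the transaction. The sketch below is hypothetical; the step functions stand in for real form submissions, and the point is only the stop-before-commit structure:

```python
# Hypothetical stand-ins for the steps of a multi-step eGov process.
def fill_details(state):
    state["details"] = "ok"
    return state

def choose_payment(state):
    state["payment"] = "ok"
    return state

def review(state):
    state["reviewed"] = True
    return state

def confirm(state):
    state["committed"] = True  # the "official" step we must never trigger in tests
    return state

STEPS = [fill_details, choose_payment, review, confirm]

def run_until(steps, stop_before):
    """Run the process up to, but not including, the step named stop_before."""
    state = {}
    for step in steps:
        if step.__name__ == stop_before:
            break
        state = step(state)
    return state

state = run_until(STEPS, stop_before="confirm")
assert "committed" not in state  # no official transaction was created
print(state)  # {'details': 'ok', 'payment': 'ok', 'reviewed': True}
```

Everything up to step four gets exercised; the back office never sees a transaction.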
There is another possibility, however, that could help improve services: group heuristic evaluations. The idea is to get a diverse group of people into a room, with diverse browsers and operating systems, and have them go through a checklist. The more diverse the feedback the better. If something is wrong with a service, the collected data could reveal a pattern showing where things go wrong, or which aspects of usability need improving. It could be, for instance, that the logs and analytical data show a process was not completed, yet this had nothing to do with browser or operating system incompatibility and everything to do with something unclear in the user interface.
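Finding that pattern in the collected checklist data is a simple aggregation. A sketch, with entirely hypothetical evaluation results: if an issue recurs across every browser and OS, it points at the user interface rather than at incompatibility.

```python
from collections import Counter

# Hypothetical group-evaluation results: each entry records the evaluator's
# environment and the checklist items that failed for them.
results = [
    {"browser": "Firefox", "os": "Linux",   "failed": []},
    {"browser": "IE8",     "os": "Windows", "failed": ["unclear error message"]},
    {"browser": "IE8",     "os": "Windows", "failed": ["unclear error message"]},
    {"browser": "Safari",  "os": "OS X",    "failed": ["unclear error message"]},
]

def failure_patterns(results):
    """Tally failures per (browser, OS) pair and per checklist issue."""
    by_env, by_issue = Counter(), Counter()
    for r in results:
        for issue in r["failed"]:
            by_env[(r["browser"], r["os"])] += 1
            by_issue[issue] += 1
    return by_env, by_issue

by_env, by_issue = failure_patterns(results)
print(by_issue.most_common(1))  # [('unclear error message', 3)]
```

Here the same issue appears on three different environments, suggesting a UI problem rather than a browser or OS one.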
For an interesting read on the subject, Jakob Nielsen’s 10 usability heuristics can be found at the following link:
Just as web log analysis can give us valuable insight, so can analytics and heuristics. Before we get too excited, we first need access to the actual logs and analytical data (not to mention resolving the privacy issues), but since we are fairly confident we can get access, this should open up many new possibilities and greatly enhance our effectiveness when testing eGov services. We can combine traditional testing with the information from web logs and analytics to improve our testing significantly. More to come once we have access to some data.