Another approach to eGov testing: Web log analysis



In parallel with conventional Selenium testing, we need to look at other forms of testing in order to gather as much valuable information as we can. With so many services to test, we cannot realistically proceed in the conventional way and systematically test each one. That would be a monumental task: possible, but not very efficient.

One way to get more useful information would be to look at the web logs. There are several tools out there for analysing web logs; one we have heard is rather useful is Log Parser, which has an SQL-like query interface.
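The same kind of SQL-style aggregation that Log Parser offers can be sketched in a few lines of Python. The sketch below counts HTTP status codes in an Apache/nginx "combined" format access log; the log line layout and the sample entries are assumptions, so the regex would need adjusting to whatever format the local administration actually uses.

```python
import re

# Assumed layout: Apache/nginx "combined" log format.
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def status_counts(lines):
    """Count HTTP status codes across a set of raw log lines."""
    counts = {}
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            status = m.group("status")
            counts[status] = counts.get(status, 0) + 1
    return counts

# Hypothetical log lines, just for illustration.
sample = [
    '10.0.0.1 - - [01/Feb/2013:10:00:00 +0100] "GET /pay-bill HTTP/1.1" 200 512 "-" "Mozilla/5.0 (X11; Linux) Firefox/18.0"',
    '10.0.0.2 - - [01/Feb/2013:10:00:05 +0100] "GET /pay-bill HTTP/1.1" 500 0 "-" "Mozilla/5.0 (Windows NT 6.1) MSIE 9.0"',
]
print(status_counts(sample))  # {'200': 1, '500': 1}
```

A spike of 500s on a particular service URL is exactly the kind of signal that tells us where to point the manual or Selenium tests first.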

This post will not focus on how to actually read web logs, since their format can vary; besides, we need access to the logs before we can work out what we can use to our advantage. While we recognise the important information we can extract from them, there are other issues to be aware of, such as privacy. It is crucial to make local authorities understand that there is no breach of privacy and that the information will be used only to improve services and enhance the testing that needs to be done. If privacy does become a major concern, it should be made clear to the owners of the site being tested that they retain ownership and that we are only there to give advice, or recipes, on how to test more effectively.

Gathering data and patterns on how the real site is used today should be very helpful in working out where errors might arise. This could be a combination of web logs and analytics: if the two complement each other we have a degree of consistency, and if they don't we know something is wrong somewhere. At the very least it can help us narrow down a possible error among the many services and areas that need to be tested.

One of the benefits of analysing web logs is that the user agent string is available, as well as other useful information such as status codes. Since almost all eGov services are “services”, there is usually a task that needs to be completed before we can say a transaction was successful. This process is commonly divided into different steps: online forms need to be completed, user confirmation is required, data is submitted, and so on. This leads us to one interesting KPI (key performance indicator) that we can look at and think about.
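To break any of these numbers down per browser, the user agent string first has to be mapped to a browser family. Real user-agent parsing is messier than this, so the sketch below is only a rough illustration of the idea:

```python
def browser_family(user_agent):
    """Very rough user-agent classification, for illustration only."""
    ua = user_agent.lower()
    if "firefox" in ua:
        return "Firefox"
    # Chrome user agents also contain "safari", so check Chrome first.
    if "chrome" in ua:
        return "Chrome"
    if "msie" in ua or "trident" in ua:
        return "Internet Explorer"
    if "safari" in ua:
        return "Safari"
    return "Other"

print(browser_family(
    "Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0"
))  # Firefox
```

With a mapping like this, status codes and completion rates can be grouped by browser rather than looked at in aggregate.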


Time on site compared to task completion rate:


We can measure task completion rate and see what percentage of users who start at the beginning reach the end, or, to put it differently, finish the transaction or process. This immediately leads to other questions:

  • How do the different browsers compare when looking at task completion rates?

  • When the user begins a transaction like paying some sort of utility bill, is the task completion rate higher on some browsers than on others?

  • If so why?

  • Could this mean that when some services are used that somewhere along the process something goes wrong with some browsers?
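The per-browser comparison asked about above can be sketched as a simple funnel: of the sessions that hit the first step of a service, how many also hit the final confirmation page? The step URLs and sessions below are made up for illustration.

```python
# Hypothetical step URLs for a multi-step "pay a utility bill" service.
START, END = "/pay-bill/step1", "/pay-bill/confirmation"

def completion_rates(sessions):
    """Per-browser completion rate.

    sessions: iterable of (browser, set of page paths seen in the session).
    """
    started, completed = {}, {}
    for browser, pages in sessions:
        if START in pages:
            started[browser] = started.get(browser, 0) + 1
            if END in pages:
                completed[browser] = completed.get(browser, 0) + 1
    return {b: completed.get(b, 0) / started[b] for b in started}

sessions = [
    ("Firefox", {START, "/pay-bill/step2", END}),
    ("Firefox", {START, "/pay-bill/step2"}),
    ("IE8", {START}),
    ("IE8", {START}),
]
print(completion_rates(sessions))  # {'Firefox': 0.5, 'IE8': 0.0}
```

A browser whose completion rate is far below the others is a strong hint that something breaks for it somewhere along the process, and tells us exactly which service and which step to test by hand.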


Since there are many services to test, asking questions like the above can help determine where the potential problems are. This could be much faster than manually testing every service. It also prompts further questions:


  • Are these returning users or unique visitors?

  • Is there any sort of pattern here, where e.g. unique visitors have a lower completion rate, not because the browser does not work but because the user simply could not complete the task?

  • How does `Bounce Rate` play into all this? (This is the percentage of traffic that leaves instantly, or stays less than 10 seconds on a page.)

  • Could it be that users cannot complete a task because it is too complicated, and this has nothing to do with the browser being used? In other words, the user experience needs to be improved, so some of the required steps should be simplified. This could provide valuable feedback, but would have no impact on which FOSS technology is used to access the eGov services.

  • What else do we need to ask ourselves?
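The bounce-rate definition used in the list above (visits shorter than 10 seconds) is simple enough to compute directly from session durations. The durations here are hypothetical:

```python
def bounce_rate(durations_seconds, threshold=10):
    """Share of visits lasting less than `threshold` seconds."""
    if not durations_seconds:
        return 0.0
    bounces = sum(1 for d in durations_seconds if d < threshold)
    return bounces / len(durations_seconds)

print(bounce_rate([3, 45, 8, 120, 2]))  # 0.6
```

A high bounce rate on a service's entry page, combined with a normal completion rate for those who stay, would point at a findability or first-impression problem rather than a broken browser.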

Negatively influencing web logs through the testing machine

By using the testing machine we are skewing the web logs of the local public administration: we run the same tests at the same time using the same browsers and operating systems, including mobile testing with Android clients. The services being tested need to know that access from our distinct IPs can be ignored. In conclusion, we need to help improve the services, not complicate things further. By using the data from web logs we can get a lot of useful information without writing hundreds of tests. In another post we will share some findings and conclusions once we have access to some data.
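Filtering our own test traffic out before computing any statistics is straightforward once the testing machine's addresses are known. The IP addresses below are placeholders from the documentation ranges, not our real ones:

```python
# Placeholder addresses -- replace with the real testing machine IPs.
TEST_MACHINE_IPS = {"192.0.2.10", "192.0.2.11"}

def without_test_traffic(entries):
    """Drop log entries originating from the testing machine.

    entries: iterable of (ip, rest-of-log-data) pairs.
    """
    return [e for e in entries if e[0] not in TEST_MACHINE_IPS]

entries = [
    ("192.0.2.10", "GET /pay-bill"),   # our own Selenium run
    ("203.0.113.5", "GET /pay-bill"),  # a real citizen
]
print(without_test_traffic(entries))  # [('203.0.113.5', 'GET /pay-bill')]
```

Doing this as the very first step of any analysis keeps our automated runs from inflating visit counts or distorting the per-browser completion rates.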


