Today’s complex, integrated, multi-faceted mobile workforce scheduling systems offer an enormous number of opportunities for increased efficiency. The technology is truly impressive: schedulers and dispatchers can quickly take actions that invoke complex business logic on backend servers; integrated mapping systems display tasks, resources, and assets for at-a-glance visibility; business analytics and reporting systems have access to vast amounts of business information; and mobile users interact with more and more real-time data every day.
Unfortunately, this opportunity comes with an equally great risk: the complex, integrated system you’ve invested a ton of effort implementing may be woefully inefficient RIGHT NOW.
It seems a strange paradox that while mobile workforce management solutions grow ever more seamless, unified, and user-friendly for users in the field, the level of integration and systemic complexity behind them has introduced a myriad of possibilities for sub-optimal system performance, requiring near-superhuman technical and business savvy to optimize. This ever-increasing complexity, which we as implementers must not only cope with but excel at, offers a nearly endless pool of variables to solve for in the performance and load testing equation.
Mobile workforce scheduling performance
A well-known truism in the realm of technology, science or management is:
“If you can’t measure it, you can’t improve it”
So the question arises: with so many integration touchpoints, disparate technology stacks, backend processes, system demand fluctuations and hardware choices (have you heard about this new cloud thing?), how is a company supposed to get their arms around these variables and produce meaningful performance benchmarks? How can you execute tests whose output is truly meaningful and provides solid direction for tangible improvement?
The answer is actually quite complex once we begin to look beneath the surface. It may be one thing to request thousands of appointment booking requests from a scheduling server and measure the response time or to run some optimization scenarios and evaluate hardware performance. But what about the data?
- Have we considered the impact of the geographic dispersal of service points relative to the time constraints for each of the service requests?
- What is the impact of the appointment window sizes we are requesting?
- Could service- or billing-related workload shifts be cyclical in nature, affecting the way our scheduling system load is distributed?
- What about the mobile request processing and data transfer traffic jams that can occur during peak or exceptional times of the day due to business norms such as morning truck rolls, storm outage initiation or back office user habits?
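The last question above is easy to make concrete with a quick back-of-the-envelope simulation. The sketch below models a morning truck-roll surge hitting a backend whose hourly capacity comfortably exceeds the *average* daily load; every number in it is an illustrative assumption, not a measurement from any real system.

```python
# Illustrative sketch: a morning "truck roll" surge can overwhelm capacity
# that is more than adequate for the daily average. All figures below are
# hypothetical assumptions chosen to show the shape of the problem.

CAPACITY_PER_HOUR = 500  # requests the backend can process per hour (assumed)

# Assumed hourly mobile request volumes for a 6:00-18:00 workday,
# peaking when crews sync at the start of their shifts.
hourly_requests = {
    6: 200, 7: 1400, 8: 900, 9: 400, 10: 300, 11: 300,
    12: 350, 13: 300, 14: 300, 15: 350, 16: 450, 17: 250,
}

backlog = 0
for hour in sorted(hourly_requests):
    arrived = hourly_requests[hour]
    # Requests queue up whenever arrivals exceed processing capacity.
    backlog = max(0, backlog + arrived - CAPACITY_PER_HOUR)
    print(f"{hour:02d}:00  arrived={arrived:5d}  backlog={backlog:5d}")

average = sum(hourly_requests.values()) / len(hourly_requests)
print(f"average load {average:.0f}/hour vs capacity {CAPACITY_PER_HOUR}/hour")
```

Under these assumed numbers the average load sits below capacity, yet the 7:00 surge builds a queue that does not fully drain until late afternoon, which is exactly why testing against average volumes alone can miss real-world slowdowns.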
There are, of course, technical areas to consider as well. I list a few below, but this is by no means a comprehensive list of the opportunities for improving overall system performance:
- Has your database been optimized? Spend some time tuning the database layer: indexing, analyzing long-running queries, and managing table sizes.
- Is your GIS server scaling to its full potential? There’s no point in using an 8-core server to host a single-threaded GIS configuration.
- Is your GIS cache table exceedingly large? If so, you should probably do something about that.
- When was the last time the evaluation order of your business logic was analyzed? How is it affecting the quality of your optimization?
- Are your load balancers configured for your specific implementation? Load balancers are typically tuned to balance stateful website traffic, which is a very different need than a ClickSchedule implementation.
- How is your optimization divided up? Consider effective decomposition of the optimization and scheduling problems.
- Are any of your customizations inefficient, causing bottlenecks, or leaking memory?
- Did all those changes make a difference? Calculate mobile transaction volumes based on predicted need and accurately simulate those volumes in a repeatable fashion.
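That last point, repeatable volume simulation, can start very simply: replay a fixed number of booking requests at a fixed concurrency and record the latency distribution. The sketch below uses a hypothetical `book_appointment()` stub that merely sleeps to simulate server round-trips; in a real test you would replace it with calls to your actual scheduling API. It is a minimal harness shape, not a definitive load-testing tool.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def book_appointment(request_id: int) -> float:
    """Stand-in for a real booking call; swap in your HTTP client here.
    The sleep simulates server latency (hypothetical timings)."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulated round-trip
    return time.perf_counter() - start


def run_load_test(total_requests: int, concurrency: int) -> dict:
    """Fire a fixed volume of requests and summarize response times."""
    random.seed(42)  # fixed seed keeps runs comparable, not identical
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(book_appointment, range(total_requests)))
    latencies.sort()
    return {
        "requests": total_requests,
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
        "max_s": latencies[-1],
    }


if __name__ == "__main__":
    print(run_load_test(total_requests=200, concurrency=20))
```

Reporting a percentile (p95) alongside the mean matters here: a healthy average can hide the tail latencies that back-office users and field crews actually notice, and comparing the same fixed request mix before and after each tuning change is what tells you whether the change made a difference.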
Each of the items above represents a small piece of the performance puzzle. Diabsolut has developed a performance and load testing approach that isolates and identifies the root causes of bottlenecks and slowdowns, even flagging a problem that exists but has not yet become critical.
Diabsolut recommends putting effort into performance testing and subsequent improvements that deliver much more than just a report. Our fusion of business and technical expertise, implementation-specific nuance, and industry experience, along with our holistic testing and improvement approach, addresses the problem from a diverse set of perspectives. Combining our proven performance improvement capabilities with our industry experience will truly reduce risk and raise confidence in the complex ecosystem that is your mobile workforce management system.