I often tell students, “If the model’s behaviour doesn’t match reality, there’s something wrong, so fix it before moving on.” I should add that one of two things could be wrong:
- there’s a mistake in the model, or
- the model is telling you something you did not know about the real world.
In a friend’s model, service-staff productivity had historically been rising as the staff gained experience, from ~3 customer tickets fixed per day to ~10 per day. Recently, though, despite low staff turnover, productivity was falling.
It turned out that customer relationship managers – trying to be helpful – were emailing support issues directly to the support staff instead of going through the ticket system. Discovering this brought two benefits:
- fixing the issue helped the service team leader manage the work pressure on his team much better
- much more importantly, the product-development, marketing, sales and support teams all realised their product had bigger quality issues than they had thought (the issues coming in from the CRMs were far more serious than the mostly routine ticket issues)
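The mechanism behind the “falling” number can be sketched in a few lines. This is a minimal, hypothetical illustration (the figures are invented, not my friend’s actual data): productivity measured from the ticket system drops when work starts arriving by email instead, even though the team’s real output is unchanged.

```python
# Illustrative sketch with invented numbers: when some work bypasses
# the ticket system, productivity *as measured from tickets* falls
# even though real output does not.

def measured_productivity(tickets_closed, staff):
    """Productivity as the model saw it: tickets per person per day."""
    return tickets_closed / staff

def actual_productivity(tickets_closed, emailed_issues, staff):
    """Productivity including work that never entered the ticket system."""
    return (tickets_closed + emailed_issues) / staff

staff = 5

# Earlier period: everything goes through the ticket system.
before = measured_productivity(tickets_closed=50, staff=staff)  # 10 per day

# Later period: CRMs email 20 issues a day directly to support staff.
seen = measured_productivity(tickets_closed=30, staff=staff)  # 6 per day: looks like a fall
real = actual_productivity(tickets_closed=30, emailed_issues=20, staff=staff)  # still 10 per day

print(before, seen, real)
```

The model was “wrong” only in the sense that its input data no longer captured all the work being done, which is exactly the second kind of mismatch described above.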
This had big implications for their mid-term business development strategy. Should they pause winning new customers while they fixed the product’s quality issues – or keep going and risk annoying existing and new customers alike?
I hope you find these insights useful. If so, please sign up for my weekly Briefings, which build up, in bite-size pieces, the whole logical rationale that makes these models so reliable and powerful.