For Creative Operations and Project Managers (a.k.a. Traffic Managers, Studio Operations) who are tasked with delivering marketing and creative assets on time, the Review & Approval stage of the creative production process can be likened to running a race. Your Creatives and Reviewers are like the runners on a relay team, and the Assets are the baton passed between them. However, just as a relay team doesn't accept merely finishing a race as good enough, simply getting Assets produced shouldn't be good enough either. Runners strive for the fastest time, and the Review and Approval Process should be no different. Runners know which metrics matter when evaluating their performance, and how to interpret those metrics to get faster times. The question becomes: what should be measured in the Review and Approval Process, and why?
In this post I'll outline the must-have, building-block review & approval metrics that you can use to identify opportunities to improve the speed of your review & approval race. Keep in mind that these metrics (like any metrics) are not meant to make decisions for you, and there is no single value that is ideal for all processes. They are intended to help you ask better questions ("Why did it take 35% longer to complete the review process in September?") so you can determine the root cause and make the required changes.
Average Review Cycle: This is the big review & approval metric: it measures the average time the Review Process takes from start to finish. In our analogy, it's the race time. Obviously, faster is better. But it's easy to just say "go faster"; without knowing where improvements can be made, you could end up sacrificing quality and compliance along the way. Telling the relay team to get to the finish line faster could result in some runners cutting corners. In the same way, telling your Creative Team to meet shorter deadlines might mean crucial steps get skipped. So how do you know where to focus your efforts? Let's delve into more specific metrics that break the Average Review Cycle into its parts, each pointing to specific improvements that can be made.
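As a concrete sketch, assuming you can export per-asset review start and final approval timestamps from your tool of choice (the asset names and dates below are invented for illustration), the Average Review Cycle is just the mean of the elapsed times:

```python
from datetime import datetime

# Hypothetical review records: (asset, review_start, final_approval).
# Field layout and data are illustrative, not from any specific tool.
reviews = [
    ("banner_v3",  datetime(2024, 9, 2, 9, 0),  datetime(2024, 9, 6, 17, 0)),
    ("email_hero", datetime(2024, 9, 3, 10, 0), datetime(2024, 9, 5, 12, 0)),
    ("promo_vid",  datetime(2024, 9, 4, 8, 0),  datetime(2024, 9, 11, 16, 0)),
]

# Average Review Cycle: mean elapsed time from review start to final approval.
cycle_hours = [(end - start).total_seconds() / 3600 for _, start, end in reviews]
average_review_cycle = sum(cycle_hours) / len(cycle_hours)
print(f"Average Review Cycle: {average_review_cycle:.1f} hours")  # 110.0 hours
```

Tracking this number month over month is what lets you spot (and then dig into) a jump like the "35% longer in September" example above.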
Average Time to First Response: This measures the time it takes for Reviewers to provide their feedback on creative work. If this number seems high, or you want to drive it lower, start by making sure Reviewers receive their notifications in a timely manner. Depending on the Team Response Rate (discussed below), there could also be a bottleneck in the number of available Reviewers.
Feedback Turnaround Time: Once feedback has been gathered, how quickly can Creatives act on the requested changes? This is essentially the mirror image of the Time to First Response, with the focus on Creatives instead of Reviewers. A large number here can mean that Creatives aren't aware that feedback is complete, or that there are too few Creatives for the volume of work in flight at any one time.
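These two metrics split each review round at the moment the first feedback lands. A minimal sketch, assuming a per-version event log with three invented timestamps (when the version was shared, when the first Reviewer responded, and when the Creative delivered the revision):

```python
from datetime import datetime

# Hypothetical per-version event log; field names are assumptions.
versions = [
    {"shared": datetime(2024, 9, 2, 9, 0),
     "first_response": datetime(2024, 9, 3, 9, 0),
     "revision_delivered": datetime(2024, 9, 4, 9, 0)},
    {"shared": datetime(2024, 9, 4, 9, 0),
     "first_response": datetime(2024, 9, 4, 15, 0),
     "revision_delivered": datetime(2024, 9, 5, 9, 0)},
]

def hours(delta):
    return delta.total_seconds() / 3600

# Average Time to First Response: how long Reviewers take to weigh in.
ttfr = sum(hours(v["first_response"] - v["shared"]) for v in versions) / len(versions)

# Feedback Turnaround Time: how long Creatives take to act on that feedback.
ftt = sum(hours(v["revision_delivered"] - v["first_response"]) for v in versions) / len(versions)

print(f"Avg Time to First Response: {ttfr:.1f} h")  # 15.0 h
print(f"Avg Feedback Turnaround:    {ftt:.1f} h")   # 21.0 h
```

Comparing the two tells you which side of the handoff, Reviewers or Creatives, is holding the baton longer.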
Changes per Version: It might seem that the goal should be for Creatives to make Assets that are perfect on the first version. Arguably, a more realistic goal is for Assets to be approved by the SECOND version. In that vein, a healthy pattern is one where the number of changes requested decreases as Versions increase.
Number of Versions per Approved Asset: As with Changes per Version, a lower number is healthier for this metric. It is distinct from Changes per Version in that it captures potentially wasted versioning, such as feedback being provided on later versions that could have been addressed on earlier ones.
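Both version-level metrics fall out of one simple export: the number of changes requested on each version of each approved Asset. A sketch with invented asset names and counts:

```python
# Hypothetical change-request counts per version for two approved assets;
# each list entry is the number of changes requested on that version.
changes_by_asset = {
    "banner":  [7, 2, 0],    # approved on version 3, changes shrinking: healthy
    "landing": [5, 1, 3, 0], # late feedback on version 3 forced a version 4
}

for asset, changes in changes_by_asset.items():
    versions_to_approval = len(changes)        # Versions per Approved Asset
    avg_changes = sum(changes) / len(changes)  # Changes per Version
    print(f"{asset}: {versions_to_approval} versions, "
          f"{avg_changes:.2f} changes/version on average")

# Portfolio-wide Versions per Approved Asset.
avg_versions = sum(len(c) for c in changes_by_asset.values()) / len(changes_by_asset)
print(f"Average Versions per Approved Asset: {avg_versions:.1f}")  # 3.5
```

Note how "landing" shows the unhealthy signature described above: changes spike again on a later version, adding a whole extra round.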
Team & Individual Response Rates: Up until now, the metrics discussed have focused on the Review Cycle itself, breaking it down into its constituent parts. This metric steps back and focuses on the people themselves. Reviewers naturally fall into groups or teams (e.g. Legal, Marketing, Merchandising). Measuring the Response Rates of these groups helps identify bottlenecks: is Legal taking twice as long to provide feedback, and does the data show they are understaffed relative to other groups? By breaking teams down further, you may trace bottlenecks to individuals; you might find that Rich in Legal never looks at Version 1 (hoping other teams flag the required changes first), which usually results in feedback on later versions and extra revision rounds.
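Rolling a per-feedback-event log up to team and individual level is a one-liner with a grouping pass. The teams, names (including "Rich" from the example above), and hours below are all invented:

```python
from collections import defaultdict

# Hypothetical feedback log: (team, reviewer, hours_to_respond).
responses = [
    ("Legal", "Rich", 48.0), ("Legal", "Dana", 40.0),
    ("Marketing", "Ana", 12.0), ("Marketing", "Lee", 20.0),
    ("Merchandising", "Sam", 18.0),
]

team_hours = defaultdict(list)
reviewer_hours = defaultdict(list)
for team, reviewer, hrs in responses:
    team_hours[team].append(hrs)
    reviewer_hours[reviewer].append(hrs)

# Team-level view: which group is slowest to respond?
team_avg = {t: sum(h) / len(h) for t, h in team_hours.items()}
for team, avg in sorted(team_avg.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {avg:.1f} h average response")

# Individual-level view: who inside those teams drives the lag?
slowest = max(reviewer_hours,
              key=lambda r: sum(reviewer_hours[r]) / len(reviewer_hours[r]))
print(f"Slowest individual responder: {slowest}")  # Rich
```

Here Legal averages 44 hours against Marketing's 16, the kind of gap that prompts the staffing question posed above, and drilling into individuals surfaces where the delay concentrates.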
By using these metrics you can ask better questions, identify root causes, make the required changes, and achieve a more efficient Review and Approval cycle. That will save time, free up capacity and reduce your costs. In my next post I'll share some advanced review & approval metrics and talk about tying the metrics you care about to big goals that move the needle, and that your CMO and CFO will care about.