
Developing and implementing a monitoring framework

Implementing an effective monitoring system for your design code requires early planning, good engagement and cross-collaboration with various stakeholders. For more information, see ‘From policy to practice’ and ‘Bridging siloed working’.

Ideally, you should be thinking about how you will check that desired objectives are achieved in practice from the very start of the project. Thereafter, that thinking should be integrated into every aspect of your design code’s development.

Only monitor factors that support your vision, including progress against your design code objectives, the impact on wider corporate policies, and validation evidence. To do so, you must have baseline information against which to measure change. One key baseline measure is the percentage of planning applications, across different development types and scales, that are refused consent on design grounds. These refusals are time-consuming for both applicants and the local planning authority (LPA), so a reduction in refusals is likely to be one of the chief objectives of a design code. Knowing the rate of refusals before the design code is adopted will help you to quantify the extent to which the code improves the situation.
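As an illustration only (a minimal sketch in Python that assumes casework records with hypothetical field names, rather than any particular LPA system), the baseline could be tallied like this:

```python
from collections import defaultdict

# Hypothetical casework records; the field names are illustrative,
# not from any particular LPA system.
applications = [
    {"type": "householder", "refused": True,  "design_grounds": True},
    {"type": "householder", "refused": False, "design_grounds": False},
    {"type": "major residential", "refused": True,  "design_grounds": True},
    {"type": "major residential", "refused": False, "design_grounds": False},
]

totals = defaultdict(lambda: {"all": 0, "design_refusals": 0})
for app in applications:
    bucket = totals[app["type"]]
    bucket["all"] += 1
    if app["refused"] and app["design_grounds"]:
        bucket["design_refusals"] += 1

# Baseline: share of applications refused on design grounds, by type.
for dev_type, counts in totals.items():
    rate = 100 * counts["design_refusals"] / counts["all"]
    print(f"{dev_type}: {rate:.0f}% refused on design grounds")
```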

You must also have defined your desired outcomes in a way that can be measured either quantitatively or qualitatively.

Data that you can usefully capture quantitatively (i.e. in numbers) include not just refusal rates but many other measures, such as the number of people who use the code to make applications. If your code is web-based, you can also capture useful information automatically using web tools that record impressions, clicks, pageviews, unique visitors and bounce rates.
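If you want to analyse those figures offline, a short script can summarise an analytics export. This is a sketch only: the file name and column names below are hypothetical and will vary by product.

```python
import csv

# Hypothetical export from a web analytics tool; the real file and
# column names will vary by product.
with open("design_code_analytics.csv", newline="") as f:
    rows = list(csv.DictReader(f))

pageviews = sum(int(r["pageviews"]) for r in rows)
visitors = sum(int(r["unique_visitors"]) for r in rows)

print(f"Total pageviews of the code's pages: {pageviews}")
# Summing per-page unique visitors over-counts people who view
# several pages, so treat this figure as an upper bound.
print(f"Unique visitors (upper bound): {visitors}")
```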

Qualitative data relate to concepts that cannot be measured in numbers and so have to be generated from other sources. These include simple judgements or opinions (gathered through surveys, perhaps) about whether, for example, the design quality of the areas covered by your design code has improved, or whether your local authority’s vision for an area is being achieved.

It helps to set the most critical objectives for the success of your code as key performance indicators (KPIs). KPIs are measurable criteria that allow you to monitor the main objectives of your code and can be structured as a checklist used to steer the development, implementation and management of your design code.
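One way to make a KPI checklist workable in practice is to pair each KPI with a target and its latest measured value, so that the checklist can report which objectives are on track. The sketch below is illustrative only; the KPIs, targets and field names are invented.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float            # threshold the code aims to meet
    measured: float | None   # latest monitored value, if any
    higher_is_better: bool = True

    def met(self) -> bool | None:
        if self.measured is None:
            return None  # not yet monitored
        if self.higher_is_better:
            return self.measured >= self.target
        return self.measured <= self.target

# Invented example KPIs; a real set would come from the code's objectives.
kpis = [
    KPI("Applications made using the code (%)", target=75, measured=62),
    KPI("Refusals on design grounds (%)", target=10, measured=12,
        higher_is_better=False),
]

for kpi in kpis:
    status = {True: "met", False: "not met", None: "no data"}[kpi.met()]
    print(f"{kpi.name}: {status}")
```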

Another aspect to consider when developing a monitoring framework is how simply and easily design coding objectives can be converted into measurable criteria.

The Gedling Borough Council Pathfinder project demonstrated the importance of upfront analysis to understand business as usual. The scope and development of their design code was informed by an analysis of development management officers’ casework. This analysis revealed that, over the four years prior to producing a code, 77% of application refusals were made on design grounds. It provided baseline data against which progress can be monitored and helped to define the scope of the code, setting the agenda for reprioritising the design code’s objectives and, therefore, reconfiguring the draft that was current at the time.

This reality-check paid off. When the Gedling team’s development management officers tested the revised document, they were won over by how easily it could be incorporated into the application process and how user-friendly it was.

Measuring priority criteria

The task of monitoring the performance of design codes is made easier when the status of their requirements is mandatory (‘you must do something’) as opposed to preferred (‘you should do something’).

There are several reasons unrelated to monitoring why being clear in this way is helpful. Applicants know what they can or cannot do, a level of certainty that makes negotiations much more straightforward. Development management officers can also assess applications much more quickly, saving time and resource.

In terms of helping with monitoring, making as many requirements as possible mandatory allows them to be assessed through questions to which the answer is either ‘yes’ or ‘no’. These binary questions can then be compiled into a checklist that, once answered, allows you to see the extent to which an application meets the requirements of the design code. See, for example, the Bournemouth, Christchurch and Poole Council Pathfinder team’s ‘compliance checklist’ on page 50.
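A minimal sketch of that aggregation step is shown below; the requirement names are invented, and a real checklist would mirror the mandatory requirements of your own code.

```python
# Invented mandatory requirements; a real checklist would mirror the
# design code's own 'musts'.
REQUIREMENTS = [
    "Active frontage to the street",
    "Cycle parking provided",
    "Tree planting meets the minimum standard",
]

def compliance(answers: dict[str, bool]) -> float:
    """Share of mandatory requirements met, as a percentage."""
    met = sum(answers[r] for r in REQUIREMENTS)
    return 100 * met / len(REQUIREMENTS)

application = {
    "Active frontage to the street": True,
    "Cycle parking provided": True,
    "Tree planting meets the minimum standard": False,
}
print(f"Compliance: {compliance(application):.0f}%")  # Compliance: 67%
```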

It is a very easy step from there to aggregate the yes/no answers in a monitoring system and use it to track improvements in the numbers of refusals on design grounds – one of the most important measures of a code’s success.

This is exactly what the Lake District National Park Authority Pathfinder team has done. Using software called ‘Swift’ – already used by officers to feed data into the annual monitoring report related to the local plan – the team were able to monitor the numbers of refusals since the code was adopted. The monitoring system told the LPA that, of the 42 applications they received in the first quarter of 2024 where the code had been applied, only five did not comply – and were refused. Having this monitoring framework in place allowed the team to track and quantify changes resulting from the adoption of the design code.
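Put as a simple calculation, that is 37 of the 42 coded applications complying (roughly 88%) and five refused (roughly 12%) – a quarterly figure that can now be tracked over time.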
