Office for Standards in Education (Ofsted)

Commentary on curriculum research - phase 3

Amanda Spielman provides a commentary on phase 3 of Ofsted's research into the school curriculum.

In January, I will consult on our new education inspection framework (EIF). As I have already announced, the heart of our proposals will be to refocus inspections on the quality of education, including curriculum intent, implementation and impact.

To ensure that inspection of the quality of education is valid and reliable, I commissioned a major, 2-year research study into the curriculum. I would like to thank the school leaders and teachers who have contributed to this work. We visited 40 schools in phase 1, 23 schools in phase 2 and now 64 schools in phase 3. When you add the focus groups, reviews of inspection reports and other methods, it’s clear that this is a significant study and we can be confident in its conclusions.

To recap, in phase 1 of the research, we attempted to understand more about the current state of curricular thinking in schools. We found that many schools were teaching to the test and teaching a narrowed curriculum in pursuit of league table outcomes, rather than thinking about the careful sequencing of a broad range of knowledge and skills. This was disappointing but unsurprising. We have accepted that inspection itself is in part to blame. It has played too great a role in intensifying the focus on performance data rather than complementing it.

Having found that some schools lacked strong curricular thinking, phase 2 sought to look at the opposite – those schools that had invested in curriculum design and aimed to raise standards through the curriculum. Although we went to schools that had very different approaches to the curriculum, we found some common factors that appear to be related to curriculum quality:

  • the importance of subjects as individual disciplines
  • using the curriculum to address disadvantage and provide equality of opportunity
  • regular curriculum review
  • using the curriculum as the progression model
  • intelligent use of assessment to inform curriculum design
  • retrieval of core knowledge baked into the curriculum
  • distributed curriculum leadership

In phase 3, which is the subject of this commentary, we wanted to find out how we might inspect aspects of curriculum quality, including whether the factors above can apply across a much broader range of schools.

We also wanted to move beyond just looking at curriculum intent to looking at how schools implemented that thinking and what outcomes it led to. There has been some debate since we published my commentary on phase 2 about whether this would lead to an Ofsted-approved curriculum model. However, to reiterate, there will be no ‘Ofsted curriculum’. We will recognise a range of different approaches.

Phase 3 of our curriculum research shows that inspectors, school leaders and teachers from across a broad range of schools can indeed have professional, in-depth conversations about curriculum intent and implementation. Crucially, the evidence also shows that inspectors were able to make valid assessments of the quality of curriculum that a school is providing. Both parties could see the distinction between intent and implementation, and inspectors could see differences in curriculum quality between schools and also between subject departments within schools.

Importantly, what we also found was that schools can produce equally strong curricula regardless of the level of deprivation in their communities, which suggests that our new approach could be fairer to schools in disadvantaged areas. This is distinctly encouraging as we move towards the new inspection framework. You can read the full findings of this research study. I have summarised the research design and main findings below.

Curriculum study – phase 3

In phase 3, we wanted to design a model of curriculum assessment that could be used across all schools and test it to see whether it produced valid and reliable results. Based on the phase 2 findings, discussions with expert HMI and our review of the academic literature, we came up with several hypotheses (detailed in the full report) and 25 indicators of curriculum quality to test (detailed at the end of this commentary). These indicators will not be directly translated into the new inspection framework. First, they were only tested in schools, not early years provision or further education and skills providers. Second, 25 indicators is too many for inspectors to use on an inspection, especially given the short timescales of modern inspection practice. What we were aiming to do was first to prove the concept (i.e. that it is possible to make valid and reliable assessments of quality) and second, to find out which types of indicators did that most clearly.

The 25 indicators were underpinned by a structured and systematic set of instructions for inspectors about how to use them for the research. Using conversations with senior leaders and subject leaders and collecting first-hand evidence of implementation, inspectors were able to make focused assessments of schools against each of the indicators. Inspectors used a 5-point scale, where 5 was the highest, to help distance their thinking from the usual Ofsted grades. The full descriptors are at the end of this commentary, but by way of illustration:

  • a score of 5 means ‘this aspect of curriculum underpins/is central to the school’s work/embedded practice/may include examples of exceptional curriculum’
  • a score of 1 means ‘this aspect is absent in curriculum design’

Within each school, inspectors looked at 4 different subjects: 1 core and 3 foundation. This allowed us to look at the level of consistency within each school, but also to find out more broadly which subjects, if any, had more advanced curricular thinking behind them. Inspectors also gave each school an overall banding, again from 5 to 1.

This gave us 71 data points for each school, based on all the evidence gathered. While this approach would not be suitable for an inspection, what it allowed us to do was to carry out statistical analyses to look at the validity of our research model and to refine and narrow the indicators to those that more clearly explained curriculum quality.
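To illustrate the kind of within-school analysis this scoring design allows, the sketch below uses invented indicator scores on the 5-point scale (the real dataset, and the exact breakdown of the 71 data points per school, are in the full report, not here). It computes each subject's mean score, the school-wide mean, and the spread between subjects, the sort of consistency check described above. All subject names and values are hypothetical.

```python
from statistics import mean, pstdev

# Hypothetical indicator scores on the 5-point scale (5 = highest),
# a few per subject for one school: 1 core and 3 foundation subjects.
# These values are invented for illustration only.
school_scores = {
    "English":   [4, 5, 4, 3],
    "history":   [3, 3, 4, 3],
    "geography": [4, 4, 5, 4],
    "music":     [2, 3, 3, 2],
}

# Mean score per subject, to compare curricular thinking across subjects.
subject_means = {subj: mean(scores) for subj, scores in school_scores.items()}

# School-wide mean and the spread between subjects: a low spread
# suggests consistent curriculum quality across the school.
overall_mean = mean(subject_means.values())
between_subject_spread = pstdev(subject_means.values())

print(subject_means)
print(round(overall_mean, 2), round(between_subject_spread, 2))
```

With these invented scores, music scores noticeably below the other three subjects, which is exactly the kind of between-department difference inspectors reported being able to see.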

We visited 33 primary schools, 29 secondaries and 2 special schools. The sample was balanced in order to test the validity of our curriculum research model across a range of differing school contexts. The main selection criteria were: previous inspection judgements (outstanding, good and requires improvement (RI) only), geographical location (Ofsted regions) and school type (local authority (LA) maintained/academies), although we over-sampled for secondary schools and schools that were judged outstanding or RI at their last routine inspection. We ensured a wide spread in terms of performance data. Importantly, we also took care to select a range of institutions across an area-based index of deprivation. This meant that we had roughly equal numbers of schools in more and less deprived areas.
