Misuse of Building Performance Simulation

A rethink is needed of how building data is modelled and of the purposes for which simulation is used. Is it better to use models for design decisions than for validating compliance?

Michael Donn (Victoria University Wellington) asks: What are appropriate roles and uses for building performance models? What would be better goals and uses for models and the data they generate?

If it is true that ‘All models are wrong, but some are useful’, can building performance models be trusted? Most users seem to agree that these models reliably show the scale of differences between design options. But most modellers also agree that they do not predict actual performance. If so, are these models really suitable for compliance?

The author of the above aphorism, George Box, wrote:

‘Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. On the contrary following William of Occam he [sic] should seek an economical description of natural phenomena.’ (Box 1976)

In developing modern building performance simulation tools, this principle seems to have been forgotten. Is our angst about the ‘gap’ between reality and the ‘predictions’ of our lighting, thermal, acoustic and air quality performance simulation models evidence of a misconception about the role of models?

What are simulation models for?

Simulation models are constantly being elaborated in an attempt to ensure they are fit for purpose, so revisiting Box’s idea is increasingly important. For example, some codes require virtual light level “measurements” in daylit buildings at 300 mm centres. For a 10 × 10 m building this entails roughly 900 sensor calculation points – an unnecessary precision. A 50 × 60 m plan entails 33,000+ separate calculations. Ridiculous.
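The scale of this calculation burden is easy to sanity-check. The sketch below is a minimal Python illustration, not from the original text; the exact count depends on the grid spacing and on whatever wall offset a code assumes, but the order of magnitude does not.

```python
import math

def sensor_points(length_m: float, width_m: float,
                  spacing_m: float = 0.3, wall_offset_m: float = 0.0) -> int:
    """Count calculation points on a rectangular sensor grid.

    Points are laid out at `spacing_m` centres, optionally inset from
    every wall by `wall_offset_m`.
    """
    nx = math.floor((length_m - 2 * wall_offset_m) / spacing_m) + 1
    ny = math.floor((width_m - 2 * wall_offset_m) / spacing_m) + 1
    return nx * ny

# A 10 x 10 m room at 300 mm centres: on the order of a thousand points.
print(sensor_points(10, 10))
# A 50 x 60 m plan: well over 33,000 separate calculations.
print(sensor_points(50, 60))
```

Whether the small room works out at 900 or 1,100 points depends on how the grid aligns with the walls; the order of magnitude, which is the article’s point, is unaffected.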

Although it is possible to model complex networks of interconnecting rooms to examine natural ventilation flow via openable windows and internal doors, modellers seldom know the wind pressures driving these flows. Some certification systems persist in using single-zone thermal models for compliance-checking, at great risk to the overall health of the future occupants. Openable windows are designed for cooling in complete isolation from users’ safety, privacy, insect or acoustic concerns. These are merely the tip of a vast iceberg of varying modelling approaches that represent differing world views as to what is important as a measure of performance, or the level of detail to model. The list of elaborations published annually is endless.

Box (1976) also wrote:

‘Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad.’

As a statistician, Box identified the key issue with the emphasis on the ‘accuracy’ of performance simulation tools. Building performance models have been elaborated over decades of coding, linking together the basic experimental physics and the physiology of ‘comfort’ with differing degrees of precision. In the process modellers exclude, for example, any allowance for cognition of inhabitants’ ‘comfort’ goals. Analysis of what is important in this elaboration appears to elude both the code writer and the code user. Ignoring cognition in our models provides a plausible explanation of the performance gap.

If Occam’s dictum to seek economical models is followed, then the question arises: what is an appropriate level of simplification? As models are simplifications of reality, what simplifications are acceptable? Since the 1980s, many papers have demonstrated that the same calculation model in different hands can produce wildly different answers. Indeed, two different tools in the hands of the same person can also produce vastly different answers. What are the key or ‘killer’ variables (Leaman and Bordass 1999) for achieving high (energy / carbon / visual acuity) performance? Also, for compliance, certifiers are now focused on the reliability of the actual number generated, not on what modellers are confident of: the comparative performance of two options in design analysis. Can we develop a minimum modelling capability, and a set of agreed ways of using a model, to enable it to be used for code compliance checking against a defined target?

Are assumptions and inputs reliable?

In response to these issues, since the late 1990s, building performance simulation papers have referenced ‘quality assurance’ regularly. But what does this involve, and are the assumptions and inputs reliable?

  • High quality input data?

Is this REALLY the product specified, or is it a generic default value in the model, or a standard? Can a means be developed to assure compliance checkers that the ‘tiger’ / ‘killer’ variables in the model are reliable values?

  • Experienced modellers?

How can their models be cross-checked by their colleagues when there is a myriad of inputs? Who has not looked at model outputs and stated ‘What must be happening here is …’ with no proof? Can a means of automating models be developed that allows modelling of the richness of human habitation, such as opening windows for cooling only when it is not noisy?

  • Reliable climate data?

Local climate issues can affect energy performance by 20-30%: urban heat islands, wind speed variations with height and between urban buildings, natural atmospheric temperature changes with height, microclimatic conditions. These are not routinely modelled in design, and are often excluded in compliance calculations. Can a means of automating models be developed to encourage appropriate modelling of local climate, height and wind?

  • Plausible models of human behaviour?

There is a risk of creating inappropriate models of human behaviour. For example, the heating turns on below 17.5°C and off above 18.5°C; or modelled windows that open for cooling stay open for a full hour even when it is cold outside, significantly increasing the heating energy whilst reducing the cooling. Or blinds are operated against glare with similar precision. A plausible level of user involvement can be reliably modelled during design so that performance options can be discussed with clients. This is the significant advantage of moisture models that examine mould growth on surfaces over a 3-year hourly modelling cycle; of spring/autumn overheating risks examined over a 12-month hourly cycle; of summer overheating risk assessed against plausible hourly data for Design Summer Years; or of glare risks evaluated for particular fractions of the ~4000 hours of daylight. However, encouraging this depth of analysis in a manner that also avoids gaming the system, and that can be checked readily for compliance against targets, requires work on both the targets and the modelling processes.
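The clockwork thermostat described above is easy to state as code. This is an illustrative sketch only (the thresholds come from the example in the text; the step-through temperatures are invented), showing the deadband, or hysteresis, behaviour such a deterministic rule produces:

```python
def heating_on(temp_c: float, was_on: bool,
               on_below: float = 17.5, off_above: float = 18.5) -> bool:
    """Deadband thermostat: on below 17.5 degC, off above 18.5 degC.

    Between the two thresholds the previous state is held, so the
    modelled heating toggles with unrealistic, clockwork precision.
    """
    if temp_c < on_below:
        return True
    if temp_c > off_above:
        return False
    return was_on  # inside the deadband: hold the previous state

# Step through a few hourly temperatures and record the heating state.
state = False
for temp in [19.0, 18.0, 17.0, 18.0, 19.0]:
    state = heating_on(temp, state)
    print(f"{temp:4.1f} degC -> heating {'on' if state else 'off'}")
```

No real occupant behaves with this precision, which is exactly the article’s complaint.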

Rethinking the uses of simulation

Dynamic simulation is currently misused. In energy performance, 8760 hourly energy balance calculations are created, but only a single annual energy performance index is reported for compliance. The analysis does not examine the risk of mistaken assumptions. For example: if a heat recovery ventilation system fan is too noisy and the householder disables it, what is the health risk in an airtight house? In climate-based daylight (dynamic) modelling, 4000+ hourly radiation balance calculations are generated, but a single performance index is reported. In design, this type of behaviour is unforgivable. Rich time-based data enables discussion about the quality of the lived experience. This is, and should be, the focus of design discussions. However, for run-of-the-mill designs, models are used for compliance checking, not design analysis.
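The information lost in that reduction can be illustrated. The sketch below uses an entirely synthetic year of hourly indoor temperatures (a hypothetical sinusoidal profile, not output from any real model) to show how a single annual index hides the overheated hours a designer would want to discuss:

```python
import math

HOURS_PER_YEAR = 8760

# Synthetic hourly indoor temperatures: a seasonal swing plus a daily swing.
temps = [
    20.0
    + 8.0 * math.sin(2 * math.pi * h / HOURS_PER_YEAR)  # seasonal cycle
    + 5.0 * math.sin(2 * math.pi * h / 24)              # diurnal cycle
    for h in range(HOURS_PER_YEAR)
]

# Compliance-style reporting: one number for the whole year.
annual_mean = sum(temps) / len(temps)

# Design-style reporting: when, and for how long, does the space overheat?
overheated_hours = sum(1 for t in temps if t > 28.0)

print(f"annual mean: {annual_mean:.1f} degC")
print(f"hours above 28 degC: {overheated_hours}")
```

The annual mean sits comfortably at 20°C while hundreds of summer afternoon hours exceed 28°C: the single index and the lived experience tell different stories.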

For compliance checking, far more sophisticated performance criteria are needed than the current practice of simplistic single-number targets. These targets should comprise at least a best and a worst-case operational scenario. Even energy efficiency reporting for cars provides different figures for town and country driving conditions. We also need models that are constrained to:

  • report the target compliance while identifying the ‘killer’ variables;
  • reduce the potential for gaming the compliance system by controlling the range and type of inputs;
  • incorporate an input data reliability score.
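One possible shape for such a constrained compliance output, sketched here as a hypothetical Python data structure (the field names and thresholds are assumptions for illustration, not an existing standard or tool):

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceReport:
    """Hypothetical compliance output reflecting the three constraints above."""
    target: float                  # e.g. kWh/m2.yr the design must meet
    best_case: float               # most favourable operational scenario
    worst_case: float              # least favourable operational scenario
    killer_variables: dict = field(default_factory=dict)  # variable -> value used
    input_reliability: float = 0.0  # 0 (all generic defaults) .. 1 (all verified)

    def complies(self) -> bool:
        # Compliance should hold even under the worst-case scenario.
        return self.worst_case <= self.target

report = ComplianceReport(
    target=75.0, best_case=55.0, worst_case=82.0,
    killer_variables={"infiltration_ach": 0.6, "window_g_value": 0.4},
    input_reliability=0.7,
)
print(report.complies())  # the worst-case scenario exceeds the target
```

Reporting best and worst cases together, like the town and country figures for cars, makes the range of likely outcomes visible instead of collapsing it into one number.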

References

Box, G.E.P. (1976). Science and statistics. Journal of the American Statistical Association, 71 (356): 791–799. https://doi.org/10.1080/01621459.1976.10480949

Leaman, A. & Bordass, B. (1999). Productivity in buildings: the ‘killer’ variables. Building Research & Information, 27 (1): 4–19.
