Misuse of Building Performance Simulation

A rethink is called for in how building data is modelled and in the purposes simulation is used for. Is it better to use models for design decisions than for validating compliance?

Michael Donn (Victoria University of Wellington) asks: What are appropriate roles and uses for building performance models? What would be better goals and uses for models and the data they generate?

If it is true that ‘All models are wrong, but some are useful’, can building performance models be trusted? Most users seem to agree that these models reliably show the scale of differences between design options. But most modellers also agree that they do not predict actual performance. In that case, are these models really suitable for compliance?

The author of the above aphorism, George Box, wrote:

‘Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. On the contrary following William of Occam he [sic] should seek an economical description of natural phenomena.’ (Box 1976)

In developing modern building performance simulation tools, this principle seems to have been forgotten. Is our angst about the ‘gap’ between reality and the ‘predictions’ of our lighting, thermal, acoustic and air quality performance simulation models evidence of a misconception about the role of models?

What are simulation models for?

Simulation models are constantly being elaborated in an attempt to ensure they are fit for purpose, so revisiting Box’s idea is of increasing importance. For example, some codes require virtual light level “measurements” in daylit buildings at 300 mm centres. For a 10 × 10 m building this entails some 900 sensor calculation points – a needless level of precision. A 50 × 60 m floor plate entails 33,000+ separate calculations. Ridiculous.
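The arithmetic is easy to check. A minimal sketch, assuming a regular grid inset 500 mm from each wall (the offset is a common gridding convention, not part of the cited rule, and is assumed here only to reproduce the counts quoted):

```python
import math

def sensor_count(width_m, depth_m, spacing_m=0.3, offset_m=0.5):
    """Calculation points on a regular grid inset offset_m from each wall."""
    nx = math.floor((width_m - 2 * offset_m) / spacing_m) + 1
    ny = math.floor((depth_m - 2 * offset_m) / spacing_m) + 1
    return nx * ny

print(sensor_count(10, 10))   # 961 points: the ~900 quoted above
print(sensor_count(50, 60))   # 32,308 points: the 33,000+ quoted above
```

The count grows with the square of the plan dimension, so the precision demanded of a small room balloons into tens of thousands of calculations for a modest office floor.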

Although it is possible to model complex networks of interconnecting rooms to examine natural ventilation flow via openable windows and internal doors, modellers seldom know the wind pressures driving these flows. Some certification systems persist in using single-zone thermal models for compliance checking, at great risk to the health of future occupants. Openable windows are designed for cooling in complete isolation from users’ safety, privacy, insect or acoustic concerns. These are merely the tip of a vast iceberg of modelling approaches that represent differing world views about what matters as a measure of performance, or about the level of detail to model. The list of elaborations published annually is endless.

Box (1976) also wrote:

‘Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad.’

As a statistician, Box identified the key issue with the emphasis on the ‘accuracy’ of performance simulation tools. Building performance models have been elaborated over decades of coding, linking the basic experimental physics and the physiology of ‘comfort’ with differing degrees of precision. In the process, modellers exclude, for example, any allowance for inhabitants’ cognition of their ‘comfort’ goals. Analysis of what is important in this elaboration appears to elude both the code writer and the code user. Ignoring cognition in our models provides a plausible explanation of the performance gap.

If Occam’s dictum to seek economical models is followed, the question arises: what is an appropriate level of simplification? As models are simplifications of reality, which simplifications are acceptable? Since the 1980s, many papers have demonstrated that the same calculation model in different hands can produce wildly different answers. Indeed, two different tools in the hands of the same person can also produce vastly different answers. What are the key or ‘killer’ variables (Leaman and Bordass 1999) for achieving high (energy / carbon / visual acuity) performance? Also, for compliance, certifiers now focus on the reliability of the actual number generated, not on what modellers are sure of: the comparative performance of two options in design analysis. Can we develop a minimum modelling capability, and a set of agreed ways of using a model, to enable it to be used for code compliance checking against a defined target?

Are assumptions and inputs reliable?

In response to these issues, building performance simulation papers have referenced ‘quality assurance’ regularly since the late 1990s. But what does this involve, and are the assumptions and inputs reliable?

  • High quality input data?

Is this REALLY the product specified, or is it a generic default value in the model, or a value from a standard? Can a means be developed to assure compliance checkers that the ‘tiger’ / ‘killer’ variables in the model have reliable values?

  • Experienced modellers?

How can their models be cross-checked by colleagues when there are myriad inputs? Who has not looked at model outputs and stated ‘What must be happening here is …’ with no proof? Can a means of automating models be developed that allows modelling the richness of human habitation, such as opening windows for cooling only when it is not noisy?

  • Reliable climate data?

Local climate issues can affect energy performance by 20–30%: urban heat islands, wind speed variations with height and between urban buildings, natural atmospheric temperature changes with height, and other microclimatic conditions. These are not routinely modelled in design, and are often excluded from compliance calculations. Can a means of automating models be developed to encourage appropriate modelling of local climate, height and wind?
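As one illustration, consider just the wind speed correction. A minimal sketch, using the standard ASHRAE-style power-law boundary layer profile to translate a 10 m met-station reading in open country to a window height in a large city (the exponents and layer depths are the usual textbook values, not project data):

```python
def wind_at_height(u_met, z, z_met=10.0,
                   a_met=0.14, d_met=270.0,   # open-country profile at met station
                   a=0.33, d=460.0):          # large-city urban profile
    """Estimate wind speed (m/s) at height z (m) above urban terrain
    from a met-station reading taken at z_met in open country."""
    return u_met * (d_met / z_met) ** a_met * (z / d) ** a

# A 4 m/s airport reading becomes ~1.7 m/s at a 9 m high urban window:
print(round(wind_at_height(4.0, 9.0), 1))
```

A natural ventilation model fed the raw airport wind speed would overstate the driving pressure at that window several-fold, since pressure scales with the square of the speed.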

  • Plausible models of human behaviour?

There is a risk of creating implausible models of human behaviour. For example, heating that turns on below 17.5°C and off above 18.5°C; or modelled windows, opened for cooling, that stay open for a full hour even when it is cold outside. This would significantly increase the energy for heating whilst reducing the cooling. Or blinds are operated against glare with similar precision. A plausible level of user involvement can be reliably modelled during design so that performance options can be discussed with clients. This is the significant advantage of moisture models that examine mould growth on surfaces over a 3-year hourly modelling cycle; of spring/autumn overheating risks over a 12-month hourly cycle; of the risk of summer overheating using plausible hourly data from Design Summer Years; or of glare risks for particular fractions of the ~4,000 hours of daylight. However, encouraging this depth of analysis in a manner that also avoids gaming the system, and that can be checked readily for compliance against targets, requires work on both the targets and the modelling processes.
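To make the point concrete, here is a minimal sketch of the implausibly precise behaviour described above; the setpoints are those in the text, and the 25°C window trigger is hypothetical:

```python
def heating_on(indoor_c, was_on):
    """Deadband thermostat: on below 17.5 degC, off above 18.5 degC."""
    if indoor_c < 17.5:
        return True
    if indoor_c > 18.5:
        return False
    return was_on  # inside the deadband, hold the previous state

def window_open(opened_this_hour, indoor_c, cooling_trigger_c=25.0):
    """Once opened for cooling, the window stays open for the whole hourly
    timestep, even if the outdoor air turns cold: exactly the assumption
    that inflates heating energy while flattering the cooling result."""
    return opened_this_hour or indoor_c > cooling_trigger_c
```

No occupant behaves with this precision: real people close a window when they feel cold, and that feedback is precisely what such models omit.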

Rethinking the uses of simulation

Dynamic simulation is currently misused. In energy performance, 8760 hourly energy balance calculations are performed, but only a single annual energy performance index is reported for compliance. The analysis does not examine the risk of mistaken assumptions. For example: if a heat recovery ventilation system fan is too noisy and the householder disables it, what is the health risk in an airtight house? In climate-based daylight (dynamic) modelling, 4,000+ hourly radiation balance calculations are generated, but a single performance index is reported. In design, this type of behaviour is unforgivable. Rich time-based data enables discussion about the quality of the lived experience. This is, and should be, the focus of design discussions. However, for run-of-the-mill designs, models are used for compliance checking, not design analysis.
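The hourly record already exists in every such run; only the reporting discards it. A minimal sketch of richer reporting from the same 8760 values (the variable names and the 28°C overheating threshold are hypothetical):

```python
def performance_report(hourly_kwh, hourly_temp_c):
    """Summarise a year of hourly simulation output, instead of
    collapsing it to the single annual index used for compliance."""
    return {
        "annual_kwh": sum(hourly_kwh),             # the usual single number
        "peak_hour_kwh": max(hourly_kwh),          # plant sizing / grid stress
        "hours_over_28C": sum(t > 28.0 for t in hourly_temp_c),  # overheating
        "worst_week_kwh": max(                     # seasonal worst case
            sum(hourly_kwh[i:i + 168]) for i in range(0, 8760 - 167, 168)
        ),
    }
```

Nothing here requires a new simulation: it is a change in what is reported, not in what is calculated.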

For compliance checking, far more sophisticated performance criteria are needed than the current practice of simplistic single-number targets. These targets should comprise at least a best-case and a worst-case operational scenario. Even energy efficiency reporting for cars provides different figures for town and country driving conditions. We also need models that are constrained to:

  • report the target compliance while identifying the ‘killer’ variables;
  • reduce the potential for gaming the compliance system by controlling the range and type of inputs;
  • incorporate an input data reliability score (a minimal sketch of such a score follows below).
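The last of these could be as simple as a declared provenance for each input. A minimal sketch, with entirely hypothetical categories and weights:

```python
# Provenance classes, best to worst; the weights are illustrative only.
RELIABILITY = {
    "measured": 1.0,       # tested data for the installed product
    "manufacturer": 0.8,   # datasheet value for the specified product
    "standard": 0.5,       # value taken from a code or standard
    "default": 0.2,        # tool default, never checked
}

def reliability_score(inputs):
    """inputs maps each model input to its provenance class; returns 0..1."""
    weights = [RELIABILITY[source] for source in inputs.values()]
    return sum(weights) / len(weights)

print(round(reliability_score({
    "wall_u_value": "manufacturer",
    "airtightness": "default",
    "glazing_g_value": "standard",
}), 2))  # 0.5: flags how much of the model rests on unchecked defaults
```

A certifier seeing a low score would know that the single compliance number rests on defaults, not on the building as specified.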

References

Box, G.E.P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.1080/01621459.1976.10480949

Leaman, A. & Bordass, B. (1999). Productivity in buildings: the ‘killer’ variables. Building Research & Information, 27(1), 4–19.
