Misuse of Building Performance Simulation

A rethink is needed of how building data is modelled and of the purposes for which simulation is used. Would models be better used for design decisions than for validating compliance?

Michael Donn (Victoria University of Wellington) asks: What are appropriate roles and uses for building performance models? What would be better goals and uses for models and the data they generate?

If it is true that ‘All models are wrong, but some are useful’, can building performance models be trusted? Most users seem to agree that these models reliably show the scale of differences between design options. But most modellers also agree that they do not predict actual performance. If so, are these models really suitable for compliance?

The author of the above aphorism, George Box, wrote:

‘Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration. On the contrary following William of Occam he [sic] should seek an economical description of natural phenomena.’ (Box 1976)

In developing modern tools of building performance simulation, this principle seems to have been forgotten. Is our angst about the ‘gap’ between reality and the ‘predictions’ of our lighting, thermal, acoustic and air quality performance simulation models evidence of a misconception about the role of models?

What are simulation models for?

Simulation models are constantly being elaborated in an attempt to ensure they are fit for purpose, so revisiting Box’s idea is of increasing importance. For example, some codes require virtual light level ‘measurements’ in daylit buildings at 300 mm centres. For a 10 x 10 m building this entails around 1,100 sensor calculation points – an unnecessary precision. A 50 x 60 m plan entails 33,000+ separate calculations. Ridiculous.
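
As a back-of-envelope check, here is a minimal sketch in Python (the `sensor_count` helper and its edge handling are illustrative assumptions; each tool grids a floor plate slightly differently):

```python
# Back-of-envelope count of calculation points for a daylight
# "measurement" grid at fixed centres. Edge handling is simplified;
# real tools differ in the exact count.

def sensor_count(width_m: float, depth_m: float, spacing_m: float = 0.3) -> int:
    """Number of grid calculation points on a rectangular floor plate."""
    per_width = round(width_m / spacing_m)
    per_depth = round(depth_m / spacing_m)
    return per_width * per_depth

print(sensor_count(10, 10))   # 1089 points for a 10 x 10 m plan (~1,100)
print(sensor_count(50, 60))   # 33400 points for a 50 x 60 m plan (33,000+)
```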

Although it is possible to model complex networks of interconnecting rooms to examine natural ventilation flow via openable windows and internal doors, modellers seldom know the wind pressures driving these flows. Some certification systems persist in using single-zone thermal models for compliance checking, at great risk to the health of future occupants. Openable windows are designed for cooling in complete isolation from users’ safety, privacy, insect or acoustic concerns. These are merely the tip of a vast iceberg of varying modelling approaches, each representing a different world view of what matters as a measure of performance or what level of detail to model. The list of elaborations published annually is endless.

Box (1976) also wrote:

‘Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad.’

As a statistician, Box identified the key issue with the emphasis on the ‘accuracy’ of performance simulation tools. Building performance models have been elaborated over decades of coding, linking the basic experimental physics and the physiology of ‘comfort’ with differing degrees of precision. In the process, modellers exclude, for example, any allowance for inhabitants’ cognition of their own ‘comfort’ goals. Analysis of what is important in this elaboration appears to elude both the code writer and the code user. Ignoring cognition in our models is a plausible explanation of the performance gap.

If Occam’s dictum to seek economical models is followed, then the question arises: what is an appropriate level of simplification? As models are simplifications of reality, which simplifications are acceptable? Since the 1980s, many papers have demonstrated that the same calculation model in different hands can produce wildly different answers. Indeed, two different tools in the hands of the same person can also produce vastly different answers. What are the key or ‘killer’ variables (Leaman and Bordass 1999) for achieving high (energy / carbon / visual acuity) performance? Also, for compliance, certifiers are now focused on the reliability of the actual number generated, not on what modellers are sure of: the comparative performance of two design options. Can we develop a minimum modelling capability, and a set of agreed ways of using a model, to enable it to be used for code compliance checking against a defined target?

Are assumptions and inputs reliable?

In response to these issues, building performance simulation papers have referenced ‘quality assurance’ regularly since the late 1990s. But what does this involve, and are the assumptions and inputs reliable?

  • High quality input data?

Is this REALLY the product specified, or is it a generic default value in the model, or a value taken from a standard? Can a means be developed to assure compliance checkers that the ‘tiger’ / ‘killer’ variables in the model are reliable values?

  • Experienced modellers?

How can their models be cross-checked by colleagues when there are myriad inputs? Who has not looked at model outputs and stated ‘What must be happening here is …’ with no proof? Can a means of automating models be developed that allows modelling of the richness of human habitation, such as opening windows for cooling only when it is not noisy?
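
One possible form of such a rule, as a minimal Python sketch (the thresholds and hourly record fields are illustrative assumptions, not values from any tool or standard):

```python
# Sketch of an automatable occupant rule: open windows for cooling only
# when it is warm inside, cooler outside, and quiet enough outdoors.
# All threshold values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Hour:
    indoor_temp_c: float
    outdoor_temp_c: float
    outdoor_noise_dba: float

def window_open(h: Hour,
                cooling_setpoint_c: float = 24.0,
                noise_limit_dba: float = 55.0) -> bool:
    """Open only if cooling is useful AND the acoustic context allows it."""
    wants_cooling = h.indoor_temp_c > cooling_setpoint_c
    outdoor_is_cooler = h.outdoor_temp_c < h.indoor_temp_c
    quiet_enough = h.outdoor_noise_dba < noise_limit_dba
    return wants_cooling and outdoor_is_cooler and quiet_enough

print(window_open(Hour(26.0, 21.0, 48.0)))  # True: warm, cooler out, quiet
print(window_open(Hour(26.0, 21.0, 70.0)))  # False: too noisy to open
```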

  • Reliable climate data?

Local climate issues can affect energy performance by 20-30%: urban heat islands, wind speed variations with height and between urban buildings, natural atmospheric temperature changes with height, microclimatic conditions. These are not routinely modelled in design, and are often excluded in compliance calculations. Can a means of automating models be developed to encourage appropriate modelling of local climate, height and wind?
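
Two of these corrections are easily sketched. The power-law wind profile and lapse-rate adjustment below are minimal illustrations (the exponent and rate are indicative textbook values, not calibrated for any site):

```python
# Sketch of two routine local-climate corrections often omitted from
# compliance models: wind speed scaled from a 10 m met mast to the
# height of an opening, and air temperature adjusted for height.

def wind_at_height(v_met: float, z: float,
                   z_met: float = 10.0, alpha: float = 0.33) -> float:
    """Power-law wind profile; alpha ~0.33 is an indicative urban value."""
    return v_met * (z / z_met) ** alpha

def temp_at_height(t_ground_c: float, z: float,
                   lapse_k_per_m: float = 0.0065) -> float:
    """Air temperature falls roughly 6.5 K per km of altitude."""
    return t_ground_c - lapse_k_per_m * z

print(f"{wind_at_height(4.0, z=100.0):.1f} m/s")    # 8.6 m/s under this profile
print(f"{temp_at_height(20.0, z=100.0):.2f} degC")  # 19.35 degC, 0.65 K cooler
```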

  • Plausible models of human behaviour?

There is a risk of creating implausible models of human behaviour: for example, heating that turns on below 17.5 °C and off above 18.5 °C, or modelled windows opened for cooling that stay open for a full hour even when it is cold outside. The latter would significantly increase the energy used for heating whilst reducing the cooling. Or blinds are operated against glare with similar precision. A plausible level of user involvement can be reliably modelled during design so that performance options can be discussed with clients. This is the significant advantage of moisture models that examine mould growth on surfaces over a 3-year hourly modelling cycle; of spring/autumn overheating risks examined over a 12-month hourly cycle; of summer overheating risk assessed against plausible hourly data from Design Summer Years; or of glare risks evaluated for particular fractions of the ~4,000 hours of daylight. However, encouraging this depth of analysis in a manner that also avoids gaming the system, and that can be checked readily for compliance against targets, requires work on both the targets and the modelling processes.
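
The crisp deadband rule described above is easy to write and just as easy to mistake for realism. A minimal Python sketch (the hour-by-hour trace is illustrative):

```python
# Deadband (hysteresis) heating control exactly as described: on below
# 17.5 degC, off above 18.5 degC, state persisting in between. Real
# occupants do not act with this precision; the sketch shows how easily
# such exactness creeps into a model.

def heating_on(indoor_temp_c: float, was_on: bool,
               on_below_c: float = 17.5, off_above_c: float = 18.5) -> bool:
    if indoor_temp_c < on_below_c:
        return True
    if indoor_temp_c > off_above_c:
        return False
    return was_on  # inside the deadband, keep the previous state

state = False
for temp in (17.0, 17.8, 18.2, 18.6, 18.0):  # a slowly warming room
    state = heating_on(temp, state)
    print(temp, state)  # True, True, True, False, False
```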

Rethinking the uses of simulation

Dynamic simulation is currently misused. In energy performance, 8760 hourly energy balance calculations are performed, but only a single annual energy performance index is reported for compliance. The analysis does not examine the risk of mistaken assumptions. For example: if a heat recovery ventilation system fan is too noisy for the householder and they disable it, what is the health risk in an airtight house? In climate-based daylight (dynamic) modelling, 4000+ hourly radiation balance calculations are generated, but a single performance index is reported. In design, this type of behaviour is unforgivable. Rich time-based data enables discussion about the quality of the lived experience. This is, and should be, the focus of design discussions. However, for run-of-the-mill designs, models are used for compliance checking, not design analysis.
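
The same 8760 hourly values can yield far richer indices than one annual number. A minimal sketch, assuming `hourly_kwh` and `hourly_temp_c` stand in for any tool’s exported time series:

```python
# Three indices recoverable from the hourly results a compliance report
# usually discards: the single annual index, overheated hours, and the
# worst continuous week of demand.

def annual_index(hourly_kwh: list[float]) -> float:
    return sum(hourly_kwh)  # the one number codes currently ask for

def overheated_hours(hourly_temp_c: list[float], limit_c: float = 26.0) -> int:
    return sum(1 for t in hourly_temp_c if t > limit_c)

def worst_week_kwh(hourly_kwh: list[float]) -> float:
    window = 24 * 7
    return max(sum(hourly_kwh[i:i + window])
               for i in range(len(hourly_kwh) - window + 1))

# Demo with a flat synthetic year:
kwh, temps = [1.5] * 8760, [22.0] * 8760
print(annual_index(kwh), overheated_hours(temps), worst_week_kwh(kwh))
```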

For compliance checking, far more sophisticated performance criteria are needed than the current practice of simplistic single-number targets. These targets should comprise at least a best-case and a worst-case operational scenario. Even energy efficiency reporting for cars provides different figures for town and country driving conditions. We also need models that are constrained to (see the sketch after this list):

  • report compliance with the target while identifying the ‘killer’ variables;
  • reduce the potential for gaming the compliance system by controlling the range and type of inputs;
  • incorporate an input data reliability score.
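
A minimal sketch of what such a constrained compliance report might look like (the scenario fields, source categories and scoring weights are illustrative assumptions, not an existing scheme):

```python
# Compliance report constrained to a best/worst scenario range, the
# declared 'killer' variables, and a crude input reliability score.

from dataclasses import dataclass

@dataclass
class InputValue:
    name: str
    value: float
    source: str  # 'measured' | 'manufacturer' | 'default'

RELIABILITY = {'measured': 1.0, 'manufacturer': 0.7, 'default': 0.3}

def reliability_score(inputs: list[InputValue]) -> float:
    """Mean reliability of the declared 'killer' variables."""
    return sum(RELIABILITY[i.source] for i in inputs) / len(inputs)

def compliance_report(best_kwh_m2: float, worst_kwh_m2: float,
                      target_kwh_m2: float, inputs: list[InputValue]) -> dict:
    return {
        'passes_worst_case': worst_kwh_m2 <= target_kwh_m2,
        'range_kwh_m2': (best_kwh_m2, worst_kwh_m2),
        'input_reliability': round(reliability_score(inputs), 2),
        'killer_variables': [i.name for i in inputs],
    }

inputs = [InputValue('wall U-value', 0.32, 'manufacturer'),
          InputValue('airtightness', 3.0, 'measured'),
          InputValue('occupancy gains', 5.0, 'default')]
print(compliance_report(45.0, 78.0, 80.0, inputs))
```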

References

Box, G.E.P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.1080/01621459.1976.10480949

Leaman, A. & Bordass, B. (1999). Productivity in buildings: the ‘killer’ variables. Building Research & Information, 27(1), 4–19.
