After several posts about pollution sources and the effects of street pollution, I thought it would be interesting to look at how researchers observe and track pollution. One way they do so is through modelling; modelling is a useful tool for long-term predictions, but as this paper by Berkowicz et al. (2006) shows, there is still room for improvement in these models.
The paper summarises comparisons between traffic pollution data modelled with the COPERT model and the OSPM (Operational Street Pollution Model) and actual street-level measurements. The authors found significant underestimations in the modelled data and proposed a new set of parameters (traffic emission factors, to be precise) that appears to give more accurate results.
Why is modelling prone to inaccuracies? The answer lies in what modelling actually involves. Models rely on source emission data, which is often calculated from 1) traffic data and 2) vehicle-specific emission factors. These are not always actual measurements; the authors note that vehicle-specific emission factors are often estimated with different methods and are found to vary not just with vehicle type but also with driving conditions, something that had not previously been taken into account.
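To make this concrete, here is a minimal sketch of the kind of calculation involved. All the numbers (emission factors, traffic counts) are hypothetical and chosen only for illustration, not taken from the paper; the point is that the same traffic counts give very different totals depending on which driving-condition factors are assumed.

```python
# Hypothetical NOx emission factors in g/km per vehicle type.
# (Illustrative values only; real factors come from measurement campaigns.)
EF_FREE_FLOW = {"passenger_car": 0.4, "heavy_duty_diesel": 5.0}
EF_CONGESTED = {"passenger_car": 0.7, "heavy_duty_diesel": 9.0}

def street_emissions(traffic_counts, emission_factors, segment_km=1.0):
    """Total emissions (g) on a street segment: vehicle count * factor * length."""
    return sum(traffic_counts[vtype] * emission_factors[vtype] * segment_km
               for vtype in traffic_counts)

# A day's traffic on a hypothetical street: 5% heavy-duty diesel.
counts = {"passenger_car": 9500, "heavy_duty_diesel": 500}

free = street_emissions(counts, EF_FREE_FLOW)
congested = street_emissions(counts, EF_CONGESTED)
print(f"free-flow assumption: {free:.0f} g NOx per km of street")
print(f"congested assumption: {congested:.0f} g NOx per km of street")
```

With these made-up numbers the congested estimate is almost twice the free-flow one, which is why an emission factor that ignores driving conditions can bias the whole model.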
The parameters assigned to different models may also be problematic. For instance, COPERT modelling in the European context aims to be a simple method for estimating national emissions of traffic-related pollutants. It uses vehicle emission factors that are functions of vehicle speed, differentiated by vehicle type, fuel used, engine capacity or weight, and emission legislation category. It also includes corrections for cold starts and for the degradation of emission reduction equipment with mileage. This sounds good, and pleasingly simple, but unfortunately the authors note that the parameters used may be erroneous.
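The structure of such a calculation can be sketched as follows. This is not the official COPERT formula: the polynomial shape, the coefficients, and both correction terms are hypothetical stand-ins, included only to show how a speed-dependent factor gets layered with cold-start and mileage corrections, and how an error in any layer propagates to the final number.

```python
def hot_emission_factor(speed_kmh, a=1.2, b=-0.02, c=0.0002):
    """Hot (warmed-up engine) emission factor in g/km as a function of
    average speed. Illustrative polynomial with made-up coefficients."""
    return a + b * speed_kmh + c * speed_kmh ** 2

def corrected_ef(speed_kmh, cold_start_share=0.1, cold_penalty=2.0,
                 mileage_km=80_000, degradation_per_100k=0.15):
    """Layer two corrections on the hot factor (hypothetical forms):
    a cold-start excess for the share of trips starting cold, and a
    linear degradation of emission-reduction equipment with mileage."""
    ef = hot_emission_factor(speed_kmh)
    ef *= 1 + cold_start_share * (cold_penalty - 1)        # cold-start excess
    ef *= 1 + degradation_per_100k * mileage_km / 100_000  # equipment ageing
    return ef

print(f"corrected EF at 30 km/h: {corrected_ef(30):.3f} g/km")
print(f"corrected EF at 90 km/h: {corrected_ef(90):.3f} g/km")
```

Because the corrections multiply the base factor, a biased coefficient anywhere in the chain biases every emission estimate built on top of it, which is the kind of parameter error the authors warn about.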
For example, they argued that predicting emissions at the national level is not really feasible, because it is difficult to isolate pollution sources at that scale; many more variables complicate pollution levels there than in street-level measurements.
Meanwhile, the Danish OSPM model (Berkowicz, 2000; cited in Berkowicz et al., 2006) is a simple parametrised model: it parametrises the flow and dispersion conditions (how air pollutants are dispersed in the atmosphere) in street canyons. The good thing about this model is that it requires little CPU time, so it can be run over long time periods. This is useful because that is precisely what models try to do: predict what will happen in the long term given a certain set of conditions.
However, an important result from this paper is that while models provide a good estimation of street-level pollution, they are rather unreliable when it comes to urban/regional studies, because it is easier to isolate pollution sources on a street than across a whole region. Even so, street-level traffic pollution modelling remains a tricky process. The factors involved may be inaccurate and can lead to underestimation: for instance, too-low emissions attributed to heavy-duty diesel traffic (5% of traffic) resulted in an underestimation of NOx on the street of Jagtvej, Copenhagen, by almost 30%, and an underestimation of CO2 by 60%.
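A back-of-envelope check shows how a small vehicle class can drive such a large bias. The split below is hypothetical (the paper does not give these exact factors): if heavy-duty diesels are 5% of traffic but contribute over half of street NOx, then badly underestimating their emission factor alone drags the modelled total down by roughly the 30% reported for Jagtvej.

```python
def total_nox(car_ef, hdv_ef, car_count=9500, hdv_count=500):
    """Street NOx total (g/km): cars plus heavy-duty diesel vehicles.
    Counts give heavy-duty diesel a 5% traffic share, as in the paper."""
    return car_count * car_ef + hdv_count * hdv_ef

# Hypothetical "true" vs modelled heavy-duty emission factor (g/km per vehicle).
true_total = total_nox(car_ef=0.4, hdv_ef=10.0)
modelled   = total_nox(car_ef=0.4, hdv_ef=4.0)   # too-low heavy-diesel factor

bias = 1 - modelled / true_total
print(f"modelled total is underestimated by {bias:.0%}")
```

Even though only 1 vehicle in 20 is misrepresented, its outsized per-vehicle emissions make the error dominate the street total, which is why the authors' revised emission factors matter so much.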
The authors conclude the paper by warning readers that it is essential to note any potential biases from inaccurate emission values attributed to different vehicles, and they call for more comparisons between models and actual street-level measurements. This information will help improve existing models.
Berkowicz, R., Winther, M. and Ketzel, M. (2006). Traffic pollution modelling and emission data. Environmental Modelling and Software 21: 454-460.