March 16, 1994

MEMORANDUM

SUBJECT: Air Quality Model Evaluation Protocol for Cyprus Northshore Mining Company

FROM: Joseph A. Tikvart, Chief
      Source Receptor Analysis Branch, TSD (MD-14)

TO: Rebecca Calby, Regional Meteorologist
    Region V

In response to your request to Dean Wilson, the Model Clearinghouse has reviewed your comments on the subject protocol. Also, as requested, we have reviewed the protocol itself. Based on discussions within the Office of Air Quality Planning and Standards, we generally agree with your comments on the modeling protocol. However, as noted in our comments below, there are several areas we believe need additional information or clarification. General comments are provided first, followed by specific comments on the protocol.

General

We are not entirely comfortable with the number of monitors used in this evaluation. It is far fewer than in any evaluation we have been involved with in the past. This is especially true given that this is not a simple situation; both downwash and terrain interaction effects are being evaluated. Also, the presence of a nearby large body of water is likely to complicate the transport and dispersion environment. However, we do not have specific recommendations regarding the placement of additional monitors and are willing to defer to your judgment that coverage is adequate.

The proposed model, which excludes terrain and downwash effects, is inconsistent with the physical phenomena expected to occur under certain conditions at the Cyprus facility. The proposed model is not consistent with current technical opinion in the modeling community, as evidenced by widespread practice in modeling under these conditions. While we are willing to allow the proposed model to be entered into competition against the reference model, we believe that the general public should be made aware of its technical supportability.

We recommend that a technical comparison exercise be conducted as described in Section 2.6 of the "Interim Procedures for Evaluating Air Quality Models (Revised)" and detailed in the "Workbook for Comparison of Air Quality Models." Because the reference model is the Industrial Source Complex (ISC2) model and the proposed model is ISC2 without terrain and downwash effects, it is not necessary to conduct the technical comparison for all application elements described in the "Workbook." However, as indicated in the Workbook, "a simulation model should attempt to describe mathematically the effects of all relevant physical phenomena expected to have a significant effect on air quality in the application of interest." Thus, a technical comparison of the treatment of terrain and downwash effects is recommended. The results of this comparison should be included in the information made available for public comment as part of the permit revision process.

Specific Comments:

1. Emissions Data - Page 6. The protocol is unclear on how and what emissions data were collected from Power Boiler 1. However, your comments plus the information in Appendix E clarify this issue. Also, it is unclear what is being described in the first sentence of the second paragraph. Is this for the pelletizer or Power Boiler 1?

2. Ambient Data - Page 6. The protocol does not mention a Quality Assurance and Quality Control (QA/QC) protocol applied to the ambient air quality and meteorological data monitoring program. This is especially important for a model evaluation study. However, it is noted that the existing monitoring program was designed under a Prevention of Significant Deterioration (PSD) monitoring program. Thus, we presume the appropriate QA/QC protocol has been applied to the data and that there was at least 90% valid data recovery for twelve calendar months.

3. Program Objectives - Page 8. One objective of the model evaluation is to select the most appropriate model for permitting assessments of future expansions and/or emission increases. The applicant should be alerted that, as noted on page 61 of the "Interim Procedures for Evaluating Air Quality Models (Revised)," "the 'proven' model is only applicable for the source-receptor relationship for which the performance evaluation was carried out... Significant differences in the source configuration, e.g., doubling the stack height from those in existence during the model technical test, may necessitate a new evaluation." Therefore, depending on the source-receptor configurations for the future permitting assessments, a new evaluation may be needed.

4. Determination of Background Concentrations - Page 16. The background concentrations were determined from the lowest hourly values for June 1990 - June 1992, a period that is not fully concurrent with the model evaluation period of March 1992 - August 1993. It is suggested that the determination of background concentrations also encompass the model evaluation period, given that the ambient data are available.

5. Meteorological Data - Page 31. The protocol does not specify how the 60-meter wind speed will be corrected to 10 meters for sigma theta calculations; this correction is to substitute for missing 10-meter wind speed data. Presumably a power-law adjustment will be done, but this should be specified in the protocol. An illustrative sketch of such an adjustment follows.
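The sketch below (in Python) is ours, not the applicant's; it simply illustrates the kind of power-law substitution we presume is intended. The profile exponents shown are illustrative rural defaults of the type used with ISC, and the function and variable names are ours; the protocol should state the exponents and stability assignments actually used.

    # Illustrative sketch only, not part of the protocol.  Assumes a simple
    # power-law profile u(z2) = u(z1) * (z2/z1)**p, with the exponent p taken
    # from a stability-dependent table.  The exponents below are illustrative
    # rural defaults; the protocol should specify the values actually used.
    PROFILE_EXPONENTS = {"A": 0.07, "B": 0.07, "C": 0.10,
                         "D": 0.15, "E": 0.35, "F": 0.55}

    def wind_speed_at(z_target_m, u_ref_mps, z_ref_m, stability_class):
        """Estimate the wind speed at z_target_m from a measurement at z_ref_m."""
        p = PROFILE_EXPONENTS[stability_class]
        return u_ref_mps * (z_target_m / z_ref_m) ** p

    # Example: substitute a missing 10-meter speed from the 60-meter level
    # under D stability.
    u10 = wind_speed_at(10.0, u_ref_mps=5.2, z_ref_m=60.0, stability_class="D")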
6. Reference Model - Page 32. It is indicated that ISCST was found to be more conservative than the complex terrain model, i.e., Complex I. It is not clear how that determination was made. We believe it would be appropriate to rerun the preliminary estimates with a model that conforms with the Environmental Protection Agency's intermediate terrain policy as stated in the "Guideline on Air Quality Models (Revised)." This is to ensure that the proper reference model supports the same conclusions regarding monitor location as does modeling with ISCST alone.

7. Overpredictions - Pages 33-i and 33-iv; Tables 5-1 and 5-2. Were the model estimates made at the monitor locations or over a large grid system? It is not appropriate to compare estimates at a large number of grid points with measurements at only a few locations. If that was the case, then the overprediction claim is misleading.

8. Chapter 6 - Pages 34-44. On page x of the protocol Overview, it is implied that two separate evaluations will be done for SO2 and NOx. However, the separate evaluations are not succinctly described in Chapter 6. We suggest that this be clarified in Chapter 6.

The protocol describes a procedure to exclude from the evaluation hours with wind directions outside the range of 35 degrees through 215 degrees. There is no description of a procedure to determine predicted and observed 3-hour and 24-hour average concentrations, for use in the statistical evaluation, that accounts for hours when the wind direction is outside this range. Also, the threshold check procedure described on page 41 is inconsistent with the purpose of the threshold check described on page 7 of the "Protocol for Determining the Best Performing Model." The application of a threshold check is not to eliminate hours from the evaluation but to establish the values (observations and predictions) used in calculating the robust highest concentration (RHC). The number of values (N) used in calculating the RHC is nominally set at 26, but may be lower if the number of values exceeding the threshold value is less than 26. If N is less than 3, the RHC statistic is set equal to the threshold value. A sketch of this calculation, as we read it, is given below.
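The following sketch (in Python) reflects our reading of the threshold check and of the RHC statistic as commonly formulated in connection with the "Protocol for Determining the Best Performing Model"; the function names and details are ours and are offered for illustration only, not as the applicant's procedure.

    import math

    def robust_highest_concentration(values, threshold, n_nominal=26):
        """Robust highest concentration (RHC) from a set of concentrations.

        Illustrative reading: the threshold check establishes which values
        enter the calculation; it does not remove hours from the evaluation.
        """
        ranked = sorted(values, reverse=True)         # highest first
        above = [v for v in ranked if v > threshold]  # values exceeding threshold
        n = min(n_nominal, len(above))                # N nominally 26, or fewer
        if n < 3:
            return threshold                          # RHC set to threshold value
        tail = above[:n]                              # the N highest values
        x_n = tail[-1]                                # Nth highest value
        mean_top = sum(tail[:-1]) / (n - 1)           # mean of the N-1 highest
        # Tail-exponential form of the RHC statistic.
        return x_n + (mean_top - x_n) * math.log((3.0 * n - 1.0) / 2.0)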
Instead of excluding these hours, we suggest including all hours and wind directions in the statistical evaluation. Because the test statistic is based on the high end of a frequency distribution of predictions and observations, it is not expected that very low predictions and observations would bias the results. Also, the purpose of the evaluation is the comparison of the models, and the composite performance measure for each model is used in the model comparison. Thus, including all hours in the predictions for both models should not bias the composite performance measure for either model.

We recommend that more detailed information be provided in Section 6.5.2 on the procedures for determining the operational performance measure component. Although it appears implied in this Section that the procedures follow the recommendations in the "Protocol for Determining the Best Performing Model," there is no description of using the robust highest concentration for each monitoring station for the operational component measure. Also, there is no description of the method for selecting the best performing model using the model comparison measures and standard error. In general, a more thorough description of the performance statistics and methodology seems warranted.

The procedure for adjusting model underpredictions described at the bottom of page 43 is unclear. The "Interim Procedures for Evaluating Air Quality Models: Experience with Implementation" describes procedures that have been used in previous model evaluations for adjusting model predictions; these procedures were based on comparisons paired or unpaired in time and space. Also, it is unclear whether a determination of underprediction would be made separately for SO2 and NOx and whether separate adjustments would be made to the model predictions.

Conclusions

In summary, we recommend that the issues noted in our comments be clarified by the applicant prior to approval of the model evaluation protocol. Provided these issues can be clarified, we agree with your assessment that the protocol appears generally to conform to the guidance provided in the "Interim Procedures" document.

If you have any questions or comments, please contact Dennis Doll at (919) 541-5693 or Dean Wilson at (919) 541-5683.

cc: G. Blais
    D. Doll
    R. Robinson
    D. Wilson