Reliability Measures are Unreliable!

Introduction

For over 40 years, concerns have been raised about the definitions of the traditional measures and indices of power plant reliability. While these concerns most often focus on peaking and cycling technologies, base-load technologies are also affected. In recent years, the need to develop and apply new reliability indices that more accurately reflect the marketplace’s value of power plant reliability has taken on a high degree of urgency. Company decision-making at all levels is affected, and the old “technical” definitions of reliability are being modified to incorporate economics in order to link plant reliability to the actual cost (or profit) of electricity supply. Instead of measures calculated over both demand and non-demand periods, new reliability terms consider only the hours during which the plant would have been dispatched, plus the financial consequences to the company’s bottom line of failing to generate during those hours.

The Historic Problem

Among the traditional measures of plant reliability in many countries have been the Equivalent Availability Factor (EAF), the Forced Outage Factor (FOF) and the Equivalent Forced Outage Rate (EFOR). In other countries the Unit Capability Factor (UCF) and the Unplanned Capability Loss Factor (UCLF) are used. Those measures that are “factors” (EAF, FOF, UCF, UCLF, etc.) use as their denominator the entire time period being considered (typically one month or one year), without regard to whether or not the unit was required to generate. Therefore, for non-baseloaded units, these factors can lose their relevance (and the more cyclic the demand, the greater the effect). For example, if a simple-cycle gas turbine unit is used exclusively to meet peak demand periods, it may be required to generate only a few hundred hours a year. If it were unavailable during 25% of those hours, it would still have a high EAF and UCF and a low FOF and UCLF. If it were required to generate 100 hours per year but experienced forced outages during 25 of those demand hours (and no other outages over the 8,760 hours in the year), it would still have an EAF and UCF of 99.71% ((8760 – 25)/8760 x 100) and a FOF and UCLF of 0.29% (25/8760 x 100). Those numbers might look good on paper, but the reality is that the unit could produce only 75% of the power required of it. So these factors don’t come close to describing the unit’s ability to produce its rated capacity when demanded. Of course, for true baseloaded units, such as most nuclear units, which generate every hour they are available, or even for gas turbine units in countries where they are near base-loaded, these factors come much closer to depicting the unit’s “real” reliability.
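
To make the arithmetic concrete, here is a minimal sketch in Python of the calculation above. The numbers are the example’s, but the variable names and print-out are purely illustrative assumptions, not part of any standard:

    # Time-based "factor" statistics for the hypothetical peaking gas turbine above.
    PERIOD_HOURS = 8760         # hours in the reporting year
    demand_hours = 100          # hours the unit was required to generate
    forced_outage_hours = 25    # assumed to fall entirely within the demand hours

    # Factors use the whole period as the denominator, regardless of demand.
    eaf_ucf = (PERIOD_HOURS - forced_outage_hours) / PERIOD_HOURS * 100   # about 99.71
    fof_uclf = forced_outage_hours / PERIOD_HOURS * 100                   # about 0.29

    # What the unit actually delivered when it was needed.
    served = (demand_hours - forced_outage_hours) / demand_hours * 100    # 75.0

    print(f"EAF/UCF = {eaf_ucf:.2f}%, FOF/UCLF = {fof_uclf:.2f}%")
    print(f"...yet only {served:.0f}% of the demanded hours were actually served")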

The terms Forced Outage Rate (FOR) and Equivalent Forced Outage Rate (EFOR) were introduced in an attempt to resolve these difficulties. (The two differ only in that EFOR also considers the “equivalent” impact of forced deratings, whereas FOR considers only full forced outages. In this simple example, which involves only full forced outages, I will examine only the FOR.)

The equation for FOR from the IEEE-762 Standard is:

FOR = ((Forced Outage Hours) / (Forced Outage Hours + Service Hours)) X 100

For the example given above, the actual service hours are 75, so the FOR would be:

FOR = ((25) / (25 + 75)) X 100 = 25%

The complement of the FOR might be considered the unit’s reliability, so that

Reliability = 100% – 25% = 75%

So it appears that FOR (and EFOR when forced deratings are present) are good measures of a unit’s reliability.

However, in actual practice it is extremely unlikely that all of the forced outage hours a peaking unit experiences during the course of a year fall within its demand period. (In our example, all 25 forced outage hours were assumed to occur during the 100 demand hours.) In most cases, some (if not most) of the hours required to restore the unit to service occur during non-demand periods and only some during demand periods. In this example, the unit might have experienced five forced outages covering 25 hours of its demand period (out of 100 hours total demand). However, the time to restore the unit to full capability would likely average well more than five hours per outage. It is much more probable that the total forced outage hours would be several times higher, as some previous studies suggest that the average restoration time for a gas turbine forced outage is on the order of 24 hours. Therefore, if we use 24 hours as the average downtime, the total forced outage hours reported would be 5 x 24 = 120 hours. Now the FOR would be

FOR = ((120) / (120 + 75)) X 100 = 61.5%

and the unit’s reliability = 100% – 61.5% = 38.5%. Both of these values are obviously unrealistic when the statistics are used to make decisions that depend on the expected reliability of units (if gas turbine reliability were really only 38%, I would never fly on an airplane unless it had dozens of redundant jet engines strapped onto its wings – would you?). And yet these values are very close to the actual FOR and EFOR statistics being reported for peaking generators!
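
A short Python sketch of the same FOR arithmetic (illustrative only; the function and variable names are my own) shows how sensitive the statistic is to whether the reported forced outage hours are limited to demand periods or include the full restoration time:

    def forced_outage_rate(forced_outage_hours, service_hours):
        """FOR per the IEEE-762 definition: FOH / (FOH + SH) x 100."""
        return forced_outage_hours / (forced_outage_hours + service_hours) * 100

    service_hours = 75  # the unit generated during 75 of its 100 demand hours

    # Case 1: only the 25 outage hours that fell inside demand periods are reported.
    print(forced_outage_rate(25, service_hours))        # 25.0 -> "reliability" of 75%

    # Case 2: five outages averaging 24 hours each to restore, with most of that time
    # outside demand periods, but all 120 hours reported as forced outage hours.
    print(forced_outage_rate(5 * 24, service_hours))    # about 61.5 -> "reliability" of 38.5%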

Does that mean that FOR and EFOR should not be used? Absolutely not! They are in fact reasonable indicators for baseload or near-baseload generating units. However, for cycling or peaking units they are inadequate, and new metrics were needed.

A few years ago a modification of EFOR was introduced by the IEEE in an attempt to resolve this problem. The term Equivalent Forced Outage Rate – Demand, EFOR(d), was developed and incorporated into the IEEE-762 Standard (EFORd had been used by some companies in North America for many years before its incorporation into IEEE-762). EFOR(d) uses only that portion of a unit’s forced outages (or deratings) that occurred during demand periods. As we saw in the earlier example, that would resolve the issue nicely. However, demand periods are not currently part of standard reporting systems, so an approximation technique was devised using a Markov approach. Although not perfect, this technique results in a reasonably accurate calculation of EFOR(d).
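
The sketch below illustrates only the basic idea behind EFOR(d) – restricting the numerator to forced outage (and equivalent derated) hours that overlap demand periods – and not the actual Markov-based approximation in IEEE-762; the function name and inputs are assumptions made for illustration:

    def efor_demand(forced_outage_hours_in_demand, service_hours,
                    equivalent_derated_hours_in_demand=0.0):
        """Simplified illustration of the EFOR(d) concept: only hours that overlap
        demand periods count. (IEEE-762 uses a Markov-based approximation when
        demand periods are not reported directly.)"""
        foh_d = forced_outage_hours_in_demand + equivalent_derated_hours_in_demand
        return foh_d / (foh_d + service_hours) * 100

    # Same unit as before: 120 total forced outage hours were reported, but only
    # 25 of them overlapped the 100 demand hours.
    print(efor_demand(25, 75))    # 25.0 -- a much more realistic picture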

The New Problem

As the industry moves into a more competitive, market-based business environment, reliability indicators must be able to reflect a direct linkage between a plant’s reliability and the corporate or portfolio cost and/or profit of electricity. They should incorporate the large (often very large) variation in the value of a unit’s reliability. For example, in previous efforts to quantify the value of availability improvement for an individual unit within a large company, I found that even for efficient coal-fired units there can be a factor of 100 or more between the unit’s value during a low-demand period and its value during a high-demand period when other generators are experiencing unexpected forced outages. Even nuclear units show significant variability in their value at different times. This variation will inevitably result in different economically optimal decisions at different times.

The following example was developed from actual data at one large generating company using value-based availability to measure its plants’ performance:

On a Tuesday morning a small boiler tube leak was detected at one of its large efficient coal-fired units that ran close to baseload. Two options were identified:

1. Remove the unit from service immediately and repair the tube as quickly as possible so as to minimize the downtime of the unit.
2. Continue to operate the unit until the weekend when the demand is lower and the cost impact of the unit’s unavailability is less per hour. However, the unit would be exposed to a risk of a longer duration outage due to possible additional tube damage.

When the plant staff evaluated these options for an event that occurred during the non-peak season, they found that cost was minimized by choosing Option 1. This was because the differential between the weekday and weekend-day cost per hour of this unit’s unavailability at that time was not enough to offset the likelihood of a longer outage if the unit were operated until the weekend.

Total Cost – Non-Peak Season
1) Option 1 – $115,000
2) Option 2 – $184,000

This choice also had the effect of minimizing the forced outage hours resulting from this event.

However, when the exact same event occurred during the company’s peak season, the costs were:

Total Cost – Peak Season
1) Option 1 – $354,000
2) Option 2 – $265,000

In this case, Option 2 (waiting until the weekend) was clearly the best economic choice, even though it reduced the plant’s availability below that of Option 1.
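
The decision logic behind this example can be sketched as a simple cost comparison in Python. The dollar figures are the ones quoted above; the data structure, labels, and loop are purely illustrative assumptions:

    # Total cost of each repair option, by season (figures from the example above).
    scenario_costs = {
        "non-peak season": {"Option 1: repair immediately": 115_000,
                            "Option 2: wait for the weekend": 184_000},
        "peak season":     {"Option 1: repair immediately": 354_000,
                            "Option 2: wait for the weekend": 265_000},
    }

    for season, options in scenario_costs.items():
        best = min(options, key=options.get)   # pick the lower-cost option
        print(f"{season}: {best} (${options[best]:,})")

    # The same physical failure leads to opposite economically optimal decisions;
    # in the peak season the cheaper option is the one that reduces availability.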

As we can see, the objective of maximizing a unit’s “technical” performance (in this case availability) can often come into direct conflict with the company’s goal of minimizing cost and/or maximizing profitability.

This example is only one of hundreds of decisions that a power plant staff must make every year, and it illustrates the vital importance of developing performance metrics that establish direct linkages between the plant’s goals and the company’s overall financial objectives. In this way we encourage the plant staff to make decisions that are economically optimal from the corporate perspective, not just ones that maximize their local technical goals.

The following instance of local decision-making exemplifies this situation: I was escorting the Minister of Energy from India and his staff on a visit to a new, large, efficient and reliable coal-fired plant. The plant staff gave a presentation describing the first time they opened unit 2’s turbine for the required manufacturer’s inspection. The complete turbine inspection was finished in only 17 days! After the presentation, I asked a member of the plant staff how they had accomplished such an amazing feat. He replied that he had personally spent most of his time during the preceding year planning the outage, including evaluating the best use of lay-down space, overhead cranes, etc., and developing contingency plans for virtually every eventuality. In addition, the plant staff worked two twelve-hour shifts every day; there were numerous turbine manufacturer field representatives on hand; they staged all parts that might be needed; and, perhaps most importantly, they did not find any unexpected equipment problems when they opened the turbine.

He went on to say that the plant management and staff were substantially rewarded with bonuses, promotions, and system-wide recognition as a prime example of the type of performance others should try to emulate. However, when I asked him how the unit subsequently performed, he said that it was not required to generate for over two months! This was because the plant was designed to burn only high-cost, low-sulfur coal, resulting in a high dispatch cost, so the unit was seldom economic to run except during the peak season. A quick calculation indicated that the plant had spent an extra $1 million to achieve this availability improvement during a time (the non-peak season) when the plant’s availability had essentially no value to the company. And yet the company’s existing goals system encouraged and rewarded everyone in the production organization (including the production executives) for achieving this clearly uneconomic result.

None of the traditional statistics such as EAF, UCF, FOF, UCLF, FOR, EFOR or even EFOR(d) adequately make the linkage between technical and economic goals. However, some companies have begun using a new measurement technique called Commercial Availability that promises to do exactly that!

I will conclude this topic in my next post by examining the concept of Commercial Availability and begin to consider some of the practical implications I have uncovered when using this statistic.
