Decision Making in a Competitive Business Environment

The electric power generation industry has been in a highly regulated business environment for most of its existence. Over the last several years, however, much of the industry has been moving to a more market-driven business environment, and companies have had to make a hard adjustment in their basic thinking and their approach to risk. For better or worse, much of the world's electricity generating industry now finds itself in an increasingly competitive business environment, and decision-making must evolve from a risk-averse to a risk-management style.
For most of the 100+ years of its existence our industry has operated in a highly regulated business environment because of the very high cost of developing the infrastructure required to generate, transmit and distribute electricity. Electricity is also a commodity that cannot be easily stored, so when you turn on a light switch some generating unit somewhere must increase its output.
To ensure that electricity would be available to all at a reasonable price, and that investors in electric companies would earn a reasonable return, each investor-owned company was granted a monopoly in its service area, subject to a high degree of regulation and oversight. Each company had a compact with its regulating agency: if the company spent money prudently (building and maintaining plants, transmission lines, distribution transformers, etc.), the regulating agency would allow that cost plus a set profit to be collected from its customers and distributed to its investors. The basic equation governing decision making was therefore:

Cost (prudent) + Profit (mandated) = Price

Under this compact there is little incentive for any company to take risks. As an example, consider two mutually exclusive options a company could invest in. Option "A" guarantees a savings of twice its cost, while Option "B" would yield ten times its cost in savings but has only a 50/50 chance of success. The proper decision for a regulated company would always be to choose Option A, since it could not keep any of the additional "upside" of Option B (beyond the allowable rate of return) if it succeeded, and might have to completely write off the cost if Option B failed (a 50/50 possibility).
Hence, the appropriate decision-making mindset in a regulated industry is to avoid risk! Historically this behavior has been observed throughout the management chain in virtually every regulated company.
Consider now what changes take place when competition is introduced to the generation industry. The governing equation now changes to the following:

Price (market) – Cost (total) = Profit

Now the company’s profit is the dependent variable (not price as in the regulated model) and that profit is dependent upon 1) the price set by the market (with little control by the company) and 2) each company’s total cost, including successful and unsuccessful investments and their total (not just regulated) returns.
Since no one is evaluating the prudency of how the company spends its money (except its investors), the company should now always choose the 10-to-1 Option B (even with only a 50/50 chance of success) over the 2-to-1 "sure thing" of Option A, since the company now keeps all of the upside when it succeeds. Mathematically, Option B offers an expected 5-to-1 return (a 50% chance of a 10-to-1 payoff), much better than Option A's 2-to-1 return. Eventually, companies that consistently make these managed-risk decisions will put competitors that make risk-avoiding decisions out of business. The better all employees are at evaluating their decision options in terms of Reward-to-Risk Ratios, the better the results will be.
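To make the arithmetic concrete, here is a minimal Python sketch of the Option A / Option B comparison; the 2x and 10x savings multiples and the 50/50 odds come from the example above, and the assumption that failure returns nothing is mine.

```python
# A minimal sketch of the Option A / Option B comparison above. The 2x
# and 10x savings multiples and the 50/50 odds come from the text;
# failure is assumed to return nothing.

def expected_return_multiple(payoff_multiple: float, p_success: float) -> float:
    """Expected savings per dollar invested."""
    return payoff_multiple * p_success

option_a = expected_return_multiple(payoff_multiple=2.0, p_success=1.0)   # sure thing
option_b = expected_return_multiple(payoff_multiple=10.0, p_success=0.5)  # 50/50 shot

print(f"Option A expected return: {option_a:.0f}-to-1")  # 2-to-1
print(f"Option B expected return: {option_b:.0f}-to-1")  # 5-to-1
```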

In addition, companies that create a mindset of identifying, quantifying and managing risk will continually examine their failures (as well as their successes) to learn why each failed (or succeeded) and apply that knowledge to the next group of decisions. Risk-averse companies have a motivation to "bury their mistakes as deep as possible" and are often slower to improve.

Reward/Risk Decision – Example 1

This is a simple example of how to evaluate options while playing a video poker game from the mindsets of 1) risk avoidance and 2) risk management.
You are playing a "jacks or better" video poker game and you have been dealt an ace-high straight, which pays $20 on a $5 bet. Four of the cards in the straight are hearts (the 10, jack, king and ace) while the queen is a spade.
You now have the choice of 1) keeping all five cards, guaranteeing a $20 payoff (a 4-to-1 return), or 2) discarding the queen of spades and hoping to draw the queen of hearts, making a royal flush with a payoff of $2000!

What do you do?

Risk = $5.00
Reward
Option 1 – $20 @ 100% probability = $20.00
Option 2 – $2000 @ 1/47 probability = $42.55

Reward/Risk
Option 1 – $20/$5 = 4
Option 2 – $42.55/$5 = 8.51 (actually the ratio would be slightly higher, since other winning cards could also be drawn: a different heart for a flush, a different queen for a straight, or a non-heart jack, king or ace for a paying pair).

We can readily see that the high-risk, high-reward Option 2 is more than twice as good as Option 1 (8.51/4 = 2.12). This is true even though Option 2 will only pay off once in every 47 times this deal occurs, whereas Option 1 pays off every time! Option 2 makes so much money when it does hit that it is well worth the risk.

A decision-maker in a regulated market will always choose Option 1, since he can keep only a small percentage of the huge upside Option 2 offers when it does hit. In a market business environment, however, his company keeps all of Option 2's upside, and the total payoff more than makes up for the many times it fails.

Reward/Risk Decision – Example 2

Now suppose I change the example so that you have been dealt a king-high straight, which still pays $20 on a $5 bet. Four of the cards in the straight are hearts (the 9, 10, jack and king) while the queen is a spade.
Again you have the choice of 1) keeping all five cards, guaranteeing a $20 payoff (a 4-to-1 return), or 2) discarding the queen of spades and hoping to draw the queen of hearts, making a straight flush, but now with a payoff of only $250 instead of the $2000 for a royal flush.

Now what do you do?

Risk = $5.00
Reward
Option 1 – $20 @ 100% probability = $20.00
Option 2 – $250 @ 1/47 probability = $5.32

Reward/Risk
Option 1 – $20/$5 = 4
Option 2 – $5.32/$5 = 1.06 (actually the ratio would be slightly higher, since other winning cards could also be drawn: a different heart for a flush, a different queen for a straight, or a non-heart jack or king for a paying pair).

We can readily see that the high-risk, high-reward Option 2 is now a much worse option than Option 1: the reward is much lower while the risk is the same.
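Both poker decisions reduce to the same expected-value calculation. A short Python sketch, using the bets, payoffs and the 1/47 draw probability from the examples above (and ignoring the small extra value of the backup hands, as the parentheticals note):

```python
# A sketch of both poker decisions. The bets, payoffs and the 1/47 draw
# probability come from the examples above; the small extra value of the
# backup hands (flushes, other queens, high pairs) is ignored, as noted.

BET = 5.00

def reward_to_risk(payoff: float, probability: float, bet: float = BET) -> float:
    """Expected payoff per dollar risked."""
    return payoff * probability / bet

keep_straight = reward_to_risk(payoff=20.0, probability=1.0)
draw_royal = reward_to_risk(payoff=2000.0, probability=1 / 47)      # Example 1
draw_str_flush = reward_to_risk(payoff=250.0, probability=1 / 47)   # Example 2

print(f"Keep the straight:      {keep_straight:.2f}")   # 4.00
print(f"Draw to royal flush:    {draw_royal:.2f}")      # 8.51
print(f"Draw to straight flush: {draw_str_flush:.2f}")  # 1.06
```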

These simple examples demonstrate that the best decisions can only be determined by considering the rewards as well as the risks, and that the Reward/Risk ratio is a good way to evaluate investment options.

WINNING AT THE HORSE TRACK

What strategy should you employ to maximize your chance of winning at the horse track? It depends on how you will be graded! If you are being graded on your winning percentage (how often you cash a winning ticket), you should bet the favorite to "show" (finish third or better). That way, if the favorite (probably the best horse) comes in 1st, 2nd or 3rd, you will win! This very risk-averse strategy is almost guaranteed to lose money over the long term, but that's not how I said you would be graded, so you shouldn't care: you will have a high winning percentage.

If, however, you were told that you would be graded on how much money you have at the end of the evening you should adopt a much different strategy. You would start by trying to forecast the “true odds” of each horse winning (using either forecasts from the track handicappers or your own method) using all available data.
However, now you will need to factor in the payoff for each horse, not simply choose the horse with the best chance of winning. Just before the start of the race you can check the "tote board," which shows the payoff for each horse based on how the betting public has wagered. This is pari-mutuel wagering: the track takes the total amount bet, extracts a set percentage (usually ~18%) for state taxes, track expenses, profits, etc., and divides the remainder among the winning tickets on each horse. Dividing each horse's payoff by its odds of winning gives you its Reward-to-Risk Ratio. The horse with the highest Reward/Risk Ratio is the one you should bet on!
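As a rough illustration of the pari-mutuel arithmetic, here is a hedged Python sketch; the ~18% takeout comes from the text, while the pool sizes are hypothetical.

```python
# A hedged sketch of the pari-mutuel arithmetic described above. The
# ~18% takeout comes from the text; the pool sizes are hypothetical.

def parimutuel_return(total_pool: float, bet_on_winner: float,
                      takeout: float = 0.18) -> float:
    """Dollars returned per $1 bet on the winning horse, after takeout."""
    net_pool = total_pool * (1.0 - takeout)
    return net_pool / bet_on_winner

# Hypothetical race: $100,000 total pool, $20,000 of it on the winner.
print(f"${parimutuel_return(100_000, 20_000):.2f} per $1 bet")  # $4.10
```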

Horse Track Reward/Risk Example Risk-Management Winning Strategy

Below are the horses in the first race, the odds that each will win (as set by the track handicapper) and the payoff rates (as set by the betting public):
Horse             Odds   Payoff   Reward/Risk Ratio
Secretariat        2-1    1.5-1        0.75
Whirlaway          4-1      6-1        1.50
Citation           5-1     10-1        2.00
Gallant Fox        7-1     10-1        1.43
Alysheba           8-1      4-1        0.50
Seattle Slew      10-1     17-1        1.70
Northern Dancer   12-1     20-1        1.67
Swaps             15-1     25-1        1.67
War Admiral       17-1     30-1        1.76
Aristides         20-1     35-1        1.75

Which horse would you bet on?

From the table we can see that Citation, while only the third most likely horse to win at odds of 5-1, will pay 10-1 when it does win. If this exact scenario were repeated 10 times and you bet $10 on Citation each time, Citation would win twice (once every 5 races, as predicted) and you would collect $200 (2 wins X $10 per bet X 10-1 payoff) on total bets of $100 (10 races at $10 per race).
Even though Citation won only 2 of the 10 races while the favorite, Secretariat, won half (5) of them, betting on Secretariat would have paid only $75 (5 wins X $10 X 1.5 payoff) on the same $100 total bet. Your .500 Secretariat "batting average" would guarantee you a place in the Hall of Fame if this were baseball, yet it yields a far inferior financial return to Citation's .200 "batting average," which would never even get you to the major leagues but earns a much larger profit.
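The table's arithmetic is easy to automate. Here is a Python sketch that converts each horse's odds and payoff into a reward-to-risk ratio and picks the best bet, reading "X-1 odds" the way the text does (the horse wins once every X races):

```python
# A sketch of the table above: convert each horse's odds and payoff to a
# reward-to-risk ratio and pick the best bet. "Odds X-1" is read the way
# the text uses it (the horse wins once every X races).

horses = {
    # name: (odds X-1, payoff Y-1)
    "Secretariat": (2, 1.5),     "Whirlaway": (4, 6),
    "Citation": (5, 10),         "Gallant Fox": (7, 10),
    "Alysheba": (8, 4),          "Seattle Slew": (10, 17),
    "Northern Dancer": (12, 20), "Swaps": (15, 25),
    "War Admiral": (17, 30),     "Aristides": (20, 35),
}

for name, (odds, payoff) in horses.items():
    print(f"{name:16s} reward/risk = {payoff / odds:.2f}")

best = max(horses, key=lambda h: horses[h][1] / horses[h][0])
print(f"Best bet: {best}")  # Citation, 10/5 = 2.00
```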

As a company's management moves away from risk-averse decision making toward risk-management decision making, you should expect certain changes to occur.
1) The staff should begin to identify all technically viable decision options, even ones that promise to solve less than 100% of the problem. (Remember the 80/20 rule: 80% of a problem can often be solved for 20% of the resources required to solve all of it, so don't always insist on the "perfect" solution; use incremental benefit/cost analysis to ensure the best use of company resources across the fleet.)
2) The use of historical performance data to forecast future performance, with and without each proposed option, will become even more important in estimating each option's Reward/Risk Ratio.
3) All staff must get used to using Reward/Risk Ratios when making decisions and to comparing results against expectations instead of ignoring mistakes. In a market business environment you should be willing to examine your decision-making processes and continually improve at identifying, quantifying and evaluating all viable decision options using Reward/Risk analyses.

In concluding this case study I am reminded of a story I told several years ago at a workshop I held for the United Nations Development Program in China. Two men were hiking in the woods when they came to a clearing and saw a large bear charging at them. One man turned to run while the second took off his hiking boots and changed into his track shoes. The first man said, "You can't outrun that bear," to which the second replied, "I don't have to, I just have to outrun you!" So as that big bear of competition starts attacking your company, remember: you don't have to be the fastest to change, you just can't be the slowest!!!

Peak Season Reliability


Background

Some years ago I was attending a meeting between my company's Planning and Operations departments when Operations commented that our generating plants had higher reliability goals during their peak season (in our case, the summer). I realized that if the plants actually achieved higher reliabilities during the peak season, our Planning department would be overestimating the "Expected Unserved Energy" (EUE) during this period, because my Reliability department provided only annual estimates of each plant's future unreliability. Since EUE is the key component in establishing the optimal Reserve Margin Criteria, and since the substantial majority of EUE occurs during the peak season, incorporating higher summer reliability forecasts into the planning models could significantly reduce the amount of peaking generation we needed while maintaining economically optimal customer service reliability (defined as the point where the incremental cost of increased reliability equals the incremental value the customer receives from it). My Reliability department decided to investigate whether this trend was historically true and whether we could confidently predict it to continue into the future.

Data Collection and Analysis

Reliability data for each of the ~100 generating units was collected for both the system's "peak season" and "non-peak season" for the previous five years, an easy task using the North American Electric Reliability Corporation's (NERC) Generating Availability Data System (GADS). This data was then compiled for groups of units by duty type (base load, load following, cycling, and peaking), and comparisons were made between the plants' reliabilities during the peak versus non-peak seasons. Statistical analysis indicated a very high probability that the plants were in fact exhibiting higher reliabilities during peak seasons, rather than simply random variation. Furthermore, the differences were greatest for units used primarily for peaking duty and least for base-loaded units; even the nuclear units (the most base-loaded) exhibited the same trend, though to a lesser degree. Therefore, forecasts of future plant reliability incorporating seasonal variations could be made with a high degree of confidence.

Results

My Reliability Engineering department began providing the Planning department with two sets of reliability forecasts, one for the peak season and one for the non-peak season, and the Planning department modified its generation expansion optimization programs to incorporate them. The new programs showed that the economically optimal reserve margin was reduced by one full percentage point with no reduction in customer service reliability. For the company's 30,000+ MW of capacity at that time, the effect was to avoid building 300 MW of new peaking capacity, and no one had to do anything differently from what they had been doing! At the time this represented a cost savings of $100 million. Following publication of our work, the analysis was extended to the industry by NERC's Generating Availability Trend Evaluation (GATE) Working Group, with similar results. It pays for reliability engineers to keep their ears open!

References

1) Richwine, R.R.; Lofe, J.J.: "Decreasing System Peak Reserve Margin Requirements."
2) Lofe, J.J.; Bell, F.J.; Curley, G.M.: "Seasonal Performance Trends," NERC publication.

Using Commercial Availability


Introduction
In my last posting I examined why traditional measures of a power plant’s availability/reliability (EAF, FOF, UCF, UCLF and EFOR) were inadequate in today’s increasingly competitive business environment. I also indicated that a new measure, EFOR(demand), while an improvement, did not totally address the problem. In this posting I will review a different measure, Commercial Availability (CA), which has begun to be used by some generating companies around the world. This statistic attempts to measure the impact a plant’s availability has on the company’s cost of generating electricity (and its profitability when the company operates in a market-type environment). I will also discuss some of the implications that result from the adoption of Commercial Availability as the primary availability measure as well as how to calculate and benchmark CA.

Background
The term Commercial Availability originated in the United Kingdom in the early 1990s, following the deregulation of its power industry into a "market" system. Since a plant's availability only had value to its company if it could generate power at a profit, its availability was measured only during the times the market price was above the plant's variable cost. Initially CA was not "weighted" by the magnitude of this price/cost gap, so each hour when the unit was economically viable (its cost was below the market price) had the same influence on CA. Over time some users of CA have evolved the term to include the magnitude of the price/cost gap so that it can be a more accurate indicator of the plant's impact on the company's profitability (e.g., an hour when the gap is $20/MW-HR has ten times the influence of an hour when the gap is only $2/MW-HR). Therefore, CA attempts to measure the actual profit delivered by the plant relative to its potential profit had it been able to deliver every MW-HR required of it at the actual market price (profit here is defined as gross margin, generally the difference between the plant's variable production cost and the market price, or the system marginal cost in the case of regulated companies).
Although numerous companies in many countries have begun using Commercial Availability as one of their primary measures of availability, there is currently no standard definition for its calculation. In fact, at a recent meeting of an industry group, a survey of those attending revealed that about 1/3 of the companies represented were using Commercial Availability at some level, but none calculated CA in exactly the same way. Clearly the industry is in great need of a standard definition, but until then each company is free to define CA in any way that they choose.

Implications of using Commercial Availability
There will be a wide range of impacts on the way a company evaluates and manages its power plants resulting from the adoption of Commercial Availability and other tools/processes required to address market dynamics. This requires a different mindset and approach in applying data and new tools in both day-to-day and performance assessment decisions. Measures and actions must consider ways to quantify and respond to different situations with differing economics. Yet the fundamentals of benchmarking remain relevant, although in new, modified forms as discussed below.

Benchmarking – peer group selection – Over the past few decades benchmarking has become a key tool in most top-performing generating companies' performance improvement efforts. A good technique is to first identify other "peer" plants whose design and operational characteristics are similar to those of the plant we wish to benchmark. NERC and I have used an advanced statistical technique, simultaneously analyzing over 50 plant design and operational features, to identify peer units and then compare their "traditional" reliability indices (I will discuss this technique in more detail in future posts). Benchmarking Commercial Availability will require a new aspect of the plant to be included in the analysis to determine the optimal peer group: some indicator of the plant's economic incentive to generate at different times. Since the greater the economic incentive to generate, the better the plant's reliability can be managed to meet demand (I will be posting a future case study indicating that management is the largest influence on a plant's reliability), we will need a statistic that measures unit demand and incorporate it into the peer group analysis.

Benchmarking – comparisons – After we have selected the best peer group for our benchmarking analysis, what will we compare? The actual calculation of Commercial Availability (whichever definition is finally adopted) is likely to be highly dependent on the precise market price (or marginal cost, for a regulated or controlled business environment) in each hour (or parts of each hour), matched against the unit's availability in those hours. Since that price (or cost) can and does fluctuate widely over the course of each day, week, month and year, we would have to create a massive new database of market prices in order to make the CA calculations. Furthermore, even if we did create such a database, the actual CAs would probably not be appropriate to compare, since actual market prices in different regions are likely to be very different. One approach I advocate is to calculate a term called Conditional Probability (CP). CP represents the likelihood (that's the probability part) that the unit can deliver the requested amount of energy during a specified time period corresponding to that unit's demand profile (that's the conditional part). CP would thus be similar to the Equivalent Forced Outage Rate (demand) statistic but would likely differ across demand periods. So we would "benchmark" the Conditional Probabilities of peer units and then select a goal CP, perhaps the best quartile, best decile or "Optimal Economic Availability" from the CP distributions of the peers. Combining the goal CP with our unit's unique economics, we can then arrive at a "goal" Commercial Availability objectively, without having to create any new data collection processes. In a future case study I will describe the specific steps to develop this process.

Maximizing Commercial Availability – This focuses one's attention on being available to generate when required by the market and when the income and profit potential is highest. Generating units need only be maintained and manned to meet market need. The logical converse is that stations need not be maintained and manned at the same levels during periods when they are not required by the market. The daily, weekly, and annual variations in demand for electricity mean that it may be possible to reduce generating costs by allowing units to remain unavailable overnight, on weekends, and for certain parts of the year. The plant is not required by the market during these periods, and although it is technically unavailable, such periods have no effect on Commercial Availability.

Design – New plant design is likely to be affected, since we are no longer concerned with maximizing traditional measures of availability or reliability but with maximizing profitability (or minimizing cost). One outcome of this different design philosophy will in some cases be to reduce dependency on expensive equipment redundancy and instead install advanced equipment condition monitoring. Since we are only interested in being available "when the plant is needed," being able to better anticipate imminent equipment problems will give plant management needed flexibility. Furthermore, even if we cannot control the timing of an event, communicating the increased likelihood of an outage will allow others in the organization (dispatch, trading, marketing, etc.) to take appropriate steps to minimize its financial impact. Operational "flexibility" also needs to be considered in design: with the addition of advanced control systems and online performance optimization tools it is possible to increase the plant's capability to meet demanding load schedules, ramp rates, etc., thereby increasing the potential for sale of additional MW-HRs without compromising plant availability. In addition, since different regions have different economic conditions, the optimal economic design is likely to differ as well.

Other implications – There will be many other implications of adopting Commercial Availability, including modifying the plant's overall goals system to include the financial impacts of other performance parameters such as efficiency, Operations and Maintenance costs, environmental impacts, fuel quality, capital costs, etc. Decision analysis tools are needed that use information scattered throughout the organization and combine the technical consequences of various courses of action with their economic impact on the corporate bottom line, giving the decision maker all the relevant information needed to make the best decision. Finally, the industry must recognize one likely result of using Commercial Availability in place of the traditional indices: the traditional measures will almost surely look different. All stakeholders, including regulatory agencies, financial institutions, insurance carriers and even the company's own executives, board members, stockholders and customers, must be included in the change process and "buy into" the new metric. Otherwise, how can we expect them to believe that, although the measures they are used to monitoring are no longer important, the company is actually delivering a lower-cost and more profitable product?

Goals Systems using Traditional metrics or Commercial Availability – It is my opinion that any company considering Commercial Availability, however it is calculated, must decide to use either the traditional metrics or CA. You can't use both in a goals system, as there will often be conflicting decision options that give different (sometimes radically different) performance results (my next posting will demonstrate these differences in a sample case study). Of course you might monitor and compare both sets of metrics before deciding which one to use, but once you decide you will have to commit to one or the other.

Calculating Commercial Availability – There are many different versions of calculating Commercial Availability (CA) and the industry has not yet settled on one definition. However, most companies measuring Commercial Availability use some version of the ratio of actual gross margin (or reduced cost) that the plant delivered relative to the total potential gross margin (or reduced cost) if the plant had been able to deliver every MW-HR that was required of it. I will use the following equation to demonstrate the concept:

CA = ((Actual Value) / (Potential Value)) X 100%

The following example is for a random sample of 10 hours during a typical year for a mid-merit generating unit. Hours 1, 2, 3 & 4 are hours of mid expected value (perhaps weekdays during the non-peak season), hours 5, 6, 7 & 8 are hours of high expected value (weekdays during the peak season) and hours 9 & 10 are hours of low expected value (perhaps weekends). These values reflect the magnitude of the "gap" between the market price of power (or the marginal cost for regulated companies) and the unit's variable cost to produce power. So in addition to seasonal variations in the gap due to demand conditions, there will also be variations due to supply conditions (e.g., many units suffering unplanned outages). We can also expect substantial volatility in the value during many of the hours in any particular season.

EXAMPLE 1 – 300 MW mid-merit fossil steam unit

Hour   Market Price   Unit Cost   Gross Margin    EAF   Gross Margin
       ($/MWh)        ($/MWh)     Potential ($)   (%)   Actual ($)

  1        40             30           3000       100       3000
  2        25             30              0       100          0
  3        35             30           1500         0          0
  4        50             30           6000       100       6000
  5        70             30          12000       100      12000
  6       110             30          24000       100      24000
  7        90             30          18000       100      18000
  8        60             30           9000       100       9000
  9        20             30              0         0          0
 10        25             30              0         0          0

Total                                 73500                72000

During these 10 hours the unit was available for 7 hours, so its EAF = (7/10) X 100 = 70%; and with one forced-out hour against six service hours during demand periods, its EFOR = (1/(1+6)) X 100 = 14.3%, indicating poor performance.
However, its CA = ($72000/$73500) X 100 = 98%, with only $1500 of lost margin, indicating great performance.

If you were the plant manager how would you want to be measured and evaluated, especially since your actual economic performance connects directly to the company’s bottom line?

If I instead assume the unit was available for all 10 hours except hour 6, where the margin is highest, then EAF = (9/10) X 100 = 90%, a good-looking result. But CA would have been
CA = (($73500 – $24000)/$73500) X 100 = 67.3%, and the lost margin would have been $24000, a very bad result.

We can easily see that CA is much more closely linked to the company's bottom-line financial goals than the traditional metrics of EAF or EFOR. Plant management must therefore find ways to maximize the chance that the unit is available when it has the most value to the company, for example by using condition monitoring equipment to give forewarning of imminent outages and by using other data analysis techniques such as programs to avoid, detect and mitigate High Impact – Low Probability (HILP) events, as discussed in my second case study posted earlier on my website.
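For readers who want to reproduce Example 1, here is a minimal Python sketch; the hourly prices, the $30/MWh unit cost and the availability pattern come from the table, and since the example has no deratings, EFOR reduces to a simple forced outage rate here.

```python
# A sketch of Example 1 above. The hourly prices, the $30/MWh variable
# cost and the availability pattern come from the table; with no
# deratings in the example, EFOR reduces to FOR here.

CAPACITY_MW = 300
UNIT_COST = 30.0  # $/MWh variable cost

# (market price $/MWh, unit available this hour?)
hours = [(40, True), (25, True), (35, False), (50, True), (70, True),
         (110, True), (90, True), (60, True), (20, False), (25, False)]

potential = actual = 0.0
service_hours = forced_out_demand = 0
available_hours = sum(1 for _, up in hours if up)

for price, up in hours:
    margin = max(price - UNIT_COST, 0.0) * CAPACITY_MW
    potential += margin
    if up:
        actual += margin
    if margin > 0:          # the market wanted the unit this hour
        if up:
            service_hours += 1
        else:
            forced_out_demand += 1

eaf = 100.0 * available_hours / len(hours)
efor = 100.0 * forced_out_demand / (forced_out_demand + service_hours)
ca = 100.0 * actual / potential

print(f"EAF = {eaf:.0f}%")                      # 70%
print(f"EFOR (demand basis) = {efor:.1f}%")     # 14.3%
print(f"Commercial Availability = {ca:.1f}%")   # 98.0%
```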

Benchmarking Commercial Availability – One of the problems of Commercial Availability is that the resulting numbers are not comparable between all similarly designed units due to the fact that each individual unit will likely be operating in different economic business environments. Therefore, how will we know if a plant manager that achieves a certain level of CA deserves a pat on the back or a kick in the pants?

We can benchmark CA using a term called Conditional Probability (CP). By dividing the year into different demand periods when there are likely to be different optimum economic levels of reliability (see my first case study posted on my website), we can develop probability distributions of peer units being able to deliver generation. By superimposing our own actual (or forecast) economics onto a CP goal, we will then be able to develop a CA benchmark (or goal) for our unit operating in our unique business environment.

Conditional Probability (CP) can be defined as:
1) When required (that's the conditional part),
2) what is the likelihood (that's the probability part) that the unit will be able to generate at its rated capacity?

These Conditional Probabilities (CPs) can be used regardless of which definition of Commercial Availability you choose.

Selecting a goal from these distributions (companies often choose the best quartile reliabilities of their units’ peers) during different demand periods is the starting point for benchmarking Commercial Availability. We can get these distributions from the NERC-GADS database for our unit’s technical peers by using 1 – EFORd (demand EFOR).

For our sample 10 hours I have chosen as reliability goals:
1) 92% for hours 1, 2, 3 & 4;
2) 98% for hours 5, 6, 7 & 8;
3) 90% for hours 9 & 10

By multiplying each hour’s Conditional Probability Goal (CPG) by that hour’s Gross Margin Potential we can calculate that hour’s Gross Margin Goal (GMG). Summing each hour’s GMG will give us the total GMG for that time period.

Note: If the unit provides other value to the company in addition to its gross margin (such as ancillary power, etc.) that value should be included.

Hour   Gross Margin    Conditional Probability   Gross Margin
       Potential ($)   Goal (%)                  Goal ($)

  1        3000            92                       2760
  2           0            92                          0
  3        1500            92                       1380
  4        6000            92                       5520
  5       12000            98                      11760
  6       24000            98                      23520
  7       18000            98                      17640
  8        9000            98                       8820
  9           0            90                          0
 10           0            90                          0

Total     73500                                    71400

For the example the Commercial Availability Goal would be
CA Goal = ($71400 / $73500) X 100 = 97.1%

For the first example, the actual CA of 98% is above the CA goal of 97.1%, but for the second example the actual CA of 67.3% is far below the goal. Of course this example uses only a few hours, several of them with high market prices. Over an actual year of 8,760 hours there will be many other high-priced hours to "make up" for the times the unit was unavailable.
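The goal calculation is equally easy to script. A sketch using the gross margin potentials and CP goals from the table above:

```python
# A sketch of the goal calculation above, using the gross margin
# potentials and Conditional Probability goals from the table.

# (gross margin potential $, CP goal)
hours = [(3000, 0.92), (0, 0.92), (1500, 0.92), (6000, 0.92),
         (12000, 0.98), (24000, 0.98), (18000, 0.98), (9000, 0.98),
         (0, 0.90), (0, 0.90)]

potential = sum(gmp for gmp, _ in hours)
goal = sum(gmp * cp for gmp, cp in hours)

print(f"Gross Margin Goal = ${goal:,.0f}")           # $71,400
print(f"CA Goal = {100.0 * goal / potential:.1f}%")  # 97.1%
```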

Setting CA goals for your power plants
1) Use some statistically valid process to identify each unit’s design and operational peers (I will be posting a case study on this subject soon).
2) Determine the peak season periods for your plants' peer units. Then develop Conditional Probability (CP) distributions (I recommend 1 – EFORd) during demand periods that are similar to yours.
3) Estimate your units' Optimum Economic CP (discussed in my first case study posting on my website) during each demand period (many companies use the top quartile or top decile from the CP distributions of their peers).
4) Using the template shown earlier, apply those CP goals first to your unit's forecasted economics (for planning purposes) and then to its backcasted economics (for actual evaluations), using whatever definition of CA you choose.

Conclusion
For many years the electric generating industry has been aware that traditional measures of plant reliability need improvement, especially for cycling and peaking technologies, but the issue usually remained of academic interest to those of us closely involved in Reliability Engineering. New times, however, are requiring new, more appropriate measures that link technical performance with financial results. The catalyst for this new interest in reliability measures is the evolving market-based business environment, brought on by our customers' need for lower electricity prices to help them meet the demands of the competitive global economy. In my opinion Commercial Availability, coupled with advanced decision support tools that accurately forecast your plant's future value of availability, will result in better decision making, leading to lower generation cost and higher profitability.

Reliability Measures are Unreliable!

Introduction

For over 40 years concerns about the definitions of traditional measures and indices of power plant reliability have been raised. While these concerns are most often focused on peaking and cycling technologies, base-load technologies are also affected. In recent years the need to develop and apply new reliability indices which more accurately reflect the market place’s value of power plant reliability has taken on a high degree of urgency. Company decision-making at all levels is affected and the old “technical” definitions of reliability are being modified to incorporate economics in order to link plant reliability with the actual cost (or profit) of electricity supply. Instead of measures that are calculated over both demand and non-demand periods, new reliability terms consider only the hours that the plant would have been dispatched plus the financial consequences to the company’s bottom line from the failure to generate during those hours.

The Historic Problem

Among the traditional measures of plant reliability in many countries have been the Equivalent Availability Factor (EAF), the Forced Outage Factor (FOF) and the Equivalent Forced Outage Rate (EFOR). In other countries the Unit Capability Factor (UCF) and the Unplanned Capability Loss Factor (UCLF) are used. The measures that are "factors" (EAF, FOF, UCF, UCLF, etc.) use as their denominator the entire time period being considered (typically one month or one year), without regard to whether or not the unit was required to generate. Therefore, for non-baseloaded units these factors can lose their relevance (and the more cyclic the demand, the greater the effect). For example, a simple-cycle gas turbine used exclusively for meeting peak demand may be required to generate only a few hundred hours a year. If it were unavailable during 25% of those hours it would still have a high EAF and UCF and a low FOF and UCLF. Suppose it was required to generate 100 hours per year but experienced forced outages during 25 of those demand hours (and no other outages over the 8760 hours in the year): it would still have an EAF and UCF of 99.71% ((8760 – 25)/8760 x 100) and a FOF and UCLF of 0.29% (25/8760 x 100). Those numbers might look good on paper, but the reality is that the unit could only produce 75% of the power required of it. So these factors don't come close to describing the unit's ability to produce its rated capacity when demanded. Of course, for true baseloaded units, such as most nuclear units, which generate every hour they are available, or even for gas turbines in countries where they are nearly base-loaded, these factors come much closer to depicting the unit's "real" reliability.

The terms Forced Outage Rate (FOR) and Equivalent Forced Outage Rate (EFOR) were introduced in an attempt to resolve these difficulties. (FOR and EFOR differ only in that EFOR also considers the "equivalent" impact of forced deratings in addition to the full forced outages that are all FOR considers. Since this simple example involves only full forced outages, I will examine only the FOR.)

The equation for FOR from the IEEE -762 Standard is:

FOR = ((Forced Outage Hours) / (Forced Outage Hours + Service Hours)) X 100

For the example given above the actual service hours are 75 so that the FOR would be:

FOR = ((25) / (25 + 75)) X 100 = 25%

The complement of the FOR might be considered to be the unit’s reliability so that

Reliability = 100% – 25% = 75%

So it appears that FOR (and EFOR when forced deratings are present) are good measures of a unit’s reliability.

However, in actual practice it is extremely unlikely that all of the forced outage hours a peaking unit experiences during the course of a year occur during its demand period. (In our example, all 25 Forced Outage Hours were assumed to occur during the 100 demand hours.) Most of the time, some (if not most) of the hours required to restore the unit to service occur during non-demand periods. In this example the unit might have experienced five forced outages totaling 25 hours during its demand period (out of 100 total demand hours), but the time to restore the unit to full capability would likely average far more than five hours per event. Previous studies suggest that the average restoration time for a gas turbine forced outage is on the order of 24 hours. Using 24 hours as the average downtime, the total forced outage hours reported would be 5 X 24 = 120 hours. Now the FOR would be

FOR = ((120) / (120 + 75)) X 100 = 61.5%, and the unit's reliability = 100 – 61.5 = 38.5%. Both are obviously unrealistic for decisions that require the expected reliability of units (if gas turbine reliability were really only 38%, I would never fly on an airplane unless it had dozens of redundant jet engines strapped onto its wings – would you?). And yet these values are very close to actual FOR and EFOR statistics being reported for peaking generators!
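A small Python sketch of this distortion, using the same 100 demand hours, 25 lost demand hours, and the assumed 24-hour average restoration time:

```python
# A sketch of the distortion described above: the same 25 lost demand
# hours give very different FOR values depending on whether restoration
# hours that fall outside the demand period are counted.

def forced_outage_rate(foh: float, service_hours: float) -> float:
    """IEEE-762: FOR = FOH / (FOH + Service Hours) x 100."""
    return 100.0 * foh / (foh + service_hours)

SERVICE_HOURS = 75  # 100 demand hours minus 25 lost to forced outages

# Idealized case: all forced outage hours fall inside the demand period.
print(f"FOR = {forced_outage_rate(25, SERVICE_HOURS):.1f}%")   # 25.0%

# Realistic case: 5 events x ~24 h average restoration = 120 reported FOH.
print(f"FOR = {forced_outage_rate(120, SERVICE_HOURS):.1f}%")  # 61.5%
```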

Does that mean that FOR and EFOR should not be used? Absolutely not! They are in fact reasonable indicators for baseload or near baseload types of generating units. However, for cycling or peaking units they are inadequate and new metrics were needed.

A few years ago a modification of EFOR was introduced by the IEEE to attempt to resolve this problem. The term Equivalent Forced Outage Rate – Demand, EFOR(d), was developed and incorporated into the IEEE-762 Standard (EFORd had been used by some companies in North America for many years prior to its incorporation into IEEE-762). EFOR(d) uses only that portion of a unit's forced outages (or deratings) that occurred during demand periods. As we saw in the earlier example, that would resolve the issue nicely. However, demand periods are not currently part of standard reporting systems, so an approximation technique was devised using a Markov approach. Although not perfect, this technique results in a reasonably accurate calculation of EFOR(d).

The New Problem

As the industry moves into a more competitive, market-based business environment, reliability indicators must reflect a direct linkage between a plant's reliability and the corporate or portfolio cost and/or profit of electricity. They should incorporate the large (often very large) variation in the value of a unit's reliability. For example, in previous efforts to quantify the value of availability improvement for an individual unit within a large company, I found that even for efficient coal-fired units there can be a factor of 100 or more between the unit's value during a low demand period and its value during a high demand period when other generators are having unexpected forced outages. Even nuclear units show significant variability in their value at different times. This variation will inevitably result in different economically optimal decisions at different times.

The following example was developed from actual data at one large generating company using value-based availability to measure its plants’ performance:

On a Tuesday morning a small boiler tube leak was detected at one of its large efficient coal-fired units that ran close to baseload. Two options were identified:

1. Remove the unit from service immediately and repair the tube as quickly as possible so as to minimize the downtime of the unit.
2. Continue to operate the unit until the weekend when the demand is lower and the cost impact of the unit’s unavailability is less per hour. However, the unit would be exposed to a risk of a longer duration outage due to possible additional tube damage.

When the plant staff evaluated these options for an event that occurred during the non-peak season, it was found that the cost was minimized by choosing Option 1. This was because the differential between the weekday and weekend-day cost per hour of this unit’s unavailability at this time was not enough to offset the likelihood of a longer outage if the unit was operated until the weekend.

Total Cost – Non-Peak Season
1) Option 1 – $115,000
2) Option 2 – $184,000

This choice also had the effect of minimizing the forced outage hours resulting from this event.

However, when the exact same event occurred during the company's peak season, the costs were:

Total Cost- Peak Season
1) Option 1 – $354,000
2) Option 2 – $265,000

In this case, Option 2 (waiting until the weekend) is clearly the best economic choice, even though it had the effect of reducing the plant’s availability beyond that of Option 1.
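A minimal sketch of this season-dependent choice, using only the total cost figures reported above:

```python
# A minimal sketch of the season-dependent choice above, using only the
# total cost figures reported in the text.

costs = {
    "non-peak season": {"Option 1 (repair now)": 115_000,
                        "Option 2 (wait for weekend)": 184_000},
    "peak season":     {"Option 1 (repair now)": 354_000,
                        "Option 2 (wait for weekend)": 265_000},
}

for season, options in costs.items():
    best = min(options, key=options.get)  # lowest expected total cost wins
    print(f"{season}: {best}, ${options[best]:,}")
```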

As we can see, often the objective of maximizing a unit’s “technical” performance (in this case availability) can bring it into direct conflict with the company’s goal of minimizing cost and/or maximizing profitability.

This example is only one of hundreds of decisions that a power plant staff must make every year, and it illustrates the vital importance of developing performance metrics that establish direct linkages between the plant's goals and the company's overall financial objectives. In this way we encourage the plant staff to make economically optimal decisions from the corporate perspective, not just ones that maximize their local technical goals.

The following instance of local decision making exemplifies this situation: I was escorting the Minister of Energy from India and his staff on a visit to a new, large, efficient and reliable coal-fired plant. The plant staff made a presentation describing the first time they opened unit 2's turbine for the required manufacturer's inspection. The complete turbine inspection was finished in only 17 days! After the presentation I asked a member of the plant staff how they accomplished such an amazing feat. He replied that he personally had spent most of the preceding year planning the outage, including evaluating the best use of lay-down space, overhead cranes, etc. and developing contingency plans for virtually every eventuality. In addition, the plant staff worked two twelve-hour shifts every day; numerous turbine manufacturer field representatives were on hand; all parts that might be needed were staged; and, perhaps most importantly, no unexpected equipment problems were found when the turbine was opened.

He went on to say that the plant management and staff were substantially rewarded with bonuses, promotions, and system-wide recognition as a prime example of the type of performance others should try to emulate. However, when I asked him how the unit subsequently performed, he said that it was not required to generate for over two months! The plant had been designed to burn only high-cost, low-sulfur coal, resulting in a dispatch cost so high that the unit was seldom economic to run except during the peak season. A quick calculation indicated that the plant had spent an extra $1 million to achieve this availability improvement during a time (the non-peak season) when the plant's availability had essentially no value to the company. And yet the company's existing goals system encouraged and rewarded everyone in the production organization (including the production executives) for achieving this clearly uneconomic result.

None of the traditional statistics such as EAF, UCF, FOF, UCLF, FOR, EFOR or even EFOR(d) adequately make the linkage between technical and economic goals. However, some companies have begun using a new measurement technique called Commercial Availability that promises to do exactly that!

I will conclude this topic in my next post by examining the concept of Commercial Availability and begin to consider some of the practical implications I have uncovered when using this statistic.

Avoid, Detect & Mitigate HILPS

Is Your Power Plant Headed for a HILP?

How to Avoid, Detect and Mitigate High Impact – Low Probability (HILP) Events

Bob Richwine

HILP events are those events which do not happen often but which, when they do occur, can cause extended unplanned outages. HILPs include catastrophic events such as turbine water induction, boiler explosions, major fires, generator winding failures and many, many other types of events. I have heard HILPs referred to as "first time events," but while a specific type of HILP event might not have occurred at your plant previously, it is very likely to have occurred at another, similar plant. Some companies have established successful HILP reduction programs using data from the North American Electric Reliability Corporation's (NERC) Generating Availability Data System (GADS), which contains event data from 1982 onward for over 7,500 units of all technologies. Recently I, along with Mike Curley, formerly with NERC's GADS Services, and Scott Stallard, Vice President of Black and Veatch, wrote a technical paper describing in detail how to establish a process to benchmark your plant's unreliability due to HILPs and then create a HILP reduction program to identify ways to avoid, detect and mitigate HILP events. I will be happy to provide a copy upon request.

A generating unit’s Forced Outage Rate (FOR) and Equivalent Forced Outage Rate (EFOR) can be thought of as consisting of two types of events:

  1. Events that are expected (they have previously happened with some degree of regularity) and cause the unit to have short or medium outages or deratings.
  2. Events that are unexpected and cause extended duration outages or deratings.

By separating these event types and calculating and benchmarking their impacts on your plant’s FOR and EFOR you can gain a new perspective for prioritizing your problem solution identification efforts.

 

[Table g1: historic annual FORs for Units A and B]

 

As an example, consider the two units' historic annual FORs shown in the table above (for simplicity this example considers only FOR, but EFOR could also be used). While both units have averaged a 10% FOR, the types of events making up their FORs are very different.

Reliability data for Unit A shows many short-to-medium duration events, so the focus for improvement should be on reducing the frequency or duration of these types of events.

Reliability data for Unit B, however, tells a much different story. Unit B had far fewer short-to-medium duration events (~60% fewer), so most of the time its FOR averaged only 4%. However, it experienced one major forced outage event (a HILP) that kept the unit out of service for 3 weeks. When both types of events are included in the FOR calculation, the annual average FOR = 10%. It should be clear from this reliability data that the failure modes of the two units are very different, and therefore your investigation focus should probably differ as well. For Unit B it might be best to implement a formal HILP reduction program using the steps described in our technical paper (shown below):

 

[Figure g2: steps in a formal HILP reduction program]

 

It is very important to initially select a peer group that balances two needs: the units in the peer group should closely match your unit's design and operating characteristics that most strongly influence its reliability, while the peer group must remain large enough for statistical validity (we normally require a population of at least 30 units). This topic will be covered in future case studies that I will publish on this website.

After calculating the FORs for your unit and the units in the peer group due to all event types, you will need to decide on the minimum duration that defines a HILP. Since there is no standard industry definition, you are free to select any duration you wish. Keep in mind that the longer the duration you select, the fewer the events that will meet it. In fact, you might consider doing the activity in stages, starting with a long minimum duration (say, 3 months) and reducing it to 1 month and perhaps finally to 1 or 2 weeks.

With your HILP duration criterion set, you can then use NERC's pc-GAR-MT computer program to determine the peer group's number of full forced outage hours for events with durations greater than your HILP criterion and calculate the FOR due to HILPs. You can then benchmark your unit's FOR(HILP) against the peer group's FOR(HILP) distribution to determine how large your HILP problem is.

As a simple example, I selected a peer group of subcritical fossil steam units that are base-loaded and coal-fired. When these criteria are input into pc-GAR for a five-year period, 529 units were found (had I not selected coal as the primary fuel, there would have been 592 units, and had I also not selected base-loaded as a criterion, 1,044 units). Running pc-GAR for the 529 units over a recent 5-year period gave 2,640.08 unit-years of data with a mean Forced Outage Rate of 4.61%. The mean service hours were 7,442 hours per unit-year and the total Full Forced Outage Hours were 360 hours per unit-year.

Next I ran NERC's pc-GAR-MT software for the peer units previously identified and found 22,644 full forced outage events, of which 21,849 had outage durations of less than 1 week (168 hrs). The remaining 795 events had an average Time to Repair (TTR) of 398.18 hours per HILP event (with a HILP defined as longer than 1 week). This results in a 1.59% FOR due to HILPs, or about 1/3 of the group's total mean FOR, indicating that HILPs are a very significant part of this peer group's average unreliability. If this peer group were similar to your unit, you could then calculate your unit's FOR(HILP) and benchmark it against this distribution of its peers.
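The peer-group arithmetic can be reproduced from the reported totals; a short Python sketch using the figures quoted above from pc-GAR and pc-GAR-MT:

```python
# A sketch of the peer-group FOR(HILP) arithmetic above, using the
# totals reported from pc-GAR and pc-GAR-MT.

UNIT_YEARS = 2640.08
SERVICE_HRS_PER_UY = 7442.0   # mean service hours per unit-year
HILP_EVENTS = 795
MEAN_TTR = 398.18             # average hours per HILP event (> 1 week)

hilp_foh_per_uy = HILP_EVENTS * MEAN_TTR / UNIT_YEARS   # ~119.9 h/unit-yr
for_hilp = 100.0 * hilp_foh_per_uy / (hilp_foh_per_uy + SERVICE_HRS_PER_UY)

print(f"FOR(HILP) = {for_hilp:.2f}%")  # ~1.59%, about 1/3 of the 4.61% mean FOR
```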

The table below gives the make-up of FOR(HILP) by system. While I chose to use NERC's pre-defined system groupings of cause codes, you have the option to group cause codes in any way you choose. For example, you might want to consider only boiler tube leaks, or even just superheater tube leaks.

As we can see for this example, the Boiler and Turbine systems are the leading contributors to FOR(HILP), followed by the Generator, Balance of Plant and Other (External, Performance and Personnel Errors) categories. Drilling down to individual cause codes or groups of codes can further define the problem areas of most significance for HILPs.

 

[Table g3: FOR(HILP) contributions by system]

 

The following is a summary of ways to assess your unit’s susceptibility to various HILP events. Details can be found in the full technical paper, provided upon request.

 

[Table g4: methods for assessing a unit's susceptibility to HILP events]

 

You should try to identify a wide variety of options to reduce HILPs by:

  1. Preventing the HILP event
  2. Detecting the HILP event or
  3. Mitigating the impact of the HILP

After identifying improvement options, you should gather sufficient information to forecast the impact of each option. Then an economic analysis (a simple sketch follows this list) should be performed to:

  1. Justify each option (is it cost effective? Yes or no)
  2. Time each justified option (is now the best time to implement?)
  3. Prioritize each option (given all your fleet’s needs, will this project be the best use of your company’s resources?)
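A hedged sketch of the "justify" and "prioritize" steps; the candidate options and dollar figures below are hypothetical, and the timing step would require outage schedules and market forecasts not modeled here.

```python
# A hedged sketch of the "justify" and "prioritize" steps above. The
# candidate options and dollar figures are hypothetical; the "time it"
# step would need outage schedules and market forecasts not modeled here.

options = [
    # (option name, forecast benefit $, cost $)
    ("Turbine water-induction protection", 900_000, 300_000),
    ("Generator winding monitoring",       450_000, 250_000),
    ("Boiler tube NDE program",            700_000, 500_000),
]

# Step 1: justify - keep only options whose forecast benefit exceeds cost.
justified = [(name, benefit / cost) for name, benefit, cost in options
             if benefit > cost]

# Step 3: prioritize - rank the justified options by benefit/cost ratio.
justified.sort(key=lambda item: item[1], reverse=True)

for name, ratio in justified:
    print(f"{name}: B/C = {ratio:.1f}")
```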

Details of advanced ways to justify, time and prioritize HILP reduction options will be described in a future case study on the Evaluation Phase of a Performance Improvement Program.

The final step is to monitor the results of each implemented HILP improvement option and compare to the expected results. You can also, over time, compare your fleet’s FOR trend due to HILPs. Then use the results from successful and unsuccessful project implementations to improve the process.

Remember: HILPS Happen!

Keep in mind that no power plant is immune to HILPs. Your plant may be just recovering from a HILP, or it may be about to experience one. Certainly the plant staff must respond to the "problems of the day," but top-performing companies will find ways to devote some resources to seeking cost-effective ways to avoid, detect or mitigate HILPs. If HILP benchmarking shows that you currently have a HILP problem, consider starting a formal HILP reduction program soon. If you don't have a HILP problem right now, make plans to ensure your fleet stays ahead of the game.

Addressing HILP causes and seeking solution options “before a HILP happens” is a proven way to move from a fire-fighting to a pro-active style of management, one of the key characteristics of top-performing generating companies.

Optimizing Economics

Power (15-Sep-04)

Maximizing availability may not optimize plant economics

 

Conventional wisdom says you’ve got to spend more O&M dollars to improve your plant’s availability and reliability. But today’s plant managers should focus on optimizing the plant’s overall economics and avoid being sidetracked by jockeying performance stats. The approach: Benchmark your plant, and then optimize its economics using the latest in statistical methods.

When setting power plant performance goals, sometimes you can be your own worst enemy. For example, if your stretch goal for this year were to increase your equivalent availability factor (EAF) by 1%, then being the good plant operator that you are, you would find a way to make it happen. Power plants exist to produce electricity, but their business is to make as much money as possible. Unfortunately, attaching incentives to indirect measures of performance can often lead to suboptimal plant economics. In other words, achieving your stretch goal may actually negatively affect your balance sheet.

The industry’s new math

In regulated environments, the general equation for electricity pricing is (or was):

Price = Cost + Profit

Here, cost represents actual outlays for construction, operation and maintenance (O&M), and fuel that were deemed prudent by the regulator, which also sets the profit and therefore the price. Over time, cost-plus regimes fostered a risk-avoidance mindset. If successful, a high-risk investment could only yield a maximum allowable profit margin, while any unsuccessful investments that were judged imprudent were disallowed. Pavlov could not have trained his dogs any better: Taking risks was not a good corporate strategy because there were limited rewards for doing so.

When electricity markets began to be deregulated/liberalized, the equation governing utilities competing in them changed to:

Profit = (Market) Price – Cost

Here, profit is the dependent variable, price is determined by supply and demand, and cost represents the net impact on a utility’s bottom line of all its decisions, good and bad. Risk avoidance became a less effective strategy because the high returns from a series of successful high-risk investments now could be used to offset the losses from unsuccessful ventures.

Some utilities figured out this new math earlier than others; they evolved from risk-averse organizations into businesses in which decision-makers identify, quantify, and manage risk (safety excepted). As part of that group, plant managers now must choose from a range of best-practice solutions for which cost is an important selection criterion. Under competition, the best solution is the one that is economically superior or most cost-effective, rather than the one with the least technical risk.

Room for improvement

Even a cursory look at available U.S. power plant cost data reveals a wide variation between the O&M costs of top-performing plants and those of the rest of the pack. Those data show that the lowest-cost plants spend only half as much on O&M as the average plant, and that the highest-cost plants spend 50% more than the average. This spread persists even when only units with good technical performance are considered. There is therefore substantial opportunity to significantly reduce costs at many U.S. plants, and some utilities have already seized it. A recent U.S. industry study showed that seven nuclear plants with a total capacity of 8,300 MW moved from the category of “not cost-competitive” to “cost-competitive” by reducing their O&M costs.

Improving the overall economics of a power plant requires a comprehensive understanding of the mutual interaction between O&M spending and plant performance. In addition to the obvious consequence of higher O&M costs, poor performance also leads to lost revenue opportunities and higher-than-necessary generation costs. For the purpose of the following discussion, O&M costs are defined as outlays for operations and maintenance, including refurbishment capital, but not fuel.

This article argues that plant managers shouldn’t strive simply to minimize their plant’s O&M costs or to maximize its performance (measured by availability and reliability metrics). Rather, their goal should be to minimize the plant’s total cost, O&M costs plus the cost of unavailability, by optimizing its O&M spending. Achieving that goal produces maximum profits, the holy grail of any business.

New math, new frontier

A new statistical technique called Frontier Analysis (FA) makes it possible to do two things: estimate the point at which a plant’s total cost is minimized, and set aggressive yet achievable cost goals for the plant. Although the following example does not convey the rigorous statistics required for developing a true FA, it is nonetheless useful for demonstration purposes.

When EAF is plotted against O&M costs for a group of properly benchmarked plants, the result is often a wide scatter of data points (Figure 1). Cost data can be obtained from the Federal Energy Regulatory Commission (FERC), EUCG, or other sources; availability data are reported to the North American Electric Reliability Council through its Generating Availability Data System (GADS).

1. Frontier Analysis. The “frontier curves” pass through the benchmarked data points and visually show the best quartile and best decile performers. The plant being benchmarked must also determine its proactive and reactive maintenance costs as part of the analysis. Source: Robert R. Richwine


Naturally, when benchmarking cost and availability it is vital to select as appropriate a peer group as possible. Studies done by the author and others reveal that, for benchmarking purposes, a plant’s design and operational factors are often more statistically significant than its size and fuel type. For example, for fossil steam units it is far more important to know whether a unit is supercritical or subcritical, or operated in baseload or cycling mode. Whatever comparisons are made, cost data may have to be normalized to account for differences in labor rates and productivity, material costs, local tax rates, and environmental constraints. When benchmarking against overseas plants, exchange rates (and sometimes government subsidies) should also be taken into account.

Referring again to Figure 1, note that the two “frontier curves” pass through the data points of the benchmarked plants that achieve the highest availability at each level of spending. Typically, these plants have incorporated best-practice O&M techniques into their day-to-day management decision-making and are getting superior results. The best quartile and best decile frontiers, which are often used to establish cost goals, are plotted so that 25% and 10% of the data points, respectively, lie below them.
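To show the mechanics, here is a minimal Python sketch of how such frontiers might be approximated: bin the plants by EAF and take a low quantile of O&M cost within each bin. A rigorous FA uses formal statistical estimation rather than simple binning, and the data generated below are entirely synthetic, with coefficients chosen only for illustration.

    # Minimal sketch: approximate best quartile and best decile frontiers
    # by binning EAF and taking low cost quantiles in each bin.
    # The data are synthetic and the coefficients are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    eaf = rng.uniform(70.0, 95.0, 200)            # EAF, percent
    om_cost = (20.0 + 0.002 * np.exp(eaf / 10.0)
               + rng.gamma(2.0, 5.0, 200))        # O&M cost, $/kW-yr

    bins = np.linspace(70.0, 95.0, 6)             # EAF bin edges
    idx = np.digitize(eaf, bins)

    for q, label in [(0.25, "quartile"), (0.10, "decile")]:
        frontier = [(bins[i - 1], float(np.quantile(om_cost[idx == i], q)))
                    for i in range(1, len(bins)) if np.any(idx == i)]
        print(f"best {label} frontier (EAF bin start, cost):",
              [(round(b, 1), round(c, 1)) for b, c in frontier])

With real benchmark data, the same idea applies: the frontier traces the lowest-cost plants at each availability level.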

These best quartile and best decile frontier curves reflect the total of proactive costs (preventive maintenance) and reactive costs (corrective maintenance). Typically, low proactive spending results in low availability and high reactive costs, whereas high proactive spending (if the preventive maintenance efforts are effective) leads to high availability. Of course, as a plant’s EAF approaches 100%, its reactive costs move toward zero (no unavailability means few breakdowns), but its proactive costs become exponentially higher.
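The qualitative shape just described can be captured with a simple stylized model. In the Python sketch below, the functional forms and coefficients are assumptions chosen only to reproduce that shape; they are not fitted to any real plant data.

    # Stylized proactive/reactive cost model (illustrative assumptions only).
    import numpy as np

    def proactive_cost(eaf):
        # Rises steeply as EAF approaches 100% (assumed exponential form).
        return 2.0 + 0.5 * np.exp((eaf - 70.0) / 6.0)   # $/kW-yr

    def reactive_cost(eaf):
        # Falls toward zero as unavailability disappears (assumed linear form).
        return 1.2 * (100.0 - eaf)                      # $/kW-yr

    for eaf in (75.0, 80.0, 85.0, 90.0, 95.0):
        pro, rea = proactive_cost(eaf), reactive_cost(eaf)
        print(f"EAF {eaf:.0f}%: proactive {pro:5.1f}, "
              f"reactive {rea:5.1f}, total {pro + rea:5.1f}")

Note that the total is U-shaped: at low EAF reactive costs dominate, and near 100% EAF proactive costs dominate. That is why the frontier O&M cost curve in Figure 2 has a lowest point.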

Keep right

Also note in Figure 1 that plants lying to the left of each frontier curve achieve lower availability for the same O&M costs as the frontier plants; likewise, plants with the same availability but higher O&M costs are not achieving their full potential. Both groups are said to be “in the interior” rather than “on the frontier” because they operate inefficiently. Potentially, such plants could decrease their total costs without decreasing their EAF, increase their EAF without increasing their costs, or achieve some combination of the two.

Studying the best O&M practices used by plants “on the frontier” provides valuable insights into methods that could be employed to move a plant there. However, any attempt to put a plant on the frontier requires locating its optimum economic performance point—the point of diminishing returns at which extra expenditures do not generate equal value.

Optimizing economic performance

Locating a plant’s optimum economic performance point requires superimposing the value of an increase in EAF (or the cost of a decrease in EAF) on Figure 1. Whereas units of similar design have similar frontier costs, each individual unit has a unique value (or cost) that depends on the economic conditions of the system in which it operates.

Glancing at Figure 2, you might conclude that the optimum economic performance point should be at the lowest point of the total O&M frontier cost curve (for quartile data, the yellow line). However, one must also consider another cost: the cost of unavailability. For a generating unit in a large regulated system, the cost of unavailability (typically a straight line, like the green one) can be estimated by calculating replacement energy costs. For a merchant plant in a competitive environment, it represents lost opportunities, lost profitability, or both. For a plant with a power purchase agreement, it is determined by the terms and conditions of the contract.
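For a rough sense of scale, consider a hypothetical 500-MW unit whose lost generation is worth an assumed net margin of $10/MWh (both numbers are purely illustrative). One percentage point of EAF then corresponds to:

0.01 × 500 MW × 8,760 h/yr ≈ 43,800 MWh/yr
43,800 MWh/yr × $10/MWh ≈ $438,000/yr per EAF point

A valuation of this kind, made for the unit’s own economic circumstances, is what sets the slope of the green line.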

2. Optimum economic availability. The costs related to a plant’s unavailability can be added to its total O&M costs to produce a curve that enables determination of its optimum economic availability. Source: Robert R. Richwine


Adding the frontier O&M cost curve (the yellow line) to the cost of unavailability (the green line) yields the total cost curve (the blue line). The unit’s economic goal should be to operate at the bottom of this curve. By dropping a vertical line from this point until it intersects the frontier O&M cost curve, one can determine the minimum cost necessary to achieve this goal, called the total O&M cost target.

The EAF at the lowest point of a plant’s total cost curve is known as the plant’s optimum economic availability (OEA). It is also the point of diminishing returns: the point at which an incremental dollar of O&M spending (if spent as efficiently as a top quartile or top decile plant would spend it) buys exactly one dollar of reduced unavailability cost.
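Numerically, finding the OEA is a one-dimensional minimization. The Python sketch below reuses the stylized frontier model from the earlier sketch, adds an assumed linear cost of unavailability, and locates the minimum on a grid; every curve and coefficient is an illustrative assumption, not the article’s actual data.

    # Minimal sketch: locate the optimum economic availability (OEA).
    # Reuses the stylized curves from above; all coefficients are assumptions.
    import numpy as np

    def frontier_om_cost(eaf):
        proactive = 2.0 + 0.5 * np.exp((eaf - 70.0) / 6.0)
        reactive = 1.2 * (100.0 - eaf)
        return proactive + reactive                    # $/kW-yr

    def unavailability_cost(eaf, value_per_point=3.0):
        # Assumed: each lost EAF point costs value_per_point $/kW-yr.
        return value_per_point * (100.0 - eaf)

    eaf_grid = np.linspace(70.0, 99.0, 581)
    total = frontier_om_cost(eaf_grid) + unavailability_cost(eaf_grid)
    i = int(np.argmin(total))
    print(f"OEA = {eaf_grid[i]:.1f}% EAF")
    print(f"total cost at OEA = {total[i]:.1f} $/kW-yr")
    print(f"total O&M cost target = {frontier_om_cost(eaf_grid[i]):.1f} $/kW-yr")

Dropping the vertical line in the figure corresponds to the last print statement: evaluating the frontier O&M cost at the OEA gives the total O&M cost target.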

Cost targets

If we now extend the vertical line further down until it intersects the proactive and reactive cost curves (shown in Figure 3), we can determine the optimal relationship between these two costs for our example plant.

3. Optimal costs. After a plant’s optimum economic availability is determined, optimal cost targets for proactive and reactive maintenance can be calculated. Source: Robert R. Richwine


Note that a plant’s OEA is a dynamic goal that changes as a function of its technical, operational, and economic environment. Figure 4 illustrates what might happen if the plant becomes less valuable (that is, if its cost of unavailability decreases). The total cost curve shifts, and so does the OEA. In other words, you may no longer be able to justify spending to maintain as high a level of EAF as in the past.
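Continuing the stylized sketch, the effect shown in Figure 4 can be reproduced by lowering the assumed value of each EAF point and re-solving for the OEA (all numbers remain illustrative):

    # Continuing the stylized sketch: the OEA falls as the unit's value falls.
    import numpy as np

    def total_cost(eaf, value_per_point):
        frontier = (2.0 + 0.5 * np.exp((eaf - 70.0) / 6.0)
                    + 1.2 * (100.0 - eaf))
        return frontier + value_per_point * (100.0 - eaf)

    eaf_grid = np.linspace(70.0, 99.0, 581)
    for value in (3.0, 1.5, 0.5):   # assumed $/kW-yr per lost EAF point
        oea = eaf_grid[np.argmin(total_cost(eaf_grid, value))]
        print(f"value = {value:.1f} $/kW-yr per point -> OEA = {oea:.1f}% EAF")

As the value per point drops, the OEA drops with it, and so do the associated proactive and reactive maintenance cost targets.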

4. A moving target. The optimum economic availability (OEA) is not a constant. If the cost of unavailability is reduced, so are the OEA and the reactive and proactive maintenance cost targets. Any complete analysis must consider a range of unavailability costs. Source: Robert R. Richwine


And therein lies perhaps the biggest challenge to implementing a plant performance system driven by pure economics: How do you convince senior management that a lower EAF can mean better plant economics?

Postscript:

Since the above article was published by Power magazine in 2004, I have had the opportunity to apply the principles it describes at several U.S. and international companies. As a result, I have gained several practical insights that fall into two primary areas: 1) cost issues and 2) availability/reliability issues.
1) Cost issues – The biggest cost issue is gaining access to a reliable, consistent cost database, which has become especially difficult in an increasingly competitive business environment (except in the nuclear industry). There are private cost databases that some have used, but the only public-domain cost database I am aware of is that of the U.S. Federal Energy Regulatory Commission (FERC), to which only regulated U.S. generating companies report their costs using FERC Form 1. Even there, substantial inconsistencies exist in the cost reporting, along with regional differences in labor rates and equipment costs. Furthermore, not all costs are spent on the plant’s availability and reliability; an increasing percentage goes toward efficiency and environmental requirements. The problem is compounded when international companies seek to use the U.S. cost database, given even greater labor rate and equipment cost differences, often radically different environmental regulations, and monetary exchange rate issues.
2) Availability/Reliability issues – The diagrams in the article imply that annual spending results in a constant availability over the entire year. Of course, planned outages lead directly to seasonal differences in availability, but even when we convert the X axis from availability to reliability, usually expressed as 1 – EFOR (where EFOR is the Equivalent Forced Outage Rate), there are normally substantial seasonal differences in EFOR. Studies have shown that, for most companies, plant reliability is better during peak seasons, when the plants are most valuable (this will be the topic of a future case study). Many companies have established higher reliability goals during peak times to reflect that increased value, and have in fact achieved those goals. I am personally convinced that if a plant’s goals actually reflect its value to its company, plant management will find a way to achieve those goals. Unfortunately, in my opinion, the number one problem worldwide preventing a plant from reaching its potential performance is the disconnect between its goals and the company’s goals. In future case studies I will describe the work I and others have undertaken that has led to this conclusion.

Although there are difficulties in the practical application of the concept of Optimum Economic Availability/Reliability (OEA) using statistical frontier analysis, it remains in my mind a concept that should be well understood and incorporated into every company’s thinking when establishing goals for its generating plants. Some companies that have applied these concepts have reported to me that doing so has been “transformative” in establishing a better goal structure and clearer expectations among their executive management, their generating plants, their trading/marketing organization, and their other stakeholders. So even in light of the practical limitations I have described, I would encourage you to explore the OEA concepts and techniques and consider applying them at your company. I would also encourage you to post any questions or comments you may have, so as to open a dialogue on this and related subjects, or to contact me directly.