Reliability index

A reliability index is an attempt to quantitatively assess the reliability of a system using a single numerical value.[1] The set of reliability indices varies depending on the field of engineering, and multiple different indices may be used to characterize a single system. In the simple case of an object that cannot be used or repaired once it fails, a useful index is the mean time to failure,[2] representing the expectation of the object's service lifetime. Another cross-disciplinary index is the forced outage rate (FOR), the probability that a particular type of device is out of order. Reliability indices are extensively used in modern electricity regulation.[3]

Power distribution networks

For power distribution networks there exists a "bewildering range of reliability indices" that quantify either the duration or the frequency of power interruptions; some try to combine both in a single number, a "nearly impossible task".[4] Popular indices are typically customer-oriented.[5] Some come in pairs, where "System" (S) in the name indicates an average across all customers and "Customer" (C) indicates an average across only the affected customers (those who had at least one interruption).[6] All indices are computed over a defined period, usually a year.
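As an illustration of the customer-weighted averaging behind the common distribution indices, the sketch below computes SAIFI, SAIDI, CAIDI, and CAIFI (definitions as standardized by the IEEE working group discussed later) from a hypothetical set of outage records; the data and variable names are invented for the example:

```python
# Sketch of common distribution reliability indices.
# Outage records and customer counts below are hypothetical.

# Each outage record: (customers_interrupted, duration_hours)
outages = [(500, 2.0), (120, 0.5), (500, 1.0)]
total_customers = 10_000      # all customers served by the system
affected_customers = 620      # distinct customers with >= 1 interruption

total_interruptions = sum(n for n, _ in outages)
total_customer_hours = sum(n * d for n, d in outages)

# "System" (S) indices average over all customers served
saifi = total_interruptions / total_customers    # interruptions per customer per year
saidi = total_customer_hours / total_customers   # outage hours per customer per year

# CAIDI: average duration of an experienced interruption
caidi = saidi / saifi                            # hours per interruption

# "Customer" (C) index averages only over the affected customers
caifi = total_interruptions / affected_customers
```

Note how the same raw outage data yield different numbers depending on whether the denominator is all customers (SAIFI) or only the affected ones (CAIFI), which is exactly the S/C distinction described above.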
History

Electric utilities came into existence in the late 19th century and since their inception have had to respond to problems in their distribution systems. Primitive means were used at first: the utility operator would receive phone calls from customers who had lost power, put pins into a wall map at their locations, and try to guess the fault location from the clustering of the pins. The accounting for outages was purely internal, and for years there was no attempt to standardize it (in the US, until the mid-1940s). In 1947, a joint study by the Edison Electric Institute and IEEE (at the time still AIEE) included a section on fault rates for overhead distribution lines; the results were summarized by Westinghouse Electric in 1959 in the detailed Electric Utility Engineering Reference Book: Distribution Systems.[3] In the US, interest in reliability assessments of generation, transmission, substations, and distribution picked up after the Northeast blackout of 1965. A work by Capra et al.[9] in 1969 suggested designing systems to standardized levels of reliability and proposed a metric similar to the modern SAIFI.[3] SAIFI, SAIDI, CAIDI, ASIFI, and AIDI came into widespread use in the 1970s and were originally computed from data on paper outage tickets; the computerized outage management systems (OMS) were used primarily to replace the "pushpin" method of tracking outages. IEEE started an effort to standardize the indices through its Power Engineering Society.
The working group, operating under different names (Working Group on Performance Records for Optimizing System Design, Working Group on Distribution Reliability, Distribution Reliability Working Group, standards IEEE P1366, IEEE P1782), produced reports that defined most of the modern indices in use.[10] Notably, SAIDI, SAIFI, CAIDI, CAIFI, ASAI, and ALII were defined in a Guide For Reliability Measurement and Data Collection (1971).[11][12] In 1981 the electrical utilities funded an effort at the Electric Power Research Institute to develop a computer program to predict the reliability indices (EPRI itself was created in response to the outage of 1965). In the mid-1980s, the electric utilities underwent workforce reductions; state regulatory bodies became concerned that reliability could suffer as a result and started to request annual reliability reports.[10] With personal computers becoming ubiquitous in the 1990s, OMS became cheaper and almost all utilities installed them.[13] By 1998, 64% of the utility companies were required by state regulators to report reliability (although only 18% included momentary events in the calculations).[14]

Generation systems

For electricity generation systems the indices typically reflect the balance between the system's ability to generate electricity ("capacity") and its consumption ("demand") and are sometimes referred to as adequacy indices,[15][16] as NERC distinguishes the adequacy (will there be enough capacity?) and security (will it work when disturbed?)
aspects of reliability.[17] It is assumed that if the cases of demand exceeding the generation capacity are sufficiently rare and short, the distribution network will be able to avoid a power outage either by obtaining energy via an external interconnection or by "shedding" part of the electrical load.[citation needed] It is further assumed that the distribution system is ideal and capable of delivering the load in any generation configuration.[18] The reliability indices for electricity generation are mostly statistics-based (probabilistic), but some reflect rule-of-thumb spare capacity margins and are called deterministic.
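A minimal sketch of the deterministic, rule-of-thumb view: the spare capacity margin is simply the excess of installed capacity over peak demand, expressed as a fraction of that demand. The figures below are hypothetical:

```python
# Deterministic spare-capacity view: compare installed generation
# capacity to the peak demand. Figures are hypothetical.
installed_capacity_mw = 12_000
peak_demand_mw = 10_000

reserve_margin = (installed_capacity_mw - peak_demand_mw) / peak_demand_mw
print(f"Reserve margin: {reserve_margin:.0%}")
```

Such a margin says nothing about how likely the capacity is to actually be available, which is what the probabilistic indices address.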
Other indices for generation systems are based on statistics.[21]
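To illustrate the probabilistic approach, the Monte Carlo sketch below estimates the probability that available generation falls short of demand, treating each unit's forced outage rate (FOR, introduced earlier) as its probability of being unavailable. The fleet, FOR values, and demand level are hypothetical:

```python
import random

random.seed(42)

# Hypothetical generating fleet: (capacity_mw, forced_outage_rate)
units = [(400, 0.05), (400, 0.05), (300, 0.08), (200, 0.10), (200, 0.10)]
demand_mw = 1100
trials = 100_000

shortfalls = 0
for _ in range(trials):
    # Each unit is independently available with probability (1 - FOR)
    available = sum(cap for cap, rate in units if random.random() > rate)
    if available < demand_mw:
        shortfalls += 1

# Estimated probability that capacity cannot cover demand
lolp = shortfalls / trials
print(f"Estimated loss-of-load probability: {lolp:.4f}")
```

In practice such calculations are done per time step over a study period and with more refined unit models; this sketch only shows the basic statistical idea.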
Ibanez and Milligan postulate that the reliability metrics for generation are, in practice, linearly related. In particular, the capacity credit values calculated based on any of the factors were found to be "rather close".[25]