Grid 2.0 – What Is It?

Summary

The electric grid is entering a period of rapid transition. The current grid delivers less than one-fifth of customers' total energy needs, and with the shift to clean energy, that share will increase dramatically.

Utility planners are quick to explain that grid design is complex and must meet important criteria:

  • Deliver the annual peak demand, with redundancy to allow for the loss of a single critical component at peak demand without an overall failure.
  • The supply-demand balance must be maintained continuously, in real time.

The current grid design has very little available capacity during peak periods.

As fossil fuel loads are electrified, more electrical power and energy will be needed. Estimates suggest that roughly 3x today's electrical energy will be required. That will be challenging, as grid additions can be slow and costly to install. New technology to enable this growth will be valuable.

Loss will also be an issue. Average delivery loss in the US is about 7%, and if delivered capacity is doubled, the loss may increase by up to 4x (loss increases with the square of electrical current). With current annual US losses valued at about $19B, there will be real value in managing and optimizing loss.
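
To make that scaling concrete, here is a minimal sketch of how resistive loss grows with current; the current and resistance figures are illustrative assumptions, not utility data:

```python
# Illustrative sketch: resistive loss scales with the square of current (P_loss = I^2 * R).
# The baseline current and resistance below are assumed values chosen for illustration.

def line_loss_kw(current_a: float, resistance_ohm: float) -> float:
    """Loss in kW for a given current and total conductor resistance."""
    return current_a**2 * resistance_ohm / 1000.0

base_current = 200.0   # amps, assumed baseline loading
resistance = 0.5       # ohms, assumed conductor resistance

for multiple in (1.0, 1.5, 2.0):
    loss = line_loss_kw(base_current * multiple, resistance)
    print(f"{multiple:.1f}x current -> {loss:.0f} kW loss ({multiple**2:.2f}x baseline)")
```

Doubling the current quadruples the loss, which is the source of the 4x figure above.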

Today, Generac has the technology and the skills to contribute value, both in added capacity and energy delivery and in loss management.

Delivering More Energy through the Existing Grid

The existing grid CAN deliver more energy and more power, but this requires changes in the ways that the system is planned and operated.

The basic grid design, which has existed for more than 100 years, is intended to meet customer demand when and where needed. The utility was designed to deliver power from central generation to loads; until recently, distributed generation was simply not considered. The utility was established to meet customer demand, ensuring that the customer always receives quality power with reliability. This concept has two distinct requirements: acceptable quality (voltage, frequency, etc.) and reliability. The system must deliver quality electric power with enough reserve capacity to allow the loss of a critical transmission or generation element at peak without causing a system collapse.

Grid 2.0 can increase delivery of power and energy while ensuring that these requirements are maintained.

The average load on the grid is typically about 50% of the peak demand. More power could be delivered during off-peak periods, when demand is low. A combination of load management and distributed storage can meet this need by increasing the average power delivery and storing surplus energy near the load site for later use.

The other issue is voltage management. In many cases, the limited capacity during peak conditions is caused by low voltage at the customer site. This voltage issue can be addressed with dynamic voltage management, another strength that Generac can provide.

Managing Demand

The mandate to “meet customer demand” has been a real challenge in some areas. The utility MUST maintain the supply-demand balance on a continuous basis. If you turn on an air conditioner, one or more generators will increase their output levels to meet this capacity change. Consider an industrial arc furnace that can draw large amounts of power for very short periods. The utility must have capacity available to follow these changes continuously. Recently, large amounts of solar and wind generation have been added to the grid. These sources are generally not dispatched and may be intermittent, so they require the same dynamic management as volatile loads: dispatched generation must be adjusted to follow the intermittent output so that total demand and generation remain continuously balanced. The balancing task has grown with the addition of intermittent generation.

Demand response systems appeared some years ago, specifically aimed at reducing the system demand during the daily peak, and thereby reducing the need to either start peaking generation or to purchase the needed capacity at peak prices.

With the inclusion of large amounts of intermittent generation, the issue has become more challenging, as both the supply and demand capacities are subject to intermittency. As renewable generation increases and conventional generation decreases, the problem becomes substantially more difficult to manage.

The immediate solution to this is demand management – a step beyond demand response. There are many load devices that can be managed with little or no impact on the users’ experience, and these are now being widely used. Systems may pre-heat hot water or pre-cool an air-conditioned building before the peak demand period to reduce the need for peak capacity. Much work has been done to optimize systems that provide this service.

The impact of this action is to reduce peak demand and shift some consumption to off-peak periods. This process also has a less visible benefit: loss is reduced. The loss in the distribution system is proportional to the square of the current in the line. Drawing power at a constant rate incurs significantly less loss than drawing the same amount of energy unevenly, for example drawing nothing for half of the time and double the rate for the other half. Demand management can reduce loss.
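
A minimal numeric sketch of that comparison, with assumed currents and resistance (illustrative values only), shows the effect:

```python
# Same delivered energy, different loss, because loss scales with I^2.
# Current and resistance values are illustrative assumptions, not measurements.

def total_loss_kwh(currents_a, resistance_ohm=0.5, interval_h=1.0):
    """Sum I^2 * R over equal time intervals; result in kWh."""
    return sum(i**2 * resistance_ohm * interval_h for i in currents_a) / 1000.0

flat  = [100.0] * 24                 # constant draw all day
peaky = [200.0] * 12 + [0.0] * 12    # same daily energy, delivered in half the time

print(f"flat profile loss:  {total_loss_kwh(flat):.0f} kWh")
print(f"peaky profile loss: {total_loss_kwh(peaky):.0f} kWh (2x the flat profile)")
```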

Managing Voltage and Loss

Lowering the peak and increasing the capacity factor (average demand/peak demand) makes room to deliver more energy. If the demand can be managed, ideally to a capacity factor of 1.0, a line that was previously operating at a capacity factor of 0.5 could deliver 2x the energy that was delivered in the past, but with up to 4x the absolute loss. Because delivered energy only doubles while loss quadruples, a distribution line that previously lost 4% of the energy it carried might lose roughly 8% after this increase in capacity factor. Voltage and loss management can play a large role in this area. Voltage management is the next frontier in Grid 2.0.
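
A short derivation shows where these numbers come from, under the simplifying assumption of a constant-current profile at each capacity factor:

```latex
% Loss scaling when capacity factor rises from 0.5 to 1.0,
% assuming a constant-current profile in each case.
P_{\mathrm{loss}} = I^{2}R, \qquad P_{\mathrm{delivered}} = VI
\text{Before: } I = \tfrac{1}{2}I_{\max} \;\Rightarrow\; P_{\mathrm{loss}} = \tfrac{1}{4}I_{\max}^{2}R
\text{After: } I = I_{\max} \;\Rightarrow\; P_{\mathrm{loss}} = I_{\max}^{2}R \quad (4\times\ \text{the absolute loss})
\frac{P_{\mathrm{loss}}}{P_{\mathrm{delivered}}} = \frac{IR}{V} \;\Rightarrow\; \text{the loss fraction doubles, e.g. } 4\% \to 8\%
```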

When load is increased on distribution systems, the limiting factor is most often the low voltage limit that is breached as demand increases. Utilities have used capacitor compensation to address this issue for many years. In recent times, however, capacitor compensation has limited the ability to connect rooftop solar and other renewable sources to existing distribution feeders because of high-voltage issues. This has become a significant issue in Australia, where rooftop solar is popular. The solution is dynamic voltage control: managing voltage on a continuous basis, similar to what is currently done in managing demand.

Dynamic voltage control, however, has other features that make it potentially highly valuable. Such a system can not only increase feeder capacity and enable much more distributed generation to be connected; it can also manage and optimize loss, reducing it by roughly one-third.

Dynamic voltage management has not been extensively used in the past because there was little need; there was almost no distributed generation on the grid. But with the changes that are now well under way, there is a rapidly growing demand for solutions that will enable the connection of more renewable capacity, while managing loss and increasing delivery capacity.

Generac has real strengths in this area: proprietary control technology and a base of Generac equipment position the company to be a strong leader in managing distribution operations.

Grid 2.0 Is TECHNOLOGY

The future grid may require physical additions, but new technology will provide a lower cost path to deliver more power, allowing time to implement longer-term projects for a robust grid that can continue to grow for the future.

Generac has the technology to meet these needs today, and this will be a valuable addition for utilities that are facing the rapid changes that are coming.

Voltage Collapse — a Poorly Understood Power System Failure Mechanism

In a previous blog, I explained a form of grid collapse that can occur when an interconnected area suffers a significant loss of capacity and balance cannot be restored quickly. As system frequency declines, other generation shuts down, and the system may cascade quickly to a dark and quiet place.

During one major blackout, subsequent analysis revealed that there was another mechanism that played a major role, and that was voltage collapse. That concept is very different but delivers a similar result.

The maximum power that can be delivered through a transmission line can be shown to be proportional to the product of the voltages at each end of the line. A transmission line that is operating with 230kV at both ends of the line can deliver twice the power of the same line that is operated with 230 kV at one end and 115 kV at the other. While this voltage difference may seem unusual, it may occur if there is a fault on another nearby line.
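
The standard lossless-line power-transfer relation makes this proportionality explicit; here δ is the phase-angle difference between the two ends and X is the line reactance:

```latex
% Power transfer across a (lossless) transmission line
P = \frac{V_{1}\,V_{2}}{X}\,\sin\delta
% Halving the voltage at one end (230 kV -> 115 kV) halves the maximum
% deliverable power for the same angle and reactance.
```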

Faults occur frequently, often caused by lightning, and grid design must ensure that such faults do NOT lead to a collapse. Planners discovered, through calculations and simulations, that they can design the grid to react automatically to faults without ending in a collapse. This mathematically intense exercise is critical to maintaining system reliability and stability above minimum standards. The people performing this analysis are generally employed in transmission planning or system operation planning functions and have a detailed understanding of the inner workings of the grid. They are also the people behind the scenes in purchase decisions that have stability impacts, such as load shedding or system voltage management.

When a fault occurs on a transmission line, the voltage at the substations at each end of that line will be depressed until the fault is cleared. This voltage depression causes the line end voltage on every other line, connected to each of the two substations, to decline, and this in turn reduces the carrying capacity of the nearby transmission lines.

At this point, I use a little analogy. Consider an elephant that is pulling a heavy cart up a hill, assisted by a second elephant connected to the first with a heavy chain. If the chain suddenly becomes a thin rope, the rope may stretch somewhat and then break. Transmission lines may do the same. If a transmission line that is carrying a load near its peak capacity experiences a lowered voltage at one end, it may be unable to carry the required capacity, and when that happens, the frequency at both ends of the line may begin to deviate very slightly. After a very short time, if the voltage at the station near the fault does not recover, other lines may trip offline and a collapse may be initiated.

Planners perform detailed analysis on these potential events and have determined that the time to clear a faulted line may need to be short to maintain the stability of surrounding transmission lines. This total clearing time may need to be a fraction of a second (3-5 cycles or 50-84 msec).

Protective relays are devices that monitor grid operation and isolate faulted lines quickly. Their history is interesting, because protection has played a key role in the management of the grid from its beginning. Significant changes in technology have occurred over time:

  • The earliest protection systems used current-based time delays: a line tripped after carrying high current for a set time. In theory at least, the faulted line had the highest current and would trip first.
  • In 1918, a Canadian-born engineer, Charles Fortescue, published a paper demonstrating symmetrical components, an analysis method that allowed fault currents to be identified (a short numerical sketch follows this list). Fortescue’s theorem enabled more sophisticated protective relay designs; impedance or distance relays were a popular example.
  • After about 1950, powerline carrier, microwave and later fibre optic systems enabled the use of telecommunications between stations at each end of lines to coordinate protection. Differential protection, transferred trip and other systems became common.
  • GPS satellites, put into orbit after 1978, have become standards for navigation. The GPS system provides an accurate time standard anywhere, and this has been valuable for grid control. The voltage “phase angle”, used extensively by planners, could suddenly be measured. Synchrophasors, or Phasor Measurement Units (PMUs), based on GPS, have many uses in power system protection and control and are now included in new protection systems.
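
As a side note on Fortescue's method, the transform itself is compact. Below is a minimal sketch of decomposing three unbalanced phase voltages into sequence components; the phasor values are illustrative assumptions, not a protection algorithm:

```python
# Minimal sketch of Fortescue's symmetrical-component transform.
import numpy as np

a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

# Rows recover [zero, positive, negative] sequence from [Va, Vb, Vc]
A_inv = (1 / 3) * np.array([
    [1, 1,    1],
    [1, a,    a**2],
    [1, a**2, a],
])

# A slightly unbalanced set of phase voltages (per unit, assumed for illustration)
v_abc = np.array([1.0 + 0j, 0.9 * a**2, 1.1 * a])

v_zero, v_pos, v_neg = A_inv @ v_abc
print(f"zero-sequence:     {abs(v_zero):.3f} pu")
print(f"positive-sequence: {abs(v_pos):.3f} pu")
print(f"negative-sequence: {abs(v_neg):.3f} pu")
```

For a balanced system the zero- and negative-sequence terms vanish; during a fault they grow, which is what lets a relay identify the fault.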

Protective relays can now detect and locate a fault with good reliability. The protection system can then trip the faulted line within the time constraint required to maintain stability. This capability has made it possible to operate transmission lines at higher capacity levels, with lower stability margins, reducing the need for costly additional line capacity.

The planners’ task goes further, setting capacity constraints and processes to allow for scheduled maintenance. Utilities use an “N-1” criterion to ensure that stability is maintained after the loss of the largest single component that could credibly fail, under all operating conditions. Planners, both in initial design and operations, need to consider many scenarios for configurations during system maintenance.

Some examples of operating plans are interesting. The Pacific Intertie is a 3,100 MW DC transmission line that connects the BPA Celilo Converter Station near The Dalles on the Columbia River in Oregon to the Sylmar Converter Station owned by LADWP, near Los Angeles. The line is frequently used to deliver power exported by BC Hydro or BPA to Los Angeles. If the intertie is heavily loaded and a fault trips it, the parallel AC lines would not be capable of handling the capacity and would trip as well. To avoid these secondary trips, generation shedding is used in the north and load shedding in the south to maintain balance at both ends of the intertie. In the case of BC Hydro exporting, a fault on the Pacific Intertie will result in several generators being tripped offline at the W.A.C. Bennett Dam in northern BC. These generators would be offline in much less than 1 second after a fault was detected on the intertie. Systems of this type have become common and are needed to maintain stability.

Even these systems may not always have the required capacity. A 500 kV fault occurred relatively recently on a hot summer day near a major city in the southern US. The fault depressed voltage in the entire city for a short time but was cleared within the expected time. It was expected that the system would remain normal, but the city collapsed into darkness. The cause was apparently the heat: many air conditioning systems were running at peak capacity. When the fault depressed the voltage, many air conditioning compressor motors “stalled” and stopped rotating, drawing a much larger current than normal. That, in turn, caused a further decline in voltage, and more compressors stalled. The city voltage collapsed. This situation demonstrated that users may have a valuable role in maintaining system stability. In this case, the air conditioner motors likely needed to be designed to withstand a short period of low voltage. But new DER and voltage management systems that can provide rapid support will have growing value in maintaining stable operation.

Companies such as Generac may find opportunities to partner with utility planners and operators in contributing to grid stability and resilience through the application of optimized and fast systems to address voltage and frequency deviations. There may be real value in working with planners to be positioned to support their internal needs to manage stability and resilience.

System Blackout – A Problem and Perhaps an Opportunity

Power outages due to extreme weather, wildfires and other factors have created a great deal of interest and publicity concerning the reliability and resilience of the electric grid. These are important topics that will play a changing and valuable role as the grid transitions to meet these challenges and evolving customer energy needs.

There are several ways in which major failures occur. I will deal with more of the causes in the next few blog posts and will discuss how distributed energy resources (DERs) can mitigate or prevent failures and support the grid in times of need.

There have been multiple large outages that have occurred in the last 50 years. Initially, it was believed that most were a result of a loss of generation capacity and underfrequency resulting from the loss. Other threats have been identified. Perhaps the most unusual was an outage in Quebec that was caused by a large solar storm. This may be a growing threat, particularly as the geographic areas covered by our interconnected grids grow in size.

Much of U.S.-based generation relies on steam turbine technology, including coal and nuclear-powered generation. Steam turbine machines are huge and presumably robust. The steam turbine, invented by Charles Parsons in 1884, is still based on the original principle but has been improved and optimized to transfer maximum power from steam to the generator.

Steam is injected into turbine blades at a high temperature and pressure. The blades are positioned to maximize the power transfer to the mechanical system at the designed speed of operation (50 or 60 Hz). If the rotating speed of the generator is either increased or decreased, the angle at which the steam strikes these turbine blades changes. People who fly airplanes know that if the aircraft is slowed, the angle at which the airflow strikes the wing changes, and at a certain point, the flow over the wing changes abruptly from smooth to turbulent. This is known to pilots as a stall. The steam turbine experiences similar issues: if the frequency falls below a certain point, the flow of steam through the turbine blades becomes turbulent, and damaging vibration can occur. The generator is automatically tripped offline, and a shutdown sequence is initiated.

In a large grid, if there is a large loss of supply capacity (transmission or generation), the frequency may decline rapidly. When that happens, if the decline is not halted quickly, steam turbines connected to the grid will trip below a specific frequency in order to protect the turbines. Once started, the process may cascade to other generators. Restarting may take an extended period, as these machines must be stopped, cooled, and a restart initiated. In some cases, this may take several days. In a nuclear generating station, there may be additional time needed to bring the generation capacity back to its operating target.

I remember asking a nuclear plant operator what happened to the reactor steam when the generator suddenly shut down. His response: it is vented outside. It makes a huge noise, and if it is winter, no one can go home, as cars in the employee lot will be quickly covered with a thick layer of ice.

A cascading failure can occur quickly, leading to a total collapse. This is generally far too fast for any manual intervention. It may start in one location and depending on frequency settings at other steam turbine plants, subsequent generator trips may occur many miles away. This is a serious issue, and there are systems in place to halt any rapid drop in system frequency.

When a loss of capacity occurs, the frequency will begin to decline. The initial decline is slowed by system inertia. The utility term for the rate of change of frequency is RoCoF. It is interesting to note that the highest stored energy among generators is in steam turbines, as they rotate at the highest speeds (typically 1,800 or 3,600 RPM on a 60 Hz system). Hydro plants, despite huge, heavy rotating masses (up to 1,000 tons), have the lowest inertia constants, as they generally rotate slowly (under 500 RPM); for a given rotor, stored kinetic energy is proportional to the square of the rotational speed. Solar and wind generation have almost no inertia component. As steam turbines are displaced with renewables and gas turbines, system inertia will decline, and this may create serious issues for system operators.

One can examine the frequency response to a loss of generation and infer the composition of the grid. Comparing three interconnections makes the point: the Eastern Interconnection has a large component of steam turbine generators (high system inertia); WECC, the western grid, has both steam and hydro generation (medium system inertia); and ERCOT (Texas) has many gas turbine generators and a large component of renewable capacity (low system inertia). System inertia plays a key role in the frequency response after a loss of capacity.

Utilities have installed systems to address any sudden frequency decline. Many utility substations are equipped with a protection system that monitors frequency and will trip entire distribution feeders if the frequency falls rapidly. These systems may operate in utilities that are far from the loss of capacity. Widespread use of systems of this type has demonstrated that many severe disturbances can be handled by the grid without leading to a cascading collapse.

The North American Electric Reliability Corporation (NERC) has established standards to be met by all interconnected utilities in Canada and the U.S. NERC requires the system to be capable of maintaining stability through the largest probable single loss on the system. This standard has successfully reduced failures.

Distributed energy resources are a great resource for preventing system collapse because of their ability to achieve rapid load reductions, either by deploying battery capacity or by shedding demand. Some utilities pay well for this support. A large battery was installed in Australia with the claim that it would pay for itself in a short time. Skeptical as I was, I watched, and it did indeed pay for itself faster than expected, based NOT on trading energy as I had assumed, but on providing fast Frequency Control Ancillary Services (FCAS).

Services of this type will become more important with time. As coal-fired generation declines, replaced at least partially with renewables, system inertia will decline. But as long as there are steam turbine generators on the grid, the need to respond quickly will be valuable. The ability of “behind the meter” systems to respond autonomously at times when rapid response capability is critical will provide real value to both the owner and the grid operator.

Inertia – the Hidden Asset Provided by Old Generators

Inertia is a concept that is rarely talked about in the electric grid, but it is one that plays an important role and will need to be well managed in any transition to clean energy.

When I mention power system inertia, I generally hear the same story –– “we have done things this way for more than 100 years – and there is no reason for change…” While “human inertia” plays a role in many companies, it is not the same inertia that I am about to address.

The power system has relied on central generation for more than 100 years to deliver power. All classic power system generators turn at a synchronous speed, and while that speed may be different for each generator, the result is the same.  Each generator MUST create 60 electrical cycles in every second and generators may create a differing number of cycles per rotation, resulting in different synchronous speeds. Speeding or slowing the generator results in more or fewer cycles in a second.  All generators in a single interconnected area are essentially “locked in step” together, so they slow down or speed up together.  

Generators typically have large rotors, and these may weigh up to 1,000 tons in large machines. These spinning rotors store kinetic energy in their rotational speed.  Any change in speed will change the amount of kinetic energy stored.

Imagine a generator spinning at synchronous speed, delivering no load, with only enough power applied to the turbine to keep it rotating. If you suddenly applied a large load to the machine, the system would slow down, and the increased power to the load would be drawn from the kinetic energy stored in the rotor.

A generator with a large storage capacity would see the speed decline slowly, while one with a small storage capacity would slow down much more quickly.

There is a non-intuitive aspect to inertia in generators. Hydro generators that have very heavy rotors (1,000 tons or more) may have much less inertia than a high-speed lightweight generator used in a steam turbine (coal-fired or nuclear generating station). This occurs because the energy stored in the rotor is

$E = \tfrac{1}{2} I \omega^{2}$

where $I$ is the moment of inertia (a number related to the weight and structure of the rotor) and $\omega$ is the rotational speed of the generator. Stored energy increases with the square of the rotational speed.

Consider two generators: a hydro generator (100 RPM, 1,000 tons) and a steam turbine generator (3,600 RPM, 15 tons). The steam turbine machine, with far less weight, has almost 20x the stored energy of the heavier hydro machine. This is because of the significant difference in the speed of rotation. Coal-fired and nuclear plants, which operate at either 1,800 or 3,600 RPM, are the largest source of inertia in the grid.
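
The arithmetic behind the 20x figure is straightforward if we make the simplifying assumption that the two rotors have similar radii of gyration, so that moment of inertia scales with mass alone:

```python
# Worked check of the ~20x claim. Assumes moment of inertia scales with
# rotor mass (similar radius of gyration), which is a rough simplification.
hydro_mass_t, hydro_rpm = 1000, 100
steam_mass_t, steam_rpm = 15, 3600

# Stored energy E = 0.5 * I * w^2, so the ratio reduces to (m2/m1) * (w2/w1)^2
ratio = (steam_mass_t / hydro_mass_t) * (steam_rpm / hydro_rpm) ** 2
print(f"steam turbine stores {ratio:.1f}x the energy of the hydro rotor")  # ~19.4x
```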

Why is inertia important? The space station is entirely powered with solar energy, and there is no inertia at all, so why does inertia matter for terrestrial systems?

System inertia provides protection for steam turbines. These machines are powered by steam that is “blasted” at the turbine blades to deliver rotational power. The angle at which the steam hits these blades is carefully adjusted to transfer maximum steam power to mechanical power. If the rotating speed of the turbine, which is directly coupled to the generator, is decreased, the angle at which the steam hits the turbine blades changes, and the blades may experience a phenomenon known to aircraft pilots: a stall. The smooth flow around the blade becomes turbulent, energy transfer is reduced, and vibration from the turbulent flow can damage the turbine.

Steam turbines must be protected. If there is a large loss of generation on an interconnection, the speed of rotation of all generators falls as each generator gives up some of its stored energy to meet the increased demand. This occurs in a fraction of a second after the loss of capacity. Once the frequency decline is detected, governor action increases turbine and generated power, but this response begins only after about 0.5 seconds. That first half second is critical. The decline in frequency will continue at a slowing rate until governor action on all generators creates enough new capacity to offset the loss that started the problem. System inertia throttles the rate of change of frequency (RoCoF): high inertia results in a lower RoCoF.
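
This relationship is often summarized with a simplified aggregate swing equation; here H is the system inertia constant in seconds, f₀ the nominal frequency, and ΔP the lost power as a fraction of system capacity:

```latex
% Simplified aggregate swing equation for the initial frequency decline
\mathrm{RoCoF} = \left.\frac{df}{dt}\right|_{t=0} = -\,\frac{\Delta P \, f_{0}}{2H}
% Higher system inertia H means a lower initial rate of change of frequency.
```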

If frequency declines to a level that may result in turbine damage, the machine is tripped offline. The RoCoF then increases, other steam turbine generators trip in turn, and the system rapidly collapses into a dark and quiet place.

As the world transitions off fossil fuels, system inertia is expected to decline. As inertia declines, the RoCoF after a loss increases, and halting the decline becomes more difficult. As long as there are generating systems on the grid that are sensitive to low frequencies, the problem will remain and potentially grow. Norway recently announced a new record low level of inertia as its renewable generation has increased; its system is based largely on hydro, and it would be interesting to understand the steps that have been taken to manage any significant generation loss. France, with its extensive nuclear system, may become a significant inertia source for Europe.

North American utilities have implemented extensive autonomous load shedding systems to halt a decline that may be a threat, as a total collapse may take many days to recover.

Inertia is important and will continue to have a critical role until a time when there are no generators that are sensitive to underfrequency conditions.

Control system companies may have opportunities to create synthetic inertia, using fast storage to support the system by slowing frequency decay in the short first period after a significant loss of generation capacity.
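
As a sketch of what such a synthetic-inertia controller might look like, consider a battery that injects power in proportion to the measured RoCoF; the gain and limits below are illustrative assumptions, not a product specification:

```python
# Minimal sketch of a synthetic-inertia response: a battery injects (or absorbs)
# power in proportion to the measured rate of change of frequency (RoCoF).

def synthetic_inertia_kw(rocof_hz_per_s: float,
                         gain_kw_per_hz_s: float = 500.0,  # assumed controller gain
                         max_kw: float = 250.0) -> float:  # assumed inverter limit
    """Return battery injection (+) or absorption (-) for a given RoCoF."""
    command = -gain_kw_per_hz_s * rocof_hz_per_s  # falling frequency -> inject power
    return max(-max_kw, min(max_kw, command))

print(synthetic_inertia_kw(-0.3))  # frequency falling at 0.3 Hz/s -> inject 150 kW
print(synthetic_inertia_kw(+0.1))  # frequency rising -> absorb 50 kW
```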

Muskrat Falls Hydro – More Federal Money to Address a Problem Created by the Federal Government

There has been a lot of publicity lately on the Muskrat Falls hydro development in Newfoundland, and our federal government has stepped in to bail the project out, spreading the extreme cost to all Canadians. But before blaming the people and government of Newfoundland for this, there are a few issues that need to be considered.

The electric grid in North America is a collection of electric utilities that work together to ensure that everyone has a reliable source of electric power.  Many utilities trade energy between each other, sharing surplus capacity when it exists with others that may have shortages.  New generating plants may have surplus capacity for a few years, that can be sold to others, and these transactions may take place over extended distances.

In recent times, this ability has provided great value to everyone. Canadian utilities in BC, Manitoba and Quebec, with large-capacity hydro storage, have been providing support for US utilities as they grapple with the surpluses and shortages caused by intermittent solar and wind capacity. When there is a surplus at a US utility, they may send energy to these provinces to be stored, only to ask for it back when they need it. These occur as purchase and sale transactions, and the Canadian utilities have made excellent returns on them.

BC Hydro (Powerex) trades frequently with CAISO (the California Independent System Operator). In the afternoon, when there is an excess of solar capacity in California that cannot be accommodated, CAISO sells energy to BC Hydro, which uses this imported power to supply its customers while reducing output at large hydro plants in the BC system. As a result, these hydro plants store water, and the lake levels behind their dams rise by a very small amount in a few hours. After sunset, when the solar power has stopped for the day in California, CAISO purchases the energy back from BC Hydro. The California evening peak capacity may rely on the availability of this capacity. Hydro storage of this form is almost 100% efficient, and these transactions have been beneficial for both BC Hydro and CAISO. This is much cheaper than installing batteries.

BUT there is no direct connection between BC Hydro and CAISO. Instead, the power flows through Washington and Oregon on connecting transmission to California. This transit may involve multiple utilities and may add losses, but these are generally paid for through “wheeling charges.” This grid has worked extremely well to help minimize costs and provide improved reliability for everyone. I can recall a time some years ago when two major lines from the interior to Vancouver were severely damaged in a winter storm. The interconnection with the US prevented a major blackout that could have caused a serious disruption in Vancouver.

Hydro Quebec uses the same system, transporting energy through multiple utilities in both Canada and the US to get to utilities that they are dealing with in the US.

Why can Newfoundland not do the same?

When the Churchill Falls plant was built many years ago, I expect that the intent was to export energy to US customers through the province of Quebec. This is the same process used today by Alberta, exporting and importing power to and from US utilities through BC Hydro, and it is exactly the same process used by Hydro Quebec to access the US utilities that it contracts with.

Quebec blocked the process and managed to make it stick. Newfoundland is paid for the power from Churchill Falls under a very old contract, and the payment amounts are minuscule. Hydro Quebec sells the energy from Churchill Falls either within Quebec, where it receives a good return, or into the US, at a high price. Hydro Quebec has done extremely well on this basis, but it relies on exactly the privilege it denies Newfoundland: the ability to move energy across a neighbouring system to US customers. I can remember some years back, looking at the numbers. Hydro Quebec was seen as a big electric power exporter and was making large profits on its trade, BUT the amount of energy from Churchill Falls was actually MORE than Quebec was exporting. At that time, Quebec was actually a NET IMPORTER, yet made great profits on Churchill Falls energy. This seems a little ironic at best.

When Newfoundland decided to build the Muskrat Falls project, Quebec expressed loud protests when the Federal Government agreed to help to fund an underwater transmission line, at great cost, that essentially went around the province of Quebec.

And now that the entire project is again in trouble, the Federal Government has agreed to pour more money into it.

Why has the federal government never seen fit to inform the Government of Quebec that we ARE a country, not a collection of neighbouring bands of pirates? The very principle needed by Newfoundland is already used by Alberta, and by Quebec itself, to export to utilities where there is no direct connection (New York is a good example). Why is Quebec allowed to use this benefit for its own exports when it will not allow Newfoundland the same privilege?

It seems an outrageous cost, paid for by all citizens of Canada – to allow one province to essentially hold another for ransom. The Federal Government seems happy to spend the money from all Canadians to satisfy the selfish demands of Quebec, by not requiring the same co-operation that exists in the entire North American electric grid.

Challenging Times

I was recently asked to meet with a class of students studying energy and the future, and as a part of the session, I was asked to prepare a challenge for the students to work through. The result was interesting and showed a glimpse of what may lie ahead. It will certainly be a challenge that will require innovation, new concepts and a lot of hard work.

I described a small community powered by an electric utility (20% of total energy), natural gas (25%), and petroleum products (55%). The electric utility had the capability to increase its energy delivered by about 25% in the next decade, and the students were asked to show how to minimize emissions in that timeframe. They were free to add solar thermal or solar PV capacity to the system.

The problems began when the students looked at the efficiency of the various uses of energy, whereupon constraints began to appear. The car that most people used was about 20-25% efficient, so it became an immediate target, but converting all cars to EVs left no remaining electricity to offset the use of natural gas. The students then added solar, but because 75% of solar energy came in summer and 75% of natural gas use was in winter, there was a costly and uneconomic storage issue. In addition, the heat pumps that worked well in spring, summer and fall needed inefficient auxiliary capacity during the cold winter days. It was very clear that this would be a challenging issue to address.
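
A rough energy balance shows why the EV conversion alone consumed the utility's headroom. The numbers below are illustrative assumptions consistent with the scenario, in arbitrary annual energy units:

```python
# Rough energy balance for the classroom community.
# All figures are illustrative assumptions, in arbitrary annual energy units.
electricity, natural_gas, petroleum = 20.0, 25.0, 55.0  # today's supply mix
grid_headroom = 0.25 * electricity                      # +25% over the decade -> 5 units

car_share_of_petroleum = 0.60  # assumed share of petroleum burned by personal cars
ice_efficiency = 0.22          # gasoline car, ~20-25%
ev_efficiency = 0.80           # assumed battery-electric drivetrain efficiency

useful_work = petroleum * car_share_of_petroleum * ice_efficiency
electricity_for_evs = useful_work / ev_efficiency

print(f"extra electricity needed for EVs: {electricity_for_evs:.1f} units")  # ~9 units
print(f"grid headroom over the decade:    {grid_headroom:.1f} units")        # 5 units
# The EV conversion alone exceeds the headroom, leaving nothing to displace gas.
```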

To help make things a little easier, it was  assumed that the electric grid was based on hydro generation with large capacity storage behind existing dams, so solar energy generated in summer could at least be stored as needed by the electric utility. But this storage was limited based on the ability to displace only local loads to reduce the electric consumption during summer and was nowhere near enough to offset the gas use in winter.

The discussion exposed something eye-opening for me. We hear a lot about the use of hydrogen as a fuel and a storage concept, and to date, I have been concerned that many of the proposed solutions have a poor return efficiency that would make them uneconomic. But this case presented a different situation. The excess solar in summer could be converted to hydrogen and blended into the natural gas supply to reduce carbon emissions, and this in turn would absorb a significant increase in solar electricity production that could not be stored by the hydro utility. In fact, it reduced the carbon emissions from natural gas use for much of the year.

The entire exercise made it very clear that we are faced with a dramatic need to be “out of the box” thinkers. The students initially had the idea that they could simply add solar or wind capacity, and that would solve everything. They thought little about the role of the utility in providing the reliability required. And this led to another discussion.

Electric utilities have worked for more than 100 years on a simple principle – that they will meet customer demand where and when it is needed. Many utilities have done outstanding work in optimizing their systems, including generation, transmission, and in some cases, sub-transmission. But in most cases, the loads are allowed to do as they wish. Turn on the switch, and the light comes on. Start a compressor, and it cools the building. The utility mandate remains firmly in place – “meet customer demand.” This concept is going to need to change dramatically.

The first foray in this direction started some time ago. Some enterprising companies realized that they could reduce customer loads at a price far below what it would cost the utility to start and run peaker generation, and demand response programs came to life. Other concepts have made demand management a part of the overall control process.

BUT the real opportunity still lies ahead. Full optimization of the grid is going to be essential. Electricity delivery losses in the US are currently about $19B annually, and this is an area that has been largely untouched. Furthermore, as the electric loads increase, losses, which increase with the square of the current, may explode. There are opportunities ahead that will increase the delivery capacity of our distribution systems, while controlling and minimizing losses. The management of many loads will need to be an ongoing part of this optimization. While lights may still go on when one turns on the switch, there will be many opportunities to delay or tweak air conditioning, water heating, EV charging and many other devices or opportunities to manage local voltages. It is apparent that customers are going to need to be a major part of this solution, and this in itself may bring some real opportunities, both for the utilities and the customers.

Enbala has a history of innovation, control and a deep understanding of the issues surrounding the power system. There is a bright future for our smart people to be the next generation of “out of the box” thinkers.

Energy Partnerships – a New and Important Concept

With the rapid growth in intermittent generation and the decline in coal-fired, base-loaded generation, the need for storage is growing rapidly, and the demand for storage will almost certainly grow beyond the capability of many existing systems.

There has been outstanding progress on battery technology in recent years.  The cost per kWh has fallen dramatically, and these reductions are expected to continue for some time yet.  As we enter a time where tremendous growth in storage needs may be required, there also may be growing opportunities to utilize concepts that have previously not been considered as a means to meet these needs.

The electric system was built and has been operated for more than 100 years on a constraint requiring continuous balance. The power supply must equal the power demand, including all losses in real time.  The addition of large quantities of intermittent generation into a power network that must continuously operate in balance has presented many challenges for utilities.  Storage, however, has the capacity to unlock that restriction, enabling the system to generate at a different capacity than is demanded at any particular time.  Storage may be defined as a system that can shift the time at which generation is delivered or, alternately, the time at which capacity is required. Clearly, this will become very valuable as intermittent generation capacities increase.

This definition brings with it a mind shift in how storage is viewed.  The common view of storage revolves around a system that takes energy, puts it into storage and removes it from storage to be used at a later time.  A large form of storage that has often been ignored is hydro-electric storage, not to be confused with pumped hydro storage.  Hydro-electric storage does not utilize pumps, but rather simply delays generation.  The water flowing in a river into the forebay of a hydro plant simply fills the reservoir a little higher than normal, and the energy that was not generated is stored as water behind the dam.  A utility can purchase cheap energy to meet customer demand, while storing hydro capacity by reducing its own generation.  The stored energy behind the dam is then available to be sold at a high price a few hours later — during peak demand periods.

Hydro storage is likely the largest single source of storage in North America. Hydro Quebec claims to have more than 170 million MWh of storage capacity behind its large network of hydro dams and can purchase energy when it is very cheap to power domestic loads, while storing water that can be sold as energy a few hours later, during peak periods, at a much higher price.

But in addition to this traditional view of storage, there are numerous other systems used to provide storage, in very different ways.  Enbala has been a pioneer in the use of load devices to provide this type of storage, using water pumps that fill a reservoir, controlling water heaters, EV chargers or domestic battery systems used to support solar installations.  While most of these provide relatively small amounts of storage, the fact that they are based on the use of equipment that has been provided and paid for to meet another purpose means that this storage approach will have a very low cost.  With the amounts of storage that will be needed in future, this form of storage may play a very large role.

The opportunity for the energy system may have extremely high value. It is clear that if the capacity of intermittent generation increases as expected, storage will play an ever-growing role. The need for storage will be potentially very large and may require drawing on many different sources that can be cost-effective. Some storage may be provided by large hydro generators, while some may come from home-based systems including EV chargers, domestic hot water tanks and backup home generators.

An Enbala customer in Australia has demonstrated this concept well. Many homes equipped with solar electric generation and storage batteries found that their battery systems were costly, and when all costs were considered, they would have been better off to purchase all electricity from the grid. A forward-thinking utility teamed with Enbala to find a way to partner with homeowners, enabling them to share their battery with the utility to reduce the utility peak demand at times when marginal electrical capacity was expensive and to give it back a few hours later after prices had declined. This allowed the homeowner to maintain ample storage for day/night and backup capacity, but it also provided a revenue stream that helped with the economics of the battery.

There will be other streams available over time. It is apparent that storage, in general, needs more than one source of revenue to be feasible, and this may be an important tool in managing the grid in future years. Many partnerships may evolve as we build a sustainable grid. The partnerships will span the system from basic supply to end use, and there will be value for all participants.

The future will look very different than the past. The total energy delivered by the grid will increase dramatically as we shift away from fossil fuels, and opportunities will exist for good benefits for all participants. The factor that may make the difference between success and failure may well come down to a partner structure that allows all participants fair access and the opportunity to influence overall outcomes that will benefit everyone. It is gratifying to see some utilities recognize this as a valuable opportunity, and Enbala is very proud to be able to support their initiatives.

Hydrogen – the fuel of the future?

In my lifetime, there have been several senior executives or administrators that have made statements that were intended to give a glimpse of the future. In all cases, the statements seemed bold at the time, and on reflection today, the results have been interesting.

Some of these are…

  1. “Electricity will soon be too cheap to meter.” This statement, made in 1954 by the chairman of the US Atomic Energy Commission, was presumably based on the assumption that electricity prices, driven by a rapid proliferation of nuclear electricity generation, would fall to very low levels. While that has not happened, electricity prices remain a real bargain, and the availability of more renewable energy may be creating downward pressure on prices today.
  2. In the 1970s, the CEO of Digital Equipment Corp (DEC) claimed that there would never be any need for a computer in any home. For background, DEC at that time was the second largest manufacturer of computers after IBM. The company eventually disappeared, and of course, most homes now have computer chips inside appliances, and home computers are often considered to be essential.
  3. In about 1990, Bill Gates was quoted as saying that we needed to be prepared for a world where communications would be available at almost no cost. At the time, I recall having to pay about $2 for a 3-minute phone call to talk to my mother less than 500 miles away. I laughed at the concept, but the use of fibre-optic technology resulted in a rapid and dramatic increase in availability and a collapse in the cost of communications. The widespread use of the internet is perhaps the most significant result. We now routinely use “Zoom” or “Teams” for video conferences, a process that cost more than $1,000 per hour in 1990. This technology has delivered enormous changes in all our lives, in a timeframe that most of us can remember well.
  4. One of the biggest “missed opportunities” between prediction and reality was a study done for AT&T in 1985 on a new device called a cellular phone. At the time, the study suggested that the market in 15 years would remain small, and AT&T chose not to invest heavily in the technology. The actual use of cell phones in 2000 was more than 10,000% larger than the study predicted.

Each of these predictions has had an impact, particularly the statement by the DEC CEO: his company no longer exists, and most people don’t even remember who DEC was.

Perhaps, most interesting however, was the prediction in 1954 hinting at a future based on nuclear energy.

Many years later, I attended a presentation by one of the founders of Ballard, the Vancouver based fuel cell company.  His presentation described a future hydrogen economy that would deliver clean and plentiful energy.  At the end of the presentation, I asked where the hydrogen was going to come from.  His response was simple and direct. “One day, we will recognize the essential need to embrace nuclear electricity generation, and with that will come the new clean energy economy based on hydrogen.”  I have wondered many times since, if the fuel cell development was sparked by the Atomic Energy Commission speech on electricity being “too cheap to meter.”

Fuel Cell technology has consumed large amounts of capital and the developments have produced some impressive systems and products.  There is a powerful lobby by companies that have invested heavily in this concept.  But with current electricity prices, is this concept sustainable, or is it likely to be pushed aside by alternate concepts?

Powering personal vehicles has been a big draw for the fuel cell industry.  At the same time, battery prices have fallen dramatically, and that battle, it appears, may have been won for the moment by Battery Electric Vehicles (BEV).

BEVs have two significant advantages for automotive use: the fuel cost is low and the efficiency is high. As described in a previous post, the BEV is an ideal recycling machine, as each time the car is stopped, kinetic energy of motion is returned to the battery. This approach increases efficiency significantly. A fuel cell vehicle suffers from an overall efficiency gap. The BEV takes electricity and charges a battery; the return efficiency for this storage is 90% or better. The fuel cell vehicle takes electricity and uses it to create hydrogen from water, a process that is about 75% efficient. The hydrogen is then stored in the vehicle, and a fuel cell uses it to make clean electricity at an efficiency of about 40%. The efficiency may be higher if the heat produced can be captured and used. The overall efficiency without heat capture is about 30%, and when compared to the BEV at 90% for this process, the fuel cell may not be competitive. To capture the recycling component of the BEV, fuel cell vehicles now add a battery to provide short-term power and to recycle energy when decelerating or stopping.
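
The gap comes straight from multiplying the per-stage efficiencies; a minimal sketch using the round numbers from the text:

```python
# Comparing drivetrain energy chains using the approximate figures in the text.
def chain_efficiency(*stages: float) -> float:
    """Multiply per-stage efficiencies into an overall figure."""
    result = 1.0
    for stage in stages:
        result *= stage
    return result

bev = chain_efficiency(0.90)          # battery round trip, ~90%
fcev = chain_efficiency(0.75, 0.40)   # electrolysis ~75% x fuel cell ~40%

print(f"BEV storage chain:       {bev:.0%}")   # ~90%
print(f"Fuel-cell storage chain: {fcev:.0%}")  # ~30%
```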

There are, however, some advantages of fuel cell technology in transportation. The on-board equipment in a vehicle is far lighter than a battery system. Also, hydrogen can be generated at a filling site during off-peak electricity periods and stored, to be quickly delivered to the vehicle when needed. There has been considerable interest in using this technology for long-haul trucking and applications where charging times are a productivity issue.

I once examined the weight of a battery system required to power a large truck for 500 miles, and by my calculations, the battery was so heavy, that the capacity of the truck would be limited to delivering a few bags of feathers.  This issue has apparently been addressed, as Tesla claims that their truck will haul heavy loads for more than 500 miles.

Technology is in a state of rapid change, and there are some real decisions that may make our future follow very different paths.

Is there a renaissance coming for nuclear electricity? Will the future predicted by the Atomic Energy Commission and the founder of Ballard become reality through Small Modular Reactors or fusion-based energy? Are battery systems key to our future? Both areas are subjects of great interest and research. At present, battery technology appears to have a lead, but will that continue? We must stop or dramatically reduce the burning of fossil fuels, yet current energy sources are more than 80% fossil-based, and renewable sources provide less than 5%. Can a transition be made in a cost-effective way that will meet needs in less than 30 years? Can renewables meet the challenge ahead, or is there going to be a need for another source of energy, perhaps cheaper than what we have today? The results of current research, and the choices made, will almost certainly have a huge impact on the methods used to deliver and use energy.

We live in interesting times.

Electric Vehicles – A Smart Recycling Machine!

I was recently asked by a friend how I could claim that my EV was so much more efficient than his new gasoline powered car.  I told him that my car was more efficient, but it weighed almost 1,000 pounds more and had much more power than his new car.  He was skeptical of my claims, to say the least.

I examined the statistics on his car and found that the combined average fuel consumption for both city and highway travel was 9.1 L/100 km, equivalent to 25.8 miles/US gallon. I then looked at the energy use for my car, for the entire time that I have owned it, and found that it has used 162 watt-hours/km, equivalent to 260.7 watt-hours/mile.

I did a little conversion of the energy in a US gallon of gasoline to kWh. A US gallon of gasoline contains approximately 33.7 kWh of energy. A car that gets 25.8 mpg burns 1/25.8 gallons per mile, or 0.0388 gallons/mile. This is equal to approximately 1,306 watt-hours/mile, about 5 TIMES the energy used by my bigger and heavier EV. My friend was surprised, as he had assumed the cost savings were only a result of taxes on gasoline that are not charged to EV vehicles.
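
The conversion is easy to check explicitly; the only outside figure is the standard energy content of gasoline, roughly 33.7 kWh per US gallon:

```python
# Checking the gasoline-vs-EV comparison with explicit unit conversions.
GASOLINE_KWH_PER_GAL = 33.7    # standard energy content of a US gallon of gasoline

ice_mpg = 25.8                 # friend's car, combined city/highway rating
ev_wh_per_mile = 260.7         # author's measured EV consumption

ice_wh_per_mile = GASOLINE_KWH_PER_GAL * 1000 / ice_mpg
print(f"gasoline car: {ice_wh_per_mile:,.0f} Wh/mile")           # ~1,306 Wh/mile
print(f"EV:           {ev_wh_per_mile:,.0f} Wh/mile")
print(f"ratio:        {ice_wh_per_mile / ev_wh_per_mile:.1f}x")  # ~5.0x
```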

That number also surprised me. A new gasoline engine that is operating well is generally capable of generating mechanical power at up to about 40% efficiency, but typical total vehicle efficiency is about 20-30%. By comparison, EV motors are about 80-90% efficient, and total vehicle efficiency is claimed to be about 70%. That would suggest that the difference should be about 3.5x. The measured ratio of roughly 5x captured my interest because I had expected it to be smaller, given that his mileage rating was the manufacturer’s claim, while mine was based on actual driving, largely in wintertime.

After looking at EV operations, I found the answer. A vehicle powered by a gasoline engine uses power to accelerate the car to driving speed, but each time the car is required to stop, the driver typically applies the brakes. The brakes convert the kinetic energy of the car's motion into heat that is dissipated into the air. While this is seldom considered, it is a significant amount of energy, particularly where the car is starting and stopping frequently. The EV, on the other hand, uses a clever concept called regenerative braking. If you take your foot off the accelerator pedal, the car slows rapidly, and many of the new EVs will come to a complete stop; the brakes are applied only for sudden or unexpected stops. Many EV drivers claim that they use “one pedal driving,” almost never touching the brakes. When the accelerator pedal is released, the car slows rapidly because the motor becomes a generator, collecting the kinetic energy of the slowing car and putting it back into the battery. Apparently, this makes a significant difference.

I am aware of one EV club that has planned a demonstration of this process. They will drive many EVs to a park at the top of a local mountain, record the amount of energy used going up, and then repeat the measurement going back down. In fact, most of the EVs should arrive at the bottom with more energy in the battery than they had when they left the top of the mountain. Several gasoline-powered cars will do the same route, with added measurement equipment to accurately measure the gasoline consumption.

One of the key advantages of the EV in the years ahead will be its overall efficiency and its ability to use regenerative braking. It is a rarely mentioned recycling system that improves operation by recovering energy that would otherwise be lost as heat.

I live near the bottom of a mountain highway, and it now gives me real pleasure to see the battery level increase as I drive down the highway from the summit to my home after a trip to the other side. The comparison with a gasoline powered car, done by the EV club may produce some very interesting results.

In North America, when one looks at the portion of primary energy that results in actual work, the overall efficiency is less than 35%.  As we reduce the carbon emissions, to address climate change, one of the easy methods to consider may well be a full-scale program to increase the efficiency of every aspect of energy use.  It may be the best way to reduce the impacts of many of the changes that lie ahead.


Climate – An Action Plan Optimized for Minimum Pain

Climate action is on most people’s minds these days. Many people seem to think that this is an industrial problem that governments can force industry to solve. The 50% increase in light truck sales in the last few years demonstrates that this is not just an industry problem. It will be everyone’s problem, and there needs to be careful thought into how one can reduce emissions with minimal impact on the quality and cost of life. Without careful planning, this may be painful and costly.

We are inefficient in our use of energy. A few examples stand out. Overall, about 32% of the primary energy that we start with is used effectively. Automobiles powered by gasoline or diesel fuel are generally less than 25% efficient. Even new technology is not all that efficient. A solar panel that makes electricity is less than about 25% efficient, while a solar collector that makes hot water may be more than 80% efficient. If the goal is to reduce fossil fuel used for heating, it is difficult to see why one would make electricity at 25% efficiency to provide heat, when a solar hot water collector of the same size can deliver three to four times the energy.

We have reached a state where fast response to reduce emissions will be essential to meet the established targets. Real thought and careful planning will be required to reduce fossil fuel emissions without a major decrease in our quality of life. If ever there was a need to seek the “low hanging fruit,” this must be the time.

For example, consider transportation. At a commercial level, we are beginning to see electric busses and trains. Automobiles are in the hands of individuals. An Internal Combustion Engine (ICE) powered car is about 20-25% efficient, while a Battery Electric Vehicle (BEV) is more than 70% efficient.

Supplying energy for a fleet that is converted to EVs may be a big challenge. The US supply of liquid petroleum contains almost 3 times the energy of the total electricity generated on the US grid. Of the total petroleum liquids consumed, 75% is used for transportation, and the balance is used almost entirely by industry. Personal vehicles consume almost 60% of the transportation portion, and the energy content of their fuel equals roughly 133% of the TOTAL electricity generated in the US. Fortunately, electric cars seem to use about 1/5 the energy used by an ICE-powered car, so the increase in electricity use may be reasonable, provided it can be controlled to charge vehicles during off-peak periods when surplus capacity is available.
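
The arithmetic behind that estimate is worth making explicit; the shares below are the figures from the text:

```python
# Rough arithmetic behind the fleet-conversion estimate (shares from the text).
petroleum_vs_grid = 3.0        # petroleum energy ~ 3x total US grid electricity
transport_share = 0.75         # share of petroleum used for transportation
personal_share = 0.60          # share of transportation used by personal vehicles
ev_energy_ratio = 1 / 5        # EVs use ~1/5 the energy of an ICE vehicle

fuel_energy_vs_grid = petroleum_vs_grid * transport_share * personal_share
print(f"fuel energy of personal vehicles: {fuel_energy_vs_grid:.0%} of grid output")
# -> ~135%, matching the text's roughly-133% figure
print(f"electricity needed if electrified: {fuel_energy_vs_grid * ev_energy_ratio:.0%}")
# -> ~27% of today's grid output
```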

Most people driving battery electric vehicles are expected to prefer to charge their cars at home, because the cost is less. These vehicles get most of their energy at night. Apartment buildings are installing systems that throttle the charge rate to share capacity among many vehicles. This avoids or defers the need to upgrade the electric supply to the building, but it may limit the amount of charge any single vehicle will get overnight. People returning from a long trip with a low battery may be encouraged to use a DC fast charger to bring the battery to a reasonable level, and then plug in when they get home.

However, the DC fast chargers that are appearing may become a problem if there are too many in use at any one time. Some of these will charge a car at up to 250 kW. One of these chargers, when charging a car, could use as much power as 25 homes all running clothes dryers and a stove at the same time.

There also appears to be interest in the application of hydrogen. Hydrogen can be made from surplus renewable electricity at about 75% efficiency, but there are significant added costs and energy needs for compression and storage. Hydrogen fuel cell powered cars can provide transportation and have the advantage that they can travel longer distances than BEVs, but the efficiency is less than about 40%. However, a hydrogen-powered vehicle can refuel rapidly at a station, and the hydrogen put into the vehicle can be made over a long period of low demand on the electric grid, so there may be some benefits to the use of hydrogen, albeit at a much higher cost per mile than the BEV.

We are facing unprecedented changes. The challenge and opportunity for users is to find a path that will use less energy and cost less, while allowing one to maintain their living quality. On the other side, there are some real opportunities for companies that can provide demand management as these changes will require smart systems that can provide electrical energy to charge cars, or to make hydrogen without impairing the ability of the grid to meet the recognized needs of customers.

The conversion of our ground transportation systems may well become the critical factor in our objective of meeting the emission target.