
Power plant efficiency boosted with machine learning technique

Power plants could be made more efficient thanks to a data-driven machine learning approach from University of Stuttgart researchers, which shows how retrofitting such facilities could result in cleaner, safer and more efficient operation.

In conventional steam power plants, residual water must be separated from the power-generating steam. This process limits efficiency, and in early-generation power plants it could be volatile, leading to explosions.

In the 1920s, Mark Benson realised that the risk could be reduced and power plants made more efficient if water and steam could coexist. This could be achieved by bringing water to a supercritical state, in which a fluid exists as both a liquid and a gas at the same time.

While the costs associated with generating the temperature and pressure conditions necessary to achieve supercriticality prevented Benson’s patented Benson Boiler from being widely adopted at power plants, his concepts offered the world its first glimpse at supercritical power generation.

Using high-performance computing (HPC), the researchers are developing tools that can make supercritical heat transfer more viable.

“Compared with subcritical power plants, supercritical power plants result in higher thermal efficiency, elimination of several types of equipment, such as any sort of steam dryer, and a more compact layout,” said team member Sandeep Pandey, a PhD candidate at the university’s Institute of Nuclear Technology and Energy Systems (IKE).

While power generation and other industrial processes use a variety of fluids to generate steam or transfer heat, water is the most commonly used because it is relatively predictable.

However, the high temperature (374°C) and pressure (at least 22.1 megapascals) needed to bring water to this critical point led Pandey and his colleagues to investigate using carbon dioxide (CO2) instead.

The common molecule offers a number of advantages, chief among them that it reaches supercriticality at just over 31°C, far milder conditions than water requires, and it is comparatively environmentally friendly.
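To put those critical points side by side, here is a short sketch that queries them from the open-source CoolProp property library. This is purely illustrative; CoolProp is not the team's tooling, just a convenient way to check the figures quoted above.

```python
# Compare the critical points of water and CO2 using CoolProp
# (illustrative only; not the tooling used by the research team).
from CoolProp.CoolProp import PropsSI

for fluid in ("Water", "CO2"):
    t_crit = PropsSI("Tcrit", fluid) - 273.15   # critical temperature, degC
    p_crit = PropsSI("Pcrit", fluid) / 1e6      # critical pressure, MPa
    print(f"{fluid}: Tcrit = {t_crit:.1f} degC, Pcrit = {p_crit:.2f} MPa")

# Expected output (approximately):
# Water: Tcrit = 373.9 degC, Pcrit = 22.06 MPa
# CO2:   Tcrit = 31.0 degC,  Pcrit = 7.38 MPa
```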

“SCO2 [supercritical CO2] actually has zero ozone depletion potential and little global warming potential or impact when compared to other common working fluids, such as chlorofluorocarbon-based refrigerants, ammonia and others,” Pandey said.

In addition, sCO2 needs far less space and can be compressed with far less effort than subcritical water. This, in turn, means a smaller power plant: an sCO2 plant requires roughly ten times less hardware for its power cycle than traditional subcritical power cycles.

However, in order to replace water with carbon dioxide, engineers need to thoroughly understand its properties on a fundamental level, including how the fluid’s turbulence - or uneven, unsteady flow - transfers heat and, in turn, interacts with machinery.
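For context, the kind of simple engineering model normally used for turbulent pipe heat transfer is an empirical correlation such as Dittus-Boelter. The sketch below shows that approach; its assumption of near-constant fluid properties breaks down near the critical point, which is why the team turns to high-fidelity simulation instead. The numbers in the example are arbitrary, not values from the study.

```python
# Classic Dittus-Boelter correlation for turbulent heat transfer in a pipe:
#   Nu = 0.023 * Re**0.8 * Pr**0.4   (for a fluid being heated)
# It assumes near-constant fluid properties, which fails near the critical
# point, where density, viscosity and specific heat vary sharply.

def dittus_boelter_htc(re: float, pr: float, k: float, d: float) -> float:
    """Heat transfer coefficient h [W/m^2 K] from Reynolds number re,
    Prandtl number pr, thermal conductivity k [W/m K], pipe diameter d [m]."""
    nu = 0.023 * re**0.8 * pr**0.4  # Nusselt number
    return nu * k / d

# Illustrative inputs only (not from the study):
h = dittus_boelter_htc(re=5e4, pr=2.0, k=0.08, d=0.01)
print(f"h = {h:.0f} W/m^2 K")
```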

Using the stress and heat transfer data coming from its high-fidelity direct numerical simulation (DNS) runs, the team trained a deep neural network (DNN).

Traditionally, researchers train machine learning algorithms on experimental data so that they can predict heat transfer between fluid and pipe under a variety of conditions. When doing so, however, researchers must be careful not to “overfit” the model; that is, not tune the algorithm so closely to a specific dataset that it fails to give accurate results on other datasets.

Using Hazel Hen, the team ran 35 DNS simulations, each focused on one specific operational condition, and then used the generated dataset to train the DNN. The network takes inlet temperature and pressure, heat flux, pipe diameter and the heat energy of the fluid as inputs, and produces the pipe’s wall temperature and wall shear stress as outputs.
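The article does not specify the network architecture, but a minimal sketch of a network with the five inputs and two outputs described above might look like this in PyTorch. The layer widths and depth are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Minimal sketch of the kind of network described: 5 inputs (inlet
# temperature, inlet pressure, heat flux, pipe diameter, fluid heat
# energy) mapped to 2 outputs (wall temperature, wall shear stress).
# Layer sizes are assumptions; the article gives no architecture details.
model = nn.Sequential(
    nn.Linear(5, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

x = torch.randn(8, 5)   # a dummy batch of 8 operating points
y = model(x)            # predicted wall temperature and wall shear stress
print(y.shape)          # torch.Size([8, 2])
```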

Eighty per cent of the data generated in the DNS simulations is randomly selected to train the DNN, while researchers use the other 20 per cent of data for simultaneous, but separate, validation.

This ‘in situ’ validation work is important for avoiding overfitting: the run is restarted if the algorithm begins to show a divergence between its results on the training and validation datasets. “Our blind test results show that our DNN is successful in counter-overfitting and has achieved general acceptability under the operational conditions that we covered in the database,” Pandey said.
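In code, that 80/20 random split with a divergence check might look like the following sketch. The divergence threshold, optimiser settings and dummy data are all assumptions, not details from the study.

```python
import torch
import torch.nn as nn

# Sketch of the 80/20 random split and in-situ validation described above.
# The divergence threshold and training settings are assumptions.
def train_with_validation(model, inputs, targets, epochs=200, threshold=3.0):
    n = inputs.shape[0]
    idx = torch.randperm(n)           # random shuffle of sample indices
    split = int(0.8 * n)              # 80% train / 20% validation
    train_idx, val_idx = idx[:split], idx[split:]

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(epochs):
        model.train()
        opt.zero_grad()
        train_loss = loss_fn(model(inputs[train_idx]), targets[train_idx])
        train_loss.backward()
        opt.step()

        model.eval()
        with torch.no_grad():
            val_loss = loss_fn(model(inputs[val_idx]), targets[val_idx])

        # If validation loss diverges from training loss, flag overfitting
        # (the article says the run is restarted in that case).
        if val_loss > threshold * train_loss:
            print(f"epoch {epoch}: divergence detected, restart training")
            break

# Example with dummy data (stand-ins, not the DNS dataset):
model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 2))
train_with_validation(model, torch.randn(100, 5), torch.randn(100, 2))
```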

After the team felt confident in the agreement, they used the data to start creating a tool for more commercial use. Using the outputs from its recent work as a guide, the team was able to use the DNN to predict the heat transfer for a new operational condition in 5.4 milliseconds on a standard laptop computer.
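A rough way to reproduce that kind of timing measurement on a trained network is shown below. This is not the team's benchmark code, the stand-in model is an assumption, and the figure will vary entirely with the hardware used.

```python
import time
import torch
import torch.nn as nn

# Stand-in for a trained network (assumption; see the earlier sketch).
model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

x = torch.randn(1, 5)        # one operational condition
with torch.no_grad():
    model(x)                 # warm-up call, excluded from the timing

start = time.perf_counter()
with torch.no_grad():
    y = model(x)             # single CPU inference pass
elapsed_ms = (time.perf_counter() - start) * 1e3
print(f"inference time: {elapsed_ms:.2f} ms")
```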

“Researchers at IKE are working with both experiments and numerical simulations,” Pandey said. “As part of the numerical team, we are seeking answers for poor heat transfer. We study the complex physics behind fluid flow and turbulence, but the end goal is to develop a simpler model.

“Conventional power plants help facilitate the use of renewable energy sources by offsetting their intermittent energy generation, but currently aren’t designed to be as flexible as their renewable energy counterparts. If we can implement sCO2-based working fluids, we can improve their flexibility through more compact designs, as well as faster start-up and shut-down times.”

For a more technical description of the technology, check out this blog post.
