In the first article of this series, Frank Moore gave an overview of the large volumes of data flowing into an ethanol plant’s control system every day. This data contains valuable information that can help plants focus on key decisions – the ones that will improve their performance. But the question is how to realize the data’s full potential. The answer – as Frank explained – is advanced analytics. In this article, we’re going to dig into a ‘hard’ example of advanced analytics in practice in ethanol plants.

Bringing consistency to the fermentation process

The first step in developing an advanced analytics strategy is to use a statistical process control methodology. Statistical process control allows plants to identify and apply optimal process conditions, leading to consistently better performance. In fermentation, this translates to achieving the highest-performing fermentation batches possible, as frequently as possible. When implemented correctly, statistical process control can lead to ethanol yield increases.

The methodology is effectively a process of elimination and inclusion to achieve consistently high-performing batches. The first step is to classify batches as high-performing, normal or poor based on batch input and output data. Once the cause of a poor-performing batch is identified, the fermentation process is adjusted to eliminate or reduce the factors surrounding it; when the cause of a high-performing batch is identified, the process is adjusted to aim for those parameters as a goal.
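To make the classification step concrete, here is a minimal sketch in Python using pandas. The synthetic data, the ethanol/solid column name and the quartile-based cutoffs are illustrative assumptions, not values from an actual plant; a real analysis would classify on whatever output measurements the plant tracks.

```python
import numpy as np
import pandas as pd

# Hypothetical batch output data: one ethanol/solid ratio per fermentation batch.
rng = np.random.default_rng(0)
batches = pd.DataFrame({
    "batch_id": range(1, 101),
    "ethanol_per_solid": rng.normal(0.43, 0.007, 100),
})

# Illustrative convention: top quartile = high-performing, bottom quartile = poor.
q1, q3 = batches["ethanol_per_solid"].quantile([0.25, 0.75])
batches["performance"] = np.select(
    [batches["ethanol_per_solid"] >= q3, batches["ethanol_per_solid"] <= q1],
    ["high-performing", "poor"],
    default="normal",
)
print(batches["performance"].value_counts())
```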

Taking control of the data

A wide range of input and output data – including solids, backset, additions, prop send counts, dextrose, total sugars, glycerol, acids, ethanol and ethanol/solid measurements – is gathered during the fermentation process. Once all this unfiltered data is in place, advanced analytics can bring it under control so plants can make sense of it and make decisions based on the results.

The first step is to find and remove any ‘outlier’ batches. These are batches outside the defined upper and lower control limits or – for example – batches with comments from operators flagging an upset in the process. An example could be a batch that – due to a pump issue – received a double dose of glucoamylase. Below (Figure 1) is an analysis of ethanol/solid measurements. To the right, you can see the upper control limit defined at 0.4511. Any batches with measurements above this limit go ‘into the red’ and are considered outliers. Similarly, batches below the lower control limit of 0.4103 are considered outliers and are also removed from the data set.
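The article doesn’t state how the limits of 0.4511 and 0.4103 were derived; a common statistical process control convention is to set them at the mean plus or minus three standard deviations. Here is a minimal sketch under that assumption, with synthetic data standing in for real batch measurements. Batches flagged by operator comments (like the double glucoamylase dose) would be removed alongside these statistical outliers.

```python
import numpy as np
import pandas as pd

# Synthetic ethanol/solid measurements, one per batch (stand-in for real data).
rng = np.random.default_rng(1)
ratios = pd.Series(rng.normal(0.43, 0.007, 400), name="ethanol_per_solid")

mean, sigma = ratios.mean(), ratios.std()
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

is_outlier = (ratios > ucl) | (ratios < lcl)
print(f"UCL={ucl:.4f}  LCL={lcl:.4f}  outliers flagged: {is_outlier.sum()}")
```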

Having identified and removed the outlier batches, the next step is to recreate the control charts with new upper and lower control limits. Below (Figure 2) you can see how the previous data set looks after this process step, with 158 outlier samples removed and the control limits redefined accordingly. There are now ‘new’ outliers, but these do not need to be removed: for this process, only one round of outlier removal is required, although other process analyses may need more.
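Continuing the sketch above, the single round of removal and limit redefinition might look like this, where points outside the redefined limits are identified but deliberately left in the data set:

```python
import numpy as np
import pandas as pd

def one_pass_clean(values: pd.Series, k: float = 3.0):
    """Remove outliers once, then redefine control limits on the cleaned data."""
    mean, sigma = values.mean(), values.std()
    inside = values[values.between(mean - k * sigma, mean + k * sigma)]
    new_lcl = inside.mean() - k * inside.std()
    new_ucl = inside.mean() + k * inside.std()
    return inside, new_lcl, new_ucl

rng = np.random.default_rng(1)
ratios = pd.Series(rng.normal(0.43, 0.007, 400))
clean, lcl, ucl = one_pass_clean(ratios)

# 'New' outliers relative to the redefined limits are noted but NOT removed:
# for this analysis, only one round of outlier removal is performed.
new_outliers = clean[(clean < lcl) | (clean > ucl)]
print(f"new limits: {lcl:.4f}-{ucl:.4f}, new outliers left in place: {len(new_outliers)}")
```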

This process of ‘cleaning up’ the data to remove outliers is repeated across the full range of input and output measurements mentioned earlier. The result is a set of control charts from which the batches can be classified as high-, average- or poor-performing. Classifying them in this way makes it possible to apply statistical methodologies to identify trends, correlations and significant differences in the data, which could be the key to identifying the causes of batch performance.
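Applying the same one-pass cleanup across every measurement might look like the following sketch. The column names and synthetic values are assumptions for illustration; here a batch is dropped if it is a statistical outlier on any one measurement.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 300
# Hypothetical per-batch measurements (column names are illustrative).
df = pd.DataFrame({
    "solids": rng.normal(33.0, 0.8, n),
    "dextrose": rng.normal(12.0, 0.5, n),
    "glycerol": rng.normal(1.3, 0.1, n),
    "ethanol_per_solid": rng.normal(0.43, 0.007, n),
})

def drop_outliers(df: pd.DataFrame, k: float = 3.0) -> pd.DataFrame:
    """Keep only batches inside mean +/- k*sigma on every measurement (one pass each)."""
    keep = pd.Series(True, index=df.index)
    for col in df.columns:
        mean, sigma = df[col].mean(), df[col].std()
        keep &= df[col].between(mean - k * sigma, mean + k * sigma)
    return df[keep]

clean = drop_outliers(df)
print(f"{len(df) - len(clean)} outlier batches removed, {len(clean)} remain")
```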

Benefits beyond yield

Applying advanced analytics to the fermentation process and other production units in the plant can also help to uncover costly mistakes. One example is a plant that had wide variations in ethanol production for each fermented batch. These variations only became apparent when the batch dataset was streamlined. Advanced analysis allowed a ‘track-back’ to the cause: alpha amylase dosing in the cook process. It became clear that operators were responding to issues in the cook process by turning up the alpha amylase dosage. Once an issue was resolved, however, the dose was not reset. A return to the recommended optimal dose led to the plant making significant savings on enzyme spend.

Everything starts with data collection and management

Although advanced analytics can be complex, the investment in process analysis will pay off. Advanced analytics relies on good data collection and data organization. To take the example of statistical process control in fermentation: finding correlations between fermentation process inputs and batch output usually requires data from over 60 consistent batches. Consistent batches are batches resulting from stable operations – new product trials, plant hiccups, shutdowns and other unusual process situations can’t be part of the batch analysis dataset, and neither can batches with major changes beyond the plant’s natural variation. The data also needs to be reasonably recent, usually not more than three months old, depending on plant capacity and run rate.
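As an illustration of those requirements, a pre-analysis check might enforce the batch count and recency before computing any correlations. This is a sketch only: the 90-day window and 60-batch floor come from the rules of thumb above, while the column names, dates and synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "drop_date": pd.date_range(end="2021-06-30", periods=n, freq="D"),  # hypothetical
    "solids": rng.normal(33.0, 0.8, n),
    "dextrose": rng.normal(12.0, 0.5, n),
})
# Synthetic output loosely tied to solids so there is a correlation to find.
df["ethanol_per_solid"] = 0.40 + 0.001 * df["solids"] + rng.normal(0, 0.004, n)

# Keep only reasonably recent batches (about three months, per the rule of thumb).
recent = df[df["drop_date"] >= df["drop_date"].max() - pd.Timedelta(days=90)]
if len(recent) < 60:
    raise ValueError("need data from 60+ consistent batches for reliable correlations")

# Pearson correlations between each input and the batch output.
print(recent[["solids", "dextrose", "ethanol_per_solid"]].corr()["ethanol_per_solid"])
```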

And it’s not enough just to collect a large volume of quality data. Fermentation input conditions can significantly affect fermentation results, so they should be documented on a batch-by-batch (sample-by-sample in continuous plants) basis. The quality of this documentation is also crucial. Going back to the double dosing of glucoamylase example we mentioned earlier: comments about process issues and/or process ‘upsets’ (e.g. a plant shutdown) need to be clearly flagged for each batch/sample. That way, these inconsistent batches can be removed from the data set. Samples should also be taken at regular time points during a batch, or at consistent times of day for continuous-flow plants.
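One simple way to make such flags usable downstream is to carry an explicit upset marker in the batch log, so inconsistent batches can be filtered out programmatically. A minimal sketch with hypothetical fields and comments:

```python
import pandas as pd

# Hypothetical batch log: every batch/sample carries a comment and an upset flag.
log = pd.DataFrame({
    "batch_id": [201, 202, 203, 204],
    "comment": ["", "double glucoamylase dose (pump issue)", "", "plant shutdown mid-batch"],
    "upset": [False, True, False, True],
})

# Flagged batches are excluded before any statistical analysis.
consistent = log[~log["upset"]]
print(consistent)
```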

You can get more information on statistical process control and data management techniques on Novozymes Bioenergy University – contact your account manager for access.

The next and final article of this series will give insights into the potential of the Industrial Internet of Things to improve manufacturing efficiency and facilitate better, faster decision making.

About the authors

Laurie Duval
Senior Technical Service Manager
Biofuels technical service team
Novozymes North America

After graduating with a B.Sc. in biochemistry and biotechnology, Laurie moved directly into the field of enzymatic technologies for renewable fuels. As part of our technical service team, she supports US customers with process and enzyme technology optimization.

Rachel Burton
Technical Service Manager
Biofuels technical service team
Novozymes North America

Rachel has been involved in developing new technologies and innovation for the biofuels industry for 15 years. Today, she leads our digital customer engagement activities exploring the use of data science and digital tools for applications in the biofuel market.

