Question 55: Given the expansion of the Industrial Internet of Things (IIoT), “Big Data”, cloud-based technologies, and advanced analytics, how are you applying these cutting-edge technologies in your work processes and toolkit to optimize FCC yield, reliability, and safety performance?
BHARGAVA (KBC Advanced Technologies, Inc.)
Some of the biggest challenges we have seen with “Big Data” revolve around security. Within the KBC software suite, we focus on security as our major differentiator. Given the connectors we have in The Cloud, we can now download the data from the refinery in a secure fashion and put it into one location; so, when people retrieve the data, they are all looking at one consistent set. That way, Yield Accounting is not working from one set of closure numbers while the process engineer maintains his or her own mass balance, a situation that creates a lot of issues around the refinery.
The other big issue here is accessibility of the data: who inputs the data and who has access to change it. All of those issues must be addressed before you start using The Cloud to improve your operation. Because we are able to use this data and have access to lots of computing capacity, we are now able to micromanage, if you will, in terms of identifying issues and doing performance audits. We are even using this approach with shift performance information to see how one shift is doing against another. We can now get most of the information from The Cloud because we are able to store more data. This easy access to information has allowed simulation to move to a different level. We have been able to use the gobs of data we have from the DCS (distributed control system) to develop models that are more accurate, and we can calibrate these models. In fact, in some of the real-time applications we have deployed, we have been able to calibrate the models every hour. These models also allow us to set operational targets more accurately. That is about as far as we have gone with “Big Data”. So, it is all about finding your best possible operation and benchmarking what you have done across each shift on a 24-hour average.
The benefits of having more data, sifting through it, and identifying good stable operating data to develop simulation models are all being used to generate LP (linear program) vectors, which the planners then use to help with crude selection and a gross optimization. From a yield perspective, it has definitely helped a lot. From a safety and reliability standpoint, we now have enhanced unit monitoring. The old method of unit monitoring was to gather a number of KPIs (key performance indicators), define an operating range, and then try to optimize the unit within that operating envelope. We have used simulation to supply calculated variables that affect the reliability of the unit, and we have expanded the windows from operating envelopes to integrity operating windows, which gives people much better capability to predict equipment failure.
Finally, having this data on The Cloud has allowed the SMEs (subject matter experts) from within the corporate level of the company to look at the same data set and help with the optimization of the unit from a remote location. You can also do this with third-party consulting companies to allow for another perspective with a fresh set of eyes to help make sure that the unit is running optimally on a daily basis.
SANJAY BHARGAVA (KBC Advanced Technologies)
With the advent of “Big Data” and cloud-based technologies, refiners are now able to harvest mega data in a process environment. Hundreds of thousands of values are being generated every few seconds to measure, monitor, control, and optimize plant operations. The major benefit of “Big Data” is to demystify the different plant data used by various parts of the organization by using a common set of information that is utilized by all departments. In the modern refinery, the key is transparency in strategy, philosophy, and shared goals. The benefits are seen in reliability, safety, and operator consistency.
Also, refiners using in-house or third-party simulation technologies are able to extract precise information from these “lakes” of data about the performance and compare it to the best possible performance of the plant. Some companies are already tapping into this pool of information – through the use of simulations – to provide a robust means for selecting a set point or target setting of the independent variables.
The data is also being used in open-loop controls (and limited closed-loop controls) to help make advanced decisions to improve plant performance. The simulation models are being calibrated and updated to maintain prediction fidelity. The calibrated models are being used to generate LP vectors to keep the refiner’s planning models updated so that they mimic actual plant operation and optimize feed and operating conditions, such as riser outlet temperature, feed preheat temperature, and main fractionator cutpoints in FCC units.
Advanced or enhanced unit monitoring applications are being used by refiners to convert raw data into key performance indicators. These applications are being linked to simulation technologies in real time to generate calculated/derived key performance indicators focused on ensuring operation within integrity operating windows (IOWs). By monitoring for corrosion, erosion, fouling, and accelerated mechanical equipment damage, they allow for early action and prevention, which improves reliability and safety performance and maximizes plant life. These enhanced applications are also being used for predictive maintenance, as well as for monitoring the performance of operators between operating shifts and bringing shift performance closer to best possible operation, which benefits all shift teams. These advanced unit monitoring applications, in addition to cloud-based technologies with added security, are now being used by managers to view plant performance dashboards on their mobile devices. Finally, the data can also be used to perform remote monitoring by subject matter specialists – both within and outside of the organization – to optimize FCC operations. This monitoring can now be done because of enhanced security systems put into place to protect refinery data.
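The IOW-based monitoring described above can be illustrated with a minimal sketch. This is not KBC’s actual software; the tag names, limits, and values below are hypothetical examples of checking derived KPIs against their integrity operating windows.

```python
# Illustrative sketch (not any vendor's actual software) of checking
# derived KPIs against integrity operating windows (IOWs).
# Tag names and limit values are hypothetical.
from dataclasses import dataclass

@dataclass
class IOW:
    name: str
    low: float    # integrity low limit
    high: float   # integrity high limit

def check_iows(readings: dict, windows: list) -> list:
    """Return the names of KPIs that fall outside their IOW."""
    excursions = []
    for w in windows:
        value = readings.get(w.name)
        if value is not None and not (w.low <= value <= w.high):
            excursions.append(w.name)
    return excursions

windows = [
    IOW("regen_dense_bed_temp_F", 1250.0, 1350.0),  # hypothetical FCC limits
    IOW("riser_outlet_temp_F", 950.0, 1020.0),
]
readings = {"regen_dense_bed_temp_F": 1362.0, "riser_outlet_temp_F": 980.0}
print(check_iows(readings, windows))  # -> ['regen_dense_bed_temp_F']
```

In a real application, the readings would come from the historian or DCS, and an excursion would trigger the early-action workflow described above rather than a simple print.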
KATHERINE JONES (Honeywell UOP)
Honeywell recognizes the untapped potential of the industrial internet of things (IIoT) and sees these capabilities as the next revolution in industry:
- IIoT enables users to leverage all available data, including stranded data that, until now, has not been readily used in business decisions.
- Cloud-based solutions allow for a more collaborative environment, enabling users to benefit from global expertise.
- The use of big data and analytics, combined with the computing power of the Cloud, enables the investigation of large data sources and the discovery of new insights.
Maintaining high onstream efficiency and reliability is critical for maximizing plant profitability. To help our customers maximize the use of their assets, Honeywell UOP offers a suite of cloud-based services with the Honeywell Connected Plant. One of these services, Process Reliability Advisor, combines plant data with UOP process models that are tuned to current operation, plus a fault model, which captures cause-and-effect relationships and troubleshooting guides specific to the process. The service utilizes a cloud-based platform that supports big data analytics and machine learning to drive continual innovation.
Process Reliability Advisor enables our customers to:
- Detect performance gaps early and perform mitigation steps,
- Understand the unit’s current performance and asset utilization based on constraints, and
- Promote learning and knowledge retention, leveraging UOP insights and context.
More information, including videos on how Process Reliability Advisor works, can be found at www.uop.com/cps.
Question 35: How far can the hydrogen to hydrocarbon ratio be decreased in gasoline hydrotreating units before experiencing high reactor pressure drops? Please provide some details of your experience with reference to the run length limitations and operating performance.
Ujjal Roy (Indian Oil Corporation)
We have a number of naphtha hydrotreating units in our refineries, some operating with straight-run naphtha as feed and others in mixed mode with significant cracked feedstock varying from 10% to 40%, to produce feedstock for catalytic reformers. I suppose the question here concerns hydrotreating units processing cracked components.
Our straight-run naphtha hydrotreating units are designed for low pressure (i.e., 20-25 kg/cm2.g) and a gas-to-oil ratio of 40-75 Nm3/m3, depending on feedstock characteristics and desired product quality. Hydrotreaters designed to process FCC gasoline have a design gas-to-oil ratio of about 400-500 Nm3/m3 of feed and operate at about 50 kg/cm2. The designer sets the hydrogen partial pressure, through the gas-to-oil ratio and system pressure, based on the given feed characteristics and the target product with respect to olefin, sulphur, and nitrogen content. Difficult feedstocks with higher nitrogen content require higher hydrogen partial pressure. A reduced gas-to-oil ratio can only be partially compensated through higher reactor inlet temperature (RIT) for equivalent nitrogen removal. But running at higher RIT to compensate for a lower gas-to-oil ratio, with cracked components in the feed, will accelerate the coking rate on the catalyst, leading to high pressure drop. Coke formed on the top of the catalyst bed can cause excessive pressure drop and channeling within the reactor, which shows up as a radial temperature spread. Delta T across the first bed of the reactor will increase because less hydrogen is available as a heat sink. Also, a lower gas-to-oil ratio aggravates coke formation in the preheat exchangers, resulting in high pressure drop. All of these would finally lead to slippage of sulphur and nitrogen into the product, apart from reduced cycle length. This phenomenon was experienced in one of our hydrotreaters with cracked components in the feed, due to a problem with the recycle gas compressor (RGC) that resulted in low flow over several days. We normally do not operate below the recommended gas-to-oil ratio, as the penalty over time is large compared to the gain from pushing extra capacity or reducing energy consumption.
However, in one of our units, we have optimized the gas-to-oil ratio down to nearly 90% of the recommended value, with the advice of the licensor, by shifting some reaction from Bed-1 to Bed-2 through a reduced reactor inlet temperature in Bed-1 and a reduced quench rate to Bed-2. This has led to an ascending temperature profile, i.e., a drop in the Bed-1 peak temperature relative to the Bed-2 peak temperature. By doing so, we have maintained a uniform radial temperature and seen no appreciable increase in reactor Delta P over about two years of operation, indicating no appreciable reduction in run length due to these adjustments.
Where coking or fouling occurs, pressure drop across the reactor will increase steadily over the run; spikes are not expected. Despite maintaining the design gas-to-oil ratio in many of our hydrotreaters, we have experienced high pressure drop leading to frequent skimming of the catalyst bed. The causes of these incidents were identified as caustic carryover from upstream caustic wash units, oxygen dissolved in the feed in tank wagons while being transported from one refinery to another, and carryover of foulant from feed tanks.
The decrease in run length from continuously operating at a lower gas-to-oil ratio depends on the type of feedstock (i.e., olefin, sulphur, and nitrogen contents) and the target product specifications. Where margin is available in the feedstock quality, the gas-to-oil ratio can be optimized through adjustments in reactor severity and conversion.
Praveen Gunaseelan (Vantage Point Consulting)
It is assumed that the question pertains to FCC gasoline hydrotreating. Due to the variability in unit designs, process configurations, feed compositions, contaminant levels, product quality targets, etc., a specific answer to the question cannot be provided. For site-specific guidance, refiners are advised to consult with the gasoline hydrotreating process licensor or a qualified engineering contractor.
Maintaining adequate partial pressure of hydrogen is a critical element of hydrotreater operation, as it minimizes coke formation on the catalyst. An adequate feed gas to oil ratio is also essential as the gas plays a critical role in heat removal from the reactor. For these reasons, hydrotreating process licensors often require a minimum gas to oil ratio during operation to prevent premature catalyst deactivation and reactor overheating. While a common rule of thumb is that the minimum gas to oil ratio should be at least 4 times the hydrogen consumption per barrel of feed, it is critical to recognize that this ratio is inherently unit-specific, and the licensor or designer’s operating recommendations should be strictly followed.
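The rule of thumb quoted above reduces to simple arithmetic. The sketch below is illustrative only, with a hypothetical hydrogen consumption figure; as the text stresses, the ratio is unit-specific and the licensor’s recommendation governs.

```python
# Sketch of the common rule of thumb cited above: minimum treat gas rate
# of at least 4x the chemical hydrogen consumption per barrel of feed.
# The consumption figure below is hypothetical; the licensor's
# unit-specific recommendation should always be followed.
def min_gas_to_oil(h2_consumption_scf_per_bbl: float,
                   factor: float = 4.0) -> float:
    """Return the rule-of-thumb minimum gas-to-oil ratio in scf/bbl."""
    return factor * h2_consumption_scf_per_bbl

# Example: a unit consuming 100 scf of hydrogen per barrel of feed
h2_consumption = 100.0  # scf/bbl, hypothetical
print(min_gas_to_oil(h2_consumption))  # -> 400.0 scf/bbl minimum
```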
Other potential complications of operating at low hydrogen to hydrocarbon ratios include reactor fouling due to incomplete saturation of diolefins, accelerated catalyst deactivation due to higher temperature operation, and unsatisfactory product quality.
The chapter on Hydrotreating by A. Gruia in the Handbook of Petroleum Processing (D.S.J. Jones, P.R. Pujadó, eds., Springer, 2008) has useful information pertaining to this question.
Olivier Le-Coz (Axens)
As a general guideline, from the viewpoint of catalytic performance and cycle length, it is always recommended to operate naphtha HDS reactors at the maximum recycle gas rate. Because these reactors operate in the gas phase, the hydrogen partial pressure is significantly affected when the recycle gas rate varies. A maximized recycle gas rate, and thus hydrogen partial pressure, allows minimizing catalyst temperature and maximizing cycle length. Where selectivity towards HDS versus olefin saturation is targeted, maximizing the recycle gas rate to maximize hydrogen partial pressure and minimize catalyst temperature is crucial.
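The gas-phase dilution effect described above can be sketched with a simplified ideal-gas calculation: since the naphtha is fully vaporized, hydrogen partial pressure is total pressure times the hydrogen mole (volume) fraction. All of the figures below are hypothetical and chosen only to show the direction of the effect, not any licensor’s design values.

```python
# Simplified sketch of why recycle gas rate drives hydrogen partial
# pressure in a gas-phase naphtha HDS reactor: the vaporized naphtha
# dilutes the hydrogen, so pH2 = P_total * y_H2.
# All numbers are hypothetical; ideal-gas volumes assumed.
def h2_partial_pressure(p_total_bar: float,
                        recycle_nm3_per_m3: float,
                        h2_purity: float,
                        naphtha_vapor_nm3_per_m3: float) -> float:
    """Treat both streams as ideal-gas volumes per m3 of liquid feed."""
    h2_vol = recycle_nm3_per_m3 * h2_purity
    total_vol = recycle_nm3_per_m3 + naphtha_vapor_nm3_per_m3
    return p_total_bar * h2_vol / total_vol

# Doubling the recycle rate raises pH2 substantially (hypothetical case):
low = h2_partial_pressure(25.0, 60.0, 0.85, 250.0)
high = h2_partial_pressure(25.0, 120.0, 0.85, 250.0)
print(round(low, 1), round(high, 1))
```

The design choice here is deliberate simplicity: a rigorous calculation would flash the feed and account for non-ideality, but even this crude volume balance shows why the recycle gas rate, not just system pressure, sets the hydrogen partial pressure in these units.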
Brad Palmer (ConocoPhillips)
COP sets the lower limit on the gas/oil ratio at 300 scf/bbl (with a minimum of 70% hydrogen in the treat gas). Remember that the hydrogen is diluted by vaporized hydrocarbon, especially in a naphtha unit, so hydrogen partial pressures are actually very low. We also set a minimum of 3:1 treat gas hydrogen to chemical hydrogen consumption; i.e., the treat gas hydrogen rate per barrel must be at least 3 times the per-barrel hydrogen consumption. Both of these criteria are supposed to be met. In practice, some units do not meet the minimum rates.
Once the minimum is met, there are many other factors that are more critical than the hydrogen/oil ratio. These factors include the operating pressure, LHSV, feed composition, feed contaminants and percent cracked stock.
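The two screening criteria above can be expressed as a simple check. This is a sketch of the stated rules only, with hypothetical example values, not ConocoPhillips’ actual screening tool.

```python
# Sketch of the two criteria described above: >= 300 scf/bbl treat gas
# (at >= 70% hydrogen purity), and treat-gas hydrogen of at least 3x the
# chemical hydrogen consumption. Example values are hypothetical.
def meets_cop_minimums(treat_gas_scf_per_bbl: float,
                       h2_purity: float,
                       chem_h2_scf_per_bbl: float) -> bool:
    h2_in_treat_gas = treat_gas_scf_per_bbl * h2_purity
    return (treat_gas_scf_per_bbl >= 300.0
            and h2_purity >= 0.70
            and h2_in_treat_gas >= 3.0 * chem_h2_scf_per_bbl)

print(meets_cop_minimums(350.0, 0.80, 90.0))   # 280 scf H2 >= 270 -> True
print(meets_cop_minimums(350.0, 0.80, 100.0))  # 280 scf H2 < 300 -> False
```

Note that the second case passes the 300 scf/bbl floor but fails the 3:1 hydrogen criterion, which is exactly why both limits are checked independently.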
With respect to hydrogen gas/oil ratios, we can offer direct comparisons where two units process essentially the same feedstock and operate at essentially the same conditions, except for the gas/oil ratio. The best comparison basis is barrels of oil processed per pound of catalyst, because the units do not have exactly the same catalyst volumes.
•Case 1: Straight-run naphtha feed at about 360 psig. One unit has more catalyst, but the cycle lengths are the same at 18 months. One unit runs 140 scf/bbl of hydrogen and the other 270 scf/bbl. The oil processed in 18 months is 195 and 241 bbl/lb of catalyst, respectively. The unit with the higher gas rate processes about 24% more oil per pound with a gas/oil ratio about 93% higher.
•Case 2: Straight-run naphtha feed at about 450 psig. Again, the amounts of catalyst differ, with the cycle lengths the same at 48 months. Gas/oil ratios are 570 scf/bbl and 710 scf/bbl, and the barrels per pound of catalyst processed are 571 and 740, respectively. The unit with the 25% higher gas rate processes about 25-30% more oil.
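The percentages quoted in the two cases above follow directly from the stated figures; the quick check below reproduces them.

```python
# Arithmetic check of the two cases above, using the figures quoted in
# the text (bbl of oil processed per lb of catalyst, and scf/bbl rates).
def pct_increase(low: float, high: float) -> float:
    """Percentage increase of high over low."""
    return 100.0 * (high - low) / low

# Case 1: 195 vs 241 bbl/lb; 140 vs 270 scf/bbl
print(round(pct_increase(195, 241)))  # -> 24 (% more oil per lb catalyst)
print(round(pct_increase(140, 270)))  # -> 93 (% higher gas/oil ratio)

# Case 2: 571 vs 740 bbl/lb; 570 vs 710 scf/bbl
print(round(pct_increase(571, 740)))  # -> 30 (% more oil per lb catalyst)
print(round(pct_increase(570, 710)))  # -> 25 (% higher gas rate)
```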
James Stavridis