Senior Executive Roundtable
Dick Cheney, Former U.S. Vice President
BHARGAVA (KBC Advanced Technologies, Inc.)
Some of the biggest challenges we have seen with “Big Data” revolve around security. Within the KBC software suite, we focus on security as our major differentiator. Given the connectors we have in The Cloud, we can now download the data from the refinery in a secure fashion and put it into one location; so, when people retrieve the data, they are all looking at one consistent set. That way, Yield Accounting is no longer looking at its own set of closure numbers while the process engineer works from his/her own mass balance, a situation that creates a lot of issues around the refinery.
The other big issue here is accessibility of the data: who inputs the data and who has access to change it. All those issues must be addressed before you start using The Cloud to work with the data that is already there. Because we are able to use this data and have access to lots of capacity, we are now able to micromanage, if you will, in terms of identifying and performing performance audits. We are even using this approach with shift performance information to see how one shift is doing against another. We can now get most of the information from The Cloud because we are able to store more data. This easy access to information has allowed simulation to move to a different level. We have been able to use the gobs of data we have from the DCS (distributed control system) to develop models that are more accurate, and we can calibrate these models. In fact, in some of the real-time applications we have used, we have been able to calibrate the models on the hour. These models also allow us to set operational targets more accurately. That is about as far as we have gone with “Big Data”. So, it is all about finding your best possible operation and what you have done across the shift on a 24-hour average.
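The hourly calibration described above can be sketched in a few lines. Everything below is hypothetical: the one-parameter linear yield model is a toy stand-in for a rigorous process simulator, and the numbers are illustrative, not plant data. The idea is simply to solve, each hour, for the model parameter that reproduces the measured value.

```python
# Minimal sketch of hourly model calibration against averaged DCS data.
# The linear "model" and all numbers are hypothetical placeholders for a
# rigorous simulator fed from the plant historian.

def predicted_conversion(riser_temp_f: float, activity: float) -> float:
    """Toy model: conversion rises with riser temperature, scaled by a
    catalyst activity factor (hypothetical linear form)."""
    return activity * (0.05 * (riser_temp_f - 900.0) + 70.0)

def calibrate_activity(riser_temp_f: float, measured_conversion: float) -> float:
    """Solve for the activity factor that reproduces the measurement."""
    base = predicted_conversion(riser_temp_f, activity=1.0)
    return measured_conversion / base

# Two "hours" of averaged DCS data (values illustrative only).
hourly = [
    {"riser_temp_f": 970.0, "conversion_wt_pct": 76.0},
    {"riser_temp_f": 965.0, "conversion_wt_pct": 74.5},
]

for hour in hourly:
    a = calibrate_activity(hour["riser_temp_f"], hour["conversion_wt_pct"])
    # After calibration, the model matches the plant for this hour.
    assert abs(predicted_conversion(hour["riser_temp_f"], a)
               - hour["conversion_wt_pct"]) < 1e-9
```

In a real application the "activity" would be a set of tuning parameters in the simulator, and the solve would be a regression over a window of steady-state data rather than a single division.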
The benefits of having more data, sifting through it, and identifying good, stable operating data to develop simulation models are all being realized in generating LP (linear program) vectors, which are then used by the planners to help with crude selection and a gross optimization. From a yield perspective, it has definitely helped a lot. From a safety and reliability standpoint, we now have enhanced unit monitoring. The old method of unit monitoring was to gather a number of KPIs (key performance indicators), define an operating range, and then try to optimize the unit within that operating envelope. We have used simulation to help with some calculated variables that affect the reliability of the unit. We have expanded from operating envelopes to integrity operating windows, giving people much better prediction capability for equipment failure.
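Generating an LP vector from a calibrated simulation amounts to perturbing one operating variable and recording how the yields shift. A minimal sketch, where `simulate()` is a hypothetical toy yield model standing in for the real calibrated simulator:

```python
# Sketch: build an LP "vector" (base yields plus yield shift per unit change
# of an operating variable) by perturbing a calibrated simulation.
# simulate() and its numbers are hypothetical, for illustration only.

def simulate(riser_temp_f: float) -> dict:
    """Toy yield model: higher riser temperature shifts gasoline to LPG."""
    t = riser_temp_f - 970.0
    return {"gasoline": 48.0 - 0.04 * t, "lpg": 18.0 + 0.03 * t}

def lp_vector(base_temp_f: float, step: float = 5.0) -> dict:
    """Base case plus finite-difference yield shifts for the planner."""
    base = simulate(base_temp_f)
    bumped = simulate(base_temp_f + step)
    shift = {k: (bumped[k] - base[k]) / step for k in base}
    return {"base": base, "shift_per_degF": shift}

vec = lp_vector(970.0)
# vec["shift_per_degF"] tells the planning model how each yield moves
# per degree F of riser outlet temperature around the base case.
```

A real LP vector would cover several independent variables (feed quality, preheat, cutpoints) and be regenerated whenever the simulator is recalibrated.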
Finally, having this data on The Cloud has allowed the SMEs (subject matter experts) from within the corporate level of the company to look at the same data set and help with the optimization of the unit from a remote location. You can also do this with third-party consulting companies to allow for another perspective with a fresh set of eyes to help make sure that the unit is running optimally on a daily basis.
SANJAY BHARGAVA (KBC Advanced Technologies)
With the advent of “Big Data” and cloud-based technologies, refiners are now able to harvest mega data in a process environment. Hundreds of thousands of values are being generated every few seconds to measure, monitor, control, and optimize plant operations. The major benefit of “Big Data” is to demystify the different plant data used by various parts of the organization by using a common set of information that is utilized by all departments. In the modern refinery, the key is transparency in strategy, philosophy, and shared goals. The benefits are seen in reliability, safety, and operator consistency.
Also, refiners using in-house or third-party simulation technologies are able to extract precise information from these “lakes” of data about plant performance and compare it to the best possible performance of the plant. Some companies are already tapping into this pool of information – through the use of simulations – to provide a robust means for selecting set points or targets for the independent variables.
The data is also being used in open-loop controls (and limited closed-loop controls) to help make advanced decisions to improve plant performance. The simulation models are being calibrated and updated to maintain prediction fidelity. The calibrated models are being used to generate LP vectors to keep the refiner’s planning models updated in order to mimic actual plant operation, which makes it possible to optimize feed and operating conditions, such as riser outlet temperature, feed preheat temperature, and main fractionator cutpoints in FCCs.
Advanced or enhanced unit monitoring applications are being used by refiners to convert raw data into key performance indicators. These applications are being linked to simulation technologies in real time to generate calculated/derived key performance indicators focused on ensuring operation within integrity operating windows (IOWs). By monitoring for corrosion, erosion, fouling, and accelerated mechanical equipment damage, they allow for early action and prevention, which maximizes plant life and improves reliability and safety performance. These enhanced applications are also being used for predictive maintenance, as well as for monitoring the performance of operators between operating shifts and bringing shift performance closer to best possible operations, which benefits all shift teams. These advanced unit monitoring applications, in combination with cloud-based technologies with added security, are now being used by managers to view plant performance dashboards on their mobile devices. Finally, the data can also be used for remote monitoring by subject matter specialists – both within and outside of the organization – to optimize FCC operations. This monitoring can now be done because of the enhanced security systems put into place to protect refinery data.
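At its core, IOW monitoring is a limit check on measured and derived tags. A minimal sketch follows; the tag names and limit values are hypothetical examples, not real IOW limits, and a production system would add deadbands, time-in-excursion tracking, and alarm routing.

```python
# Sketch: flag excursions of derived KPIs outside their integrity operating
# window (IOW) limits. Tag names and limits below are hypothetical.

IOW_LIMITS = {
    # tag: (low limit, high limit); None means no limit on that side
    "regen_temp_f": (1150.0, 1350.0),
    "cyclone_skin_f": (None, 900.0),
}

def check_iow(readings: dict) -> list:
    """Return (tag, value, reason) tuples for every IOW excursion."""
    alerts = []
    for tag, value in readings.items():
        lo, hi = IOW_LIMITS.get(tag, (None, None))
        if lo is not None and value < lo:
            alerts.append((tag, value, f"below {lo}"))
        if hi is not None and value > hi:
            alerts.append((tag, value, f"above {hi}"))
    return alerts

# A regenerator temperature excursion above the high IOW limit is flagged:
alerts = check_iow({"regen_temp_f": 1400.0, "cyclone_skin_f": 850.0})
```

The same pattern applies whether the inputs are raw DCS tags or simulation-derived variables such as calculated corrosion rates.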
KATHERINE JONES (Honeywell UOP)
Honeywell recognizes the untapped potential of the industrial internet of things (IIoT) and sees these capabilities as the next revolution in industry:
IIoT enables users to leverage all available data, including stranded data that, until now, has not been readily used in business decisions.
Cloud-based solutions allow for a more collaborative environment, enabling users to benefit from global expertise.
The use of big data and analytics, combined with the computing power of the Cloud, enables the investigation of large data sources and the discovery of new insights.
Maintaining high onstream efficiency and reliability is critical for maximizing plant profitability. To help our customers maximize the use of their assets, Honeywell UOP offers a suite of cloud-based services with the Honeywell Connected Plant. One of these services, Process Reliability Advisor, combines plant data with UOP process models that are tuned to current operation, plus fault models, which are cause-and-effect relationships and troubleshooting guides specific to the process. The service utilizes a cloud-based platform that supports big data analytics and machine learning to drive continual innovation.
Process Reliability Advisor enables our customers to:
Detect performance gaps early and perform mitigation steps,
Understand the unit’s current performance and asset utilization based on constraints, and
Promote learning and knowledge retention, leveraging UOP insights and context.
More information, including videos on how Process Reliability Advisor works, can be found at www.uop.com/cps.
FEDERSPIEL (W.R. Grace & Co.)
I would like to call on Warren Letszch who was on that panel in 1973. He can cover the historical answer and provide us with any updates, if available.
WARREN LETSZCH (TechnipFMC Process Technology)
Yes, I do remember that panel. It was at the Hyatt Regency Hotel in Houston, Texas, and there were about 700 people in the audience. In those days, we had about 1300 people coming to the meeting.
Let me put this question in context for you. The biggest advancement in catalytic cracking was the advent of the zeolite catalyst. That started in fluid units really in about 1965. By 1970, 30% of the units were still running amorphous cracking catalysts. Now just to give you a nickel’s worth of background here: All cat crackers built with amorphous catalyst were basically designed in the early days to have anywhere from a minimum of 50% recycle up to 100% recycle.
Why do you suppose we call it ‘light cycle oil’ and ‘heavy cycle oil’? Many units were designed with two risers, and the risers were equal: One was a recycle riser, and one was a fresh feed riser. The zeolites were at least an order of magnitude more active than the cracking catalysts we were using. In fact, if you want to think about our own operations today, think about running an activity level of about 40 on your catalyst versus the 65 or 70 or whatever you are running. Most of the units were designed with a space velocity of about two, so there was as much catalyst in the reactors as there generally was in the regenerator. When the zeolitic catalysts came out, everyone was anxious to put them in the FCCU and see what was going to happen.
You currently use cracking catalyst that typically contains anywhere from 25 to 35 wt% (weight percent) zeolite. The very first cracking catalysts that came out from 1965 to around 1970 contained anywhere from about 3.5 to 7 wt% zeolite; so, by 1973, a high level was 12 wt%. So, what did people do? They put the zeolite catalyst into the unit. In some cases, the catalyst did not work; in others, it did. We discovered three things that were important in the operation. First, we had to cut down the residence time of the catalyst in the reactor. With a space velocity of two, the catalyst could be in the reactor for five minutes or more. With the riser cracker, of course, you cut that down to seconds. We also found out that paraffinic feeds worked very well. Aromatic feeds were not all that great. And guess what? We learned that the carbonium ion mechanism does not allow the cracking of aromatics.
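The five-minute figure can be checked with back-of-the-envelope arithmetic, under two assumptions not stated in the discussion: that the space velocity of two is a weight-hourly space velocity (WHSV, pounds of feed per pound of catalyst per hour) and that the cat-to-oil ratio is around 6.

```python
# Back-of-the-envelope check of the quoted residence times.
# Assumptions (not from the text): space velocity is WHSV, cat-to-oil = 6.

def bed_catalyst_residence_min(whsv: float, cat_to_oil: float) -> float:
    """Mean catalyst residence time in a dense bed, in minutes.

    inventory   = feed_rate / whsv           (lb of catalyst held up)
    circulation = cat_to_oil * feed_rate     (lb of catalyst per hour)
    residence   = inventory / circulation = 1 / (whsv * cat_to_oil) hours
    """
    return 60.0 / (whsv * cat_to_oil)

print(bed_catalyst_residence_min(whsv=2.0, cat_to_oil=6.0))  # -> 5.0 minutes
```

With those assumptions the arithmetic lands right on the "five minutes or more" quoted for the old bed reactors, versus the few seconds of contact time in a riser.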
The other critical thing was the carbon on regenerated catalyst. We found that the carbon on the regenerated catalyst had to be below 0.3 wt%, and preferably down to at least 0.2 wt%, to work. Quite simply, the residual coke left on the catalyst is all associated with the zeolite, not with the matrix. A couple of years ago, I presented a paper about this at the Annual Meeting. In it, I explained that with about 7% coke on the zeolite, the zeolite will not work anymore. Early FCCUs were designed for as much as 0.5% carbon on regenerated catalyst. Why? Because there was no real benefit, as far as activity was concerned, with the old amorphous high-alumina cracking catalysts. Also, you have to remember that we all had carbon steel. We had partial-burn regenerators. There were units that ran 1050°F in the regenerator. A really high temperature was 1150°F. At 1200°F, you were getting nervous. At that point, people would put water sprays up there to protect the carbon steel cyclones.
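The numbers quoted above hang together arithmetically: if the residual coke all sits on the zeolite, then dividing the carbon on regenerated catalyst (CRC) by the zeolite fraction gives the coke loading on the zeolite itself. For an early 7 wt%-zeolite catalyst, the old 0.5 wt% CRC design level puts the zeolite right at the roughly 7% coke loading where it stops working.

```python
# Arithmetic linking CRC, zeolite content, and coke-on-zeolite, assuming
# (as stated above) that all residual coke is associated with the zeolite.

def coke_on_zeolite_pct(crc_wt_pct: float, zeolite_wt_pct: float) -> float:
    """Coke loading on the zeolite, as wt% of the zeolite itself."""
    return 100.0 * crc_wt_pct / zeolite_wt_pct

# Early 7 wt%-zeolite catalyst at the old 0.5 wt% CRC design level:
print(coke_on_zeolite_pct(0.5, 7.0))   # ~7.1 wt% -- at the deactivation point
# The same catalyst at the 0.2 wt% CRC target:
print(coke_on_zeolite_pct(0.2, 7.0))   # ~2.9 wt%
```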
So, this was really the advent of high-temperature regeneration and where riser cracking became the norm. At that time, some refiners went to a riser cracker and could not get all the conversion they wanted. A lot of them got it just by dropping the bed level in the reactor and letting the riser come up. In some cases, they would have to go back to the reactor bed.
But if you did not have enough activity, the easiest fix was just to add more catalyst or go to a catalyst with a higher zeolite concentration. Increasing the riser pressure drop is a way to create more catalyst contacting; that was not really the best solution, but occasionally it would happen. Then the question came down to whether or not, in the revamp, you caused another limit on the unit that was really not related to the reactor performance. I have news for you: Some of these people could not handle all the gasoline in the gas plant. Suddenly, you are running your FCC unit and the gasoline yield goes up 8% on fresh feed. Your debutanizer may not be able to handle that much. It was a problem that refiners were happy to have. So, yes, we did find a few people who went back to bed cracking. That is basically the answer I gave and the history behind it.
ROBERTSON (AFPM)
Thanks, Warren. I think you said more words than were up there on your slide. [Laughter]
MALLER (TechnipFMC Process Technology)
I will be brief with my response. I find it interesting to hear some of the history that was certainly before my time, especially because TechnipFMC now has a process that – although relatively new compared to 1973 – employs the same technology and concepts mentioned in the question: bed cracking. We have a process that uses a riser to distribute catalyst and reactor effluent into a bed, where the cracking reactions continue. We use that bed cracking to get additional residence time so we can further convert the naphtha generated in the riser to LPG. This process is intended for on-purpose petrochemical feedstock production. We are able to produce LPG yields over 40%, with half of that being propylene. This particular bed cracking method is also able to produce a relatively high percentage of ethylene.
BOB LUDOLPH [Shell Global Solutions (US) Inc.]
I would like to poll the audience. How many of the people here work for companies that continue to operate bed crackers? [No response] Wow! Shell is one. [Laughter]
MELVIN LARSON (KBC Advanced Technologies, Inc.)
KBC has seen a number of units that were built in the 1960s and 1970s that have gone up to such high conversion levels that water removal is no longer achievable in the overhead receiver. We see a lot of naphtha streams that are supersaturated with water going into primary and secondary absorbers, so we know of many places that have been expanded hydraulically and through conversion. The original equipment will just not get the water separation you really need to run a dry, safe gas plant. You will get H2S and all kinds of nasties in the absorber systems. So, you should check for water content or saturation, looking for your contaminant levels to see if you have tower problems. If you can get the water out, you will actually get a whole new gas plant.
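The check suggested above can be sketched as a comparison of measured water content against a saturation limit at the overhead receiver temperature. The saturation table below is purely illustrative, not real naphtha solubility data; a real check would use a thermodynamic correlation or laboratory solubility curves for the actual stream.

```python
# Sketch: flag naphtha streams carrying more water than they can hold at
# the receiver temperature. The saturation values are hypothetical
# placeholders, not real solubility data.

SATURATION_PPMW = {100: 300, 120: 450, 140: 650}  # degF -> ppmw (illustrative)

def free_water_risk(temp_f: int, measured_ppmw: float) -> bool:
    """True when measured water exceeds the assumed saturation limit,
    i.e., the stream risks dropping free water into the gas plant."""
    return measured_ppmw > SATURATION_PPMW[temp_f]

# A supersaturated naphtha going to the primary absorber would be flagged:
risky = free_water_risk(120, 500)   # above the assumed 450 ppmw limit
```

The point of the check is the one made above: if the overhead receiver cannot drop the water out, the absorbers inherit it along with the H2S and other contaminants.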
ROBERTSON (AFPM)
Thank you. And thank you, Warren.
ALEX MALLER (TechnipFMC Process Technology)
Technip has a process in which a combination of riser cracking and bed cracking is used to maximize conversion. We have found that there are limits to the conversion that can be achieved in traditional riser cracking. By adding the bed cracking, we are able to supply additional residence time and catalyst/vapor contact with which to complete the conversion of naphtha to LPG. With this process we are able to attain LPG yields well in excess of 40% with a variety of feedstocks. Maximum conversion is achieved with feeds that are very rich in hydrogen, which could be hydrotreated VGO (vacuum gas oil) or even hydrotreated atmospheric tower bottoms.