Researchers hope to use big data to make pipelines safer

The integrity and health of North America’s pipeline infrastructure are serious, ongoing problems. According to the US Department of Transportation (DOT), there have been more than 10,000 pipeline failures in the United States alone since 2002. Complicating safety efforts are the cost and labour required to monitor the health of the thousands of kilometres of pipelines that criss-cross Canada and the United States.

In a recent paper in the Journal of Pipeline Systems Engineering and Practice, researchers at Concordia and the Hong Kong Polytechnic University look at the methodologies currently used by industry and academics to predict pipeline failure — and their limitations.

“In many of the existing codes and practices, the focus is on the consequences of what happens when something goes wrong,” says Fuzhan Nasiri, associate professor in the Department of Building, Civil and Environmental Engineering at the Gina Cody School of Engineering and Computer Science.

“Whenever there is a failure, investigators look at the pipeline’s design criteria. But they often ignore the operational aspects and how pipelines can be maintained in order to minimize risks.”

Nasiri, who runs the Sustainable Energy and Infrastructure Systems Engineering Lab, co-authored the paper with his PhD student Kimiya Zakikhani and Hong Kong Polytechnic professor Tarek Zayed.

Safeguarding against corrosion

The researchers identified five failure types: mechanical, the result of design, material or construction defects; operational, due to errors and malfunctions; natural hazard, caused by events such as earthquakes, erosion, frost or lightning; third-party, meaning damage inflicted either accidentally or intentionally by a person or group; and corrosion, the deterioration of the pipeline metal due to environmental effects on pipe materials and the acidity of impurities in the oil and gas. This last type is the most common and the most straightforward to mitigate.

Nasiri and his colleagues found that the existing academic literature and industry practices around pipeline failures need to evolve further to make use of available maintenance data. They believe the massive amounts of pipeline failure data available through the DOT’s Pipeline and Hazardous Materials Safety Administration (PHMSA) can be used in the assessment process as a complement to manual in-line inspections.

These predictive models, built on decades’ worth of data covering everything from pipeline diameter and metal thickness to pressure, average temperature change, and the location and timing of failures, could reveal failure patterns. Those patterns could be used to streamline the overall safety assessment process and significantly reduce costs.
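As a rough illustration of how mining such records for patterns might look, the sketch below tallies a hypothetical PHMSA-style incident table by failure cause and pipe vintage. The file name and column names are assumptions for the example, not the actual PHMSA schema.

```python
# Hypothetical sketch: looking for failure patterns in PHMSA-style incident records.
# "phmsa_incidents.csv" and its column names are assumed for illustration only.
import pandas as pd

# Each row is one reported incident: pipe attributes plus failure cause and date.
incidents = pd.read_csv("phmsa_incidents.csv", parse_dates=["incident_date"])

# Tally incidents by cause to see which failure type dominates
# (the paper groups causes into mechanical, operational, natural hazard,
# third-party and corrosion).
print(incidents["cause"].value_counts())

# Cross-tabulate cause against a pipe attribute to surface patterns,
# e.g. whether older pipe vintages fail from corrosion more often.
incidents["decade_installed"] = (incidents["year_installed"] // 10) * 10
pattern = pd.crosstab(incidents["decade_installed"], incidents["cause"], normalize="index")
print(pattern.round(2))
```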

“We can identify trends and patterns based on what has happened in the past,” Nasiri says. “And you could assume that these patterns could be followed in the future, but need certain adjustments with respect to climate and operational conditions. It would be a chance-based model: given variables such as location and operational parameters as well as expected climatic characteristics, we could predict the overall chance of corrosion over a set time span.”
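A minimal sketch of what such a “chance-based” model could look like, assuming a hypothetical table of historical pipeline-segment records: this illustrates the general idea only, not the authors’ method, and every column name and the ten-year horizon are invented for the example.

```python
# Illustrative sketch of a probabilistic ("chance-based") corrosion model.
# The training file, feature names and 10-year window are assumptions, not the paper's model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("pipeline_segments.csv")  # hypothetical historical records

features = ["pipe_age_years", "wall_thickness_mm", "operating_pressure_kpa",
            "avg_annual_temp_change_c", "soil_corrosivity_index"]
X = data[features]
y = data["corroded_within_10_years"]  # 1 if a corrosion failure occurred in the window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted chance of a corrosion failure over the chosen time span for each
# held-out segment, given its location, operating and climate inputs.
prob_corrosion = model.predict_proba(X_test)[:, 1]
print(prob_corrosion[:5])
```

Logistic regression is used here only because it is the simplest way to turn such variables into a probability; any probabilistic classifier or reliability model could stand in.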

He adds that these models would ideally be consistent and industry-wide, making them transferable if a pipeline changes ownership, and that research like his could influence industry practices.

“Failure prediction models developed based on reliability theory should be realistic. Using historical data (with adjustments) gets you closer to what actually happens in reality,” he says.
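For context, reliability theory typically expresses such a model as a survival (reliability) function. A standard textbook form, shown here only as an illustration and not as the paper’s formulation, is the Weibull model, whose parameters would be fitted to historical failure records and then adjusted for climate and operating conditions:

$$ R(t) = \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right], \qquad P(\text{failure by } t) = 1 - R(t), $$

where $t$ is the service time, $\eta$ the fitted scale parameter and $\beta$ the fitted shape parameter.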

“They can close the gap of expectations, so both planners and operators can have a better idea of what they could see over the lifespan of their structure.”
