
Celebrating GIS Day with York Regional Police: How GIS Data Has Mitigated Risk & Increased Efficiency in Crime Prevention

Today marks the 20th anniversary of International GIS Day! At Geocortex we’re always inspired by the positive stories we hear from our customers who are using GIS in new and innovative ways to help make the world a better place to live.

We recently caught up with Greg Stanisci of the York Regional Police to chat with him about their use of Active Operating Picture, an extension for Geocortex Essentials that helps respond to emergency situations with reliable information.


How were your operations being carried out prior to making the decision to integrate a GIS solution?

Greg Stanisci [GS]: Before integrating our Active Operating Picture (AOP) solution, our Real-Time Operation Centre (RTOC) had to go hunting for data on important issues, which meant they were seeking data all day long. Now, our GIS solution delivers that data immediately, helping us identify priority calls and better manage our resources. The overall impact has been an increase in efficiency and a reduction in the risks that come from not always knowing what our priorities are.

What were some of your GIS goals prior to adopting the YRP Active Operating Picture?

[GS]: We believed that mapping technology was one of the best ways to visualize police information and bridge communication between our officers. Everything we do is location-based, and we wanted to interconnect GIS with our team of analysts, investigators, front line officers, supervisors and senior officers to better collaborate and respond to situations.

It was our goal to support a more data-driven strategy that revolved around utilizing our resources in the most efficient way possible. Ultimately, we wanted to empower our force with data, and use that data to drive the way we plan our operations.

Can you explain how York Regional Police is currently deploying AOP technology?

[GS]: In a nutshell, we’re using AOP to provide more information in real time to and from the many different members of our team. Its applications range from enhancing road safety and preventing crimes before they happen to locating missing people and accessing information about known offenders. AOP enables us to streamline the way these processes are managed.

Additionally, the analytics we use in AOP help us analyze our police presence in a given area and gain more insight into historical deployment patterns, giving us the ability to plan future front-line deployment more strategically based on the data we’re receiving.

Describe how AOP supports your Real-Time Operation Centre (RTOC).

[GS]: One of the primary functions of the RTOC is to mitigate risks to officers in our community. Thanks to AOP, our operatives no longer need to seek important information like priority calls and other alerts, how many units are assigned and whether officers arrived at their destination safely – the information is delivered to them directly.

AOP gives us visibility into which of our officers are currently in the field, the sectors they’ve been assigned to, whether they’re responding to a call, as well as the details of the call itself. AOP also warns us when a patrol sector is empty so that we can actively manage that risk as well. This helps us empower the people within the RTOC with more information, so they can better support our officers.

We’ve worked with our RTOC team to compile a list of roughly thirty different types of priority calls. These priorities can be displayed very quickly and easily for them to respond to as they occur.

How has AOP technology been used to counter crime in the York Region?

[GS]: AOP technology has allowed us to make more intelligent and proactive decisions with our resourcing. We’re able to put officers in the right place, at the right time. We’re also training officers to analyze the various data points, like heatmaps, to better understand where they’re most needed. AOP helps us leverage location-based data to identify priority patrol zones for officers, such as areas with higher gun violence or gang activity.

What have been some success stories that have occurred since onboarding AOP?

[GS]: We’ve defused quite a few situations since we started using AOP. It has been used to identify suspects who committed a string of commercial break-and-enters, including a series of thefts at various liquor stores. The technology allowed us to link together a series of prescription fraud cases, ultimately helping us identify the suspects. We were also able to make key arrests of several wanted persons thanks to the data we were able to relay in AOP.

Thanks to AOP, our RTOC, as well as our front-line officers, has been able to help deter crimes in real time, such as a terrorist threat at Canada's Wonderland amusement park and a bank robbery that was in progress.

Have all your officers been trained on the technology?

[GS]: Currently, all frontline officers and investigators have been trained on AOP technology. AOP is being used in both their cars and on their desktops. There are also some civilian administrative groups that are using AOP for planning and crime analysis purposes.

Thanks Greg! One final question - are there any plans for further use of GIS in future applications?

[GS]: We’re hoping to make further use of the “after action playback” mode, which can play back events and how units responded throughout the day. This can provide context on where our zones were created and the maps we drew, so we can better assess how we handled a response.

It is also our hope to soon run workflows that convert individual unit points to lines to see how our officers and platoons drove on a given day. This will give us a better idea of exactly where we were patrolling at a street level and help determine which neighborhoods require a heightened presence.

Another future idea we had in mind was to develop a site to track bail checks that are done by officers on the road. This would allow an officer to use a form that would then update on the map, preventing duplicate checks from taking place in a day.

Finally, we’d like to explore generating GPS-based alerts for officers when they’re in a priority zone or near known offenders or other types of hazards.
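
As a rough sketch of how such proximity alerts might work, the snippet below flags hazards within a radius of an officer's GPS position. The coordinates, hazard list, and 500 m radius are invented for illustration; they are not YRP's actual data or configuration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def proximity_alerts(officer_pos, hazards, radius_m=500):
    """Return the names of hazards within radius_m of the officer's position."""
    lat, lon = officer_pos
    return [h["name"] for h in hazards
            if haversine_m(lat, lon, h["lat"], h["lon"]) <= radius_m]

# Hypothetical hazard layer: a priority zone and a known-offender address
hazards = [
    {"name": "priority zone A", "lat": 43.8561, "lon": -79.5370},
    {"name": "known offender",  "lat": 43.9000, "lon": -79.6000},
]
print(proximity_alerts((43.8565, -79.5375), hazards))
```

A production system would of course evaluate polygon zones rather than point radii, but the alerting decision reduces to the same spatial test.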

GIS Day In Your Community

Every year on November 14th, GIS Day gives us a special opportunity to turn the world into a forum for one day, showcasing the impact that geographic technology has on our everyday lives in ways that we may otherwise take for granted. Around the world, organizations host educational sessions to spread information, solutions, and knowledge on how GIS is improving operations everywhere to make our cities cleaner, safer, better resourced, and more efficient.

For more information on GIS Day, we recommend visiting the official website. It offers plenty of information about GIS events happening in your community, as well as a treasure trove of valuable resources.

We invite you to share your GIS Day stories with us in the comments section below!

Integrating Geocortex Essentials with ArcGIS Online and ArcGIS Enterprise portal [Geocortex Tech Tip]

Whether you’ve been building against ArcGIS Server, or you’re just getting your feet wet with ArcGIS Online, Geocortex technology is built to enable change, allowing easy and seamless integration with the ArcGIS platform in its entirety.

In this week’s Geocortex Tech Tip, we take a closer look at the intrinsic role of web maps, and how Geocortex Essentials can be integrated with ArcGIS Online and ArcGIS Enterprise portal.



Video Transcript

“Hi, my name’s Drew and I’m the Chief Technology Officer. In this Tech Tip we’re going to explore how Geocortex Essentials can be used alongside ArcGIS Online or your ArcGIS Enterprise portal, so let’s dive in!

So I think we’ll start with some context surrounding how to connect Geocortex Essentials to the ArcGIS platform. For many years, our customers have been able to connect Geocortex Essentials directly with ArcGIS Server. Public services can be connected to directly, or we can use token or Windows authentication to connect Geocortex Essentials sites to ArcGIS Server map services, feature services, tiled services, and other types. Applications produced by Geocortex Essentials can also connect to ArcGIS Server through that same authentication method.

ArcGIS Online and portal introduced web maps, and the web map is really the central currency in the geoinformation model. When we use Geocortex Essentials with ArcGIS Enterprise or ArcGIS Online, web maps become an intrinsic part of the equation.

Here we can see multiple users or groups of users signing in to a portal. This can be ArcGIS Online or an ArcGIS Enterprise portal, and they’re using their ArcGIS identity to do so. They can then create web maps inside of this organization. Those web maps can be shared and used within apps like Operations Dashboard, Collector, or Web AppBuilder-based applications, so that other users can use the apps that consume the web maps.

If we add Geocortex Essentials to this picture, users can sign in with the exact same ArcGIS identity that belongs to their portal (or ArcGIS Online org). Then, when we author a site, the identity’s credentials are used to fetch content, like the web map. So the very same web maps can be referenced inside of a Geocortex Essentials site. Then apps created out of Geocortex Essentials can be shared back in that portal, increasing the use of GIS throughout the organization.

Let’s have a look at this pattern in practice.

Here’s a web map that I want to use in a Geocortex Essentials application. It contains store locations and it’s stored inside of my ArcGIS Online organization.

I’m going to sign into Geocortex Essentials using my ArcGIS Online account. Once I’ve signed in, I’m brought to a list of sites that I’m able to manage. This time, I want to add a new site, give it the display name “Stores”, and reference a web map from ArcGIS Online to create my application.

Now, I can search the public database for content, or I can hit this checkbox and refine the search results to only the web maps that are inside my organization.

Notice the lock icon indicates that this web map isn’t shared with everyone. That means end users of my application are going to have to sign in with their ArcGIS identity to access this app.

Geocortex Essentials makes a reference to the web map and understands all of the content within it. So it has an understanding of all of the map services and layers that are used within this web map, and now I can start to author my application within Geocortex Essentials Manager.

Let’s add a viewer to this application using our HTML5 viewer template. Without making any configuration changes, let’s launch this in a new browser window.

Now, transparently and behind the scenes, I was signed in to this application. In the top right corner you can see that I can sign out and that I’m currently signed in using my ArcGIS Online account. The reason I was signed in is because the web map inside this application is protected. If I sign out, I’m prompted to sign in using my ArcGIS identity. If that web map is shared and made available to everyone, the end user is not required to sign in using an ArcGIS identity or otherwise.

Now that I’ve built an application, I can publish it back into my ArcGIS Online organization and share it with other users or make it one of my favorites. Notice that it has been given an item ID, and if I click on this link, I’m brought to my ArcGIS Online org, where I’ve got my Stores application. Clicking on this will simply launch my application.

For now, I’ll simply add it to my favorites. If I go into My Content, and then click on Favorites, there’s the Stores application that I just published from Geocortex Essentials.

You can see this pattern in action using Geocortex Essentials to build applications, share them back inside of your ArcGIS Online organization, or inside of your portal so that they can be used by more users.

Geocortex Essentials 5-Series applications also integrate with ArcGIS Enterprise and ArcGIS Online. Here we can see three example applications – Printing, Workflow, and Reporting. An ArcGIS identity is used to sign in to the design experience of these apps.

Once we’ve signed in, we can create content in the form of items. With Geocortex Workflow for example, the item type is a workflow, and with Geocortex Reporting 5, the item type is a report template.

These items are stored inside of the ArcGIS Online organization or within the ArcGIS Enterprise portal alongside apps and web maps and other types of content.

Those items can be used by Geocortex apps or within Web AppBuilder for ArcGIS apps so that more Geocortex content can be shared with other users within the organization.

Now, let’s explore this pattern. I’m going to sign in to Geocortex Workflow. I’m using my ArcGIS identity to sign in so that I can restore a workflow that I created earlier. In the File menu, I can browse all of the workflows that I’ve authored, workflows that have been shared with me, or - if I have the item ID of a workflow and the URL to my ArcGIS Online organization - I can open it that way.

The workflow I’m looking for is one that I worked on recently. This workflow is called “StoreFinder” and it does just that; it allows the user to search for stores inside of the map. I’ve got a search form prompting the user to select from a list of store types, and once they select a store type and click search, we’re going to query the stores layer based on the store type the user selected. Then we’re going to get the extent of the results, set the map to that extent, and simply display the results in a list. It’s a pretty simple workflow.
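
The steps the transcript describes (query by store type, compute the extent of the results, then display them) can be sketched with plain data structures. The store records and field names below are hypothetical, standing in for features returned from a query against the stores layer.

```python
# Hypothetical feature records standing in for the "stores" layer
stores = [
    {"name": "Maple Mall",   "type": "strip mall", "x": -118.30, "y": 34.05},
    {"name": "Oak Plaza",    "type": "strip mall", "x": -118.10, "y": 34.10},
    {"name": "Pine Grocery", "type": "grocery",    "x": -118.20, "y": 34.00},
]

def run_store_finder(store_type, features):
    """Query features by type and compute the extent of the results,
    mirroring the workflow's query -> extent -> display steps."""
    results = [f for f in features if f["type"] == store_type]
    if not results:
        return None, []
    xs = [f["x"] for f in results]
    ys = [f["y"] for f in results]
    extent = (min(xs), min(ys), max(xs), max(ys))  # xmin, ymin, xmax, ymax
    return extent, [f["name"] for f in results]

extent, names = run_store_finder("strip mall", stores)
print(extent)  # the map would be set to this extent
print(names)   # shown to the user as the result list
```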

If I go to the Info tab, you can see that this workflow is stored inside of my ArcGIS Online organization, and it has an item ID. I’ve named my workflow “StoreFinder” and it’s got a unique URL used to discover it.

Now, if I sign into Web AppBuilder using that same ArcGIS identity, I can access that workflow.

Let’s go to the widget tab in the authoring tool, and add a new widget to my application. I’ll use the workflow widget (which I’ve installed earlier), and I’m allowed to browse for any workflow in my organization. I can look for my content, my organization, groups, and even public workflows.

Using the keyword search “StoreFinder”, I was able to discover the workflow I authored earlier.

Now I’m just running through the workflow inside of my Web AppBuilder designer experience. Let’s look for all strip malls on this map.

You can see that the results are highlighted and then the workflow displays an item picker, allowing me to hover on each result and show the corresponding record on the map.

That’s an example of how Geocortex Workflow 5 was used to integrate with an ArcGIS Online organization by storing an item and consuming it inside of a Web AppBuilder for ArcGIS app.

The idea here is that you can deploy Geocortex alongside other ArcGIS applications that you have that are also consuming web maps. Collector, Operations Dashboard and Web AppBuilder can all be used alongside Geocortex Essentials.

We’ve built Geocortex Essentials to allow our customers to enable technology change. Whether you’ve been building directly against ArcGIS Server, or you’ve started to work with ArcGIS Online, or ArcGIS Enterprise, Geocortex Essentials has technology for you to integrate with the entire ArcGIS platform.

Thanks for watching this short Tech Tip. I hope you learned something today.

Bye for now!”

Want to learn more about how Geocortex Essentials can help organizations of any size or industry address business challenges? Check out the Discovery Center to get a feel for the product.

Discover Geocortex


Cross-Platform Development with Xamarin [Webinar]

Using Xamarin.Forms allows you to construct native UIs for iOS, Android and Windows mobile devices from a single shared C# codebase.

Over the past few months, the Product Development team at Geocortex has been using Xamarin.Forms – along with the ArcGIS Runtime SDK for .NET – to create a new, next-generation mobile viewer. We learned a lot of valuable lessons in the process, and we’re excited to share them with you!


In this developer webinar (or devinar, as we like to call it), Spencer and Jeff break down how to get started, some of the challenges they faced, and how to create reusable form components to support Geocortex workflows on mobile devices.

If you’ve been thinking about deploying Xamarin.Forms for an upcoming project, you’ll want to check this out!


Watch on YouTube

How to search for data in a non-spatial database [Geocortex Tech Tip]

Maps allow you to visualize data in meaningful ways and expose patterns that can’t be seen anywhere else. One of the challenges, though, is that your most important business data typically lives in another system or database. This can become even more challenging when it’s data stored outside your geodatabase.

In this Geocortex Tech Tip, Drew Millen shows you how to search for data in a non-spatial database (such as Oracle or SQL), find the spatial relationship, and display it on a map. 

Watch on YouTube


Video Transcription

“Hi everybody, I’m Drew with Latitude and in this Tech Tip we’re going to look at searching for non-spatial data. That’s data stored in Oracle or SQL Server… somewhere that’s not in your geodatabase. We’re going to look for that, find the spatial relationship, and display it on a map, so let’s see how we do that with Geocortex.

What we’re looking at here is a very basic Geocortex viewer application that’s been configured with a single layer called “Land Use”. This contains polygons of different types of land uses and what I’m interested in is this “Arts and Recreation” land use polygon, which contains park information for Los Angeles County. I also have a database table - in this case, an Excel spreadsheet of trees. Now notice that I’ve got records of all the different types of trees that exist, but I don’t have location information for these. In other words, this is a non-spatial database table. This could live in Oracle or SQL Server, but for the sake of this demonstration it’s just an Excel table.

We’ve got a facility that tells us which park this tree belongs to, but we still don’t have its “XY” location on the map. What I want to find out is where I can find certain trees in my county, so, what parks do I have to visit to discover certain types of trees.

Now in this application, I’ve got a data link between my parks layer, or my land use layer, and the tree database. So, if I have a park selected and I view the additional details for [the park], I can see the spatial details associated with that park and I can also see the trees that are within that park, but I’m not quite there. What I want to find out is which parks contain which trees... and remember, my trees don’t have “XY” locations.

How do I solve this? Well, I’ve already set it up so that we can do a search against this Excel table. So, if I do a search for the word “macadamia”, for example, I will find search results from that Excel table, but I still don’t have the location on the map where these macadamia nut trees exist. I need to create a “join” between these search results and a spatial layer on the map to find the underlying spatial feature. In other words, the park that the trees live within.

What I can do is come back to Geocortex Essentials Manager where I’ve configured this application. And to connect to this Excel spreadsheet, I’ve established a data connection. You can see the connection string that we’ve used here simply points to the spreadsheet. If you’re connecting to Oracle or SQL Server, there’s different syntax that you would use for your connection string, but the same idea exists.
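
For illustration, here is roughly what such connection strings look like for an Excel workbook versus SQL Server. The provider names follow common OLE DB conventions, but the exact options depend on the drivers installed in your environment, and the file path and database names below are made up.

```python
# Illustrative connection strings; exact provider names and options vary
# by installed drivers, and the paths/names here are hypothetical.
EXCEL_CONN = (
    "Provider=Microsoft.ACE.OLEDB.12.0;"
    "Data Source=C:\\data\\trees.xlsx;"
    'Extended Properties="Excel 12.0 Xml;HDR=YES"'
)
SQLSERVER_CONN = "Data Source=dbserver;Initial Catalog=Trees;Integrated Security=True"

def conn_properties(conn_str):
    """Split a key=value connection string into a dict (naive: ignores quoting)."""
    parts = [p for p in conn_str.split(";") if "=" in p]
    return dict(p.split("=", 1) for p in parts)

print(conn_properties(SQLSERVER_CONN)["Initial Catalog"])
print(conn_properties(EXCEL_CONN)["Provider"])
```

The same key=value shape carries over to Oracle or SQL Server; only the provider and source-specific keys change.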

Now that we have that data connection, we can set up what we call a “Search Table.” And a search table gives us a select clause: in other words, which fields are we interested in returning from that table when the user issues a search. In this case, we want the user to be able to search on the common name of the tree (like my example when I typed in the keyword “macadamia”) and find all the attributes from the LA Parks trees in this database. So that search is set up.

I’ve also got the land-use layer in my site configured with a datalink. This datalink means that the layer is joined to this data connection, so that every time I click on a park on the map, I see the associated records from my Excel spreadsheet. Recall, however, that I want to do the reverse. So, our current datalink makes sure that every time I select a park on the map I’m grabbing the trees and joining it on the facility column. Notice that facility column is the name of the column that we're using in the spreadsheet to represent the park that the tree exists within.

There’s this section down at the bottom, here, that allows me to add a search, so that’s the reverse of what we’re currently doing, and it allows me to use one of the searches that I’ve configured to find these features from the land use layer that match my search criteria from my datalink.

I’m going to give this a display name. We’ll just use “Park Trees Search,” and the search table that I’m searching on is the only one that we’ve configured in our site earlier, so it’s this Park Trees Search table. And then the field that we want to join is called “Facility,” and that maps to the name of the land use polygon. So that’s where we get our many-to-one relationship from. I’ll go ahead and save the site with those configuration changes and then refresh our viewer.
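
The join behind the datalink, and the reverse search just configured, can both be sketched with an in-memory database. The table and column names below are illustrative stand-ins for the site's actual schema, not the real configuration.

```python
import sqlite3

# Stand-ins for the non-spatial tree table and the spatial land-use layer;
# schema and values are illustrative, not the actual configuration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE trees(common_name TEXT, facility TEXT);
CREATE TABLE land_use(name TEXT, use_type TEXT);
INSERT INTO trees VALUES
  ('Macadamia', 'Runyon Canyon Park'),
  ('Macadamia', 'Runyon Canyon Park - MRCA'),
  ('Coast Live Oak', 'Griffith Park');
INSERT INTO land_use VALUES
  ('Runyon Canyon Park', 'Arts and Recreation'),
  ('Runyon Canyon Park - MRCA', 'Arts and Recreation'),
  ('Griffith Park', 'Arts and Recreation');
""")

# Datalink direction: a park selected on the map -> its tree records
rows = db.execute(
    "SELECT common_name FROM trees WHERE facility = ?",
    ("Griffith Park",)).fetchall()
print([r[0] for r in rows])

# Search direction: keyword search on trees -> joined back to the layer
parks = db.execute("""
    SELECT DISTINCT l.name FROM trees t
    JOIN land_use l ON l.name = t.facility
    WHERE t.common_name LIKE ?
    ORDER BY l.name""", ("%macadamia%",)).fetchall()
print([p[0] for p in parks])
```

The second query is the essence of the search table plus join-field configuration: the keyword hits the non-spatial table, and the join on the facility column recovers the spatial features.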

Now I’m going to issue a search for the word “macadamia” like I did before, and I’ll find the same four results from my Excel spreadsheet. But now when I drill into a result, we can see the facility that it belongs to. It exists in two different parks: there’s “Runyon Canyon Park” and “Runyon Canyon Park – MRCA”. If I click on one of those it’s going to take me to the park where I can discover these macadamia trees.

Hopefully this quick Tech Tip has shown you how you can configure a non-spatial data source to be searchable inside your viewer and still return spatial results. Thanks for watching!”

Explore more Geocortex Essentials functionality in the Discovery Center.

Discover Geocortex

GIS Health Assessment: A new way to think about your system

When we think about the health of our GIS, many of us are used to beginning with the infrastructure. After all, it is what drives the technical performance of your system. The problem with starting at the infrastructure level, though, is that it’s difficult to get a complete picture of which aspects of the GIS are most important to your users.

While the GIS infrastructure is extremely important, not all the resources in your environment are created equal. You might have some layers or services that are used 3-4 times a week, and others that are accessed thousands of times each week. While you want to do all you can to ensure the entire system is performing as it should, there are only so many hours in a day. With an infrastructure-first approach, you’re often unable to home in on what the most important apps, layers, services, servers, and ArcGIS instances are.


A new way to think about GIS health

It’s time we flip the traditional infrastructure-first approach and begin thinking about GIS health through the lens of end-user productivity. Your GIS is there to help your users do their jobs, so that’s where your analysis should start.

Whether explicitly or implicitly, you’re going to be measured on the productivity of the users you build apps for, not the response time of a specific server. Without the users, there is no need for the GIS infrastructure.

By starting with what your users are trying to accomplish, you’ll be able to map your key business processes and user flows to the GIS infrastructure and resources that are most important to supporting them. Looking at your GIS from users’ perspectives allows you to better understand how it is being used day-to-day and identify the critical resources needed to support your monitoring and optimization efforts.

With so many moving pieces in your GIS, you don’t have time to treat everything equally.  Focusing your efforts will let you be much more productive and spend more time working on high-value activities.

When we talk about a user-first approach to GIS health, there are two major areas that you need to be considering:

Performance: While closely tied to infrastructure performance, what we mean here is the performance of your end-users. Are they able to do their jobs effectively with the tools and applications you’re building? Are your users taking longer than expected to complete certain tasks?

When these things crop up, a user-first approach will help you target your efforts and fix issues quicker. A good example would be if an application had a poorly performing layer. This would be an infrastructure performance issue, but if you understand what specific layers and services are used in that application, you will know where to look to address the issue.

Usability: If your GIS infrastructure is performing as expected, the next area to examine is usability. Usability is all about whether your applications are configured and designed in a way that makes sense for what your users need to do. Strong infrastructure performance combined with poor usability is still poor performance (remember, performance is about end-user performance, not infrastructure).

An example of how usability can affect performance is when a common tool is not in a prominent location in your app. If it’s difficult to find, users will waste time looking for it, take longer to complete a task by using a different method, or abandon it entirely. This is also true when incorrect layers are loaded by default – users end up wasting time searching for the layers they need.

Completing a user-first health assessment

Once you’ve adopted a user-first approach to GIS health, you’re ready to perform a user-first health assessment. What you’re trying to accomplish is mapping out the business processes and use cases that you manage with your GIS to the specific GIS resources that support them.

First, you’ll want to identify the different user groups that leverage your GIS. By user group, we mean a group of users that have common work patterns and engagement with your applications. This could be a group of people (or one person) with the same task in your head office, or it could be a specific field crew that uses an app on a tablet. The key here is to identify people who use the GIS in similar ways.

We’ve created a checklist to help you perform a health assessment; it’ll help you map what your different user groups need to accomplish to the GIS resources and infrastructure required to support their work.

The checklist contains areas to detail the users and what they need to do, the app(s) they use, the most-used layers, the services the app(s) consume, which ArcGIS products are used and how they’re configured, and the server(s) that support it.
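
One way to capture a checklist row in structured form is sketched below; every name and value is a hypothetical example, not a field prescribed by the checklist itself.

```python
# A minimal sketch of one health-assessment checklist row; the field names
# follow the areas listed above, and all values are hypothetical.
assessment = {
    "user_group": "Hydrant inspection field crew",
    "tasks": ["locate hydrant", "record inspection"],
    "apps": ["Hydrant Inspector (tablet)"],
    "top_layers": ["Hydrants", "Water Mains"],
    "services": ["WaterUtility/MapServer", "Inspections/FeatureServer"],
    "arcgis_products": ["ArcGIS Enterprise"],
    "servers": ["gis-prod-01"],
}

def resources_to_monitor(rows):
    """Collapse checklist rows into the distinct services/servers to watch."""
    services = sorted({s for r in rows for s in r["services"]})
    servers = sorted({s for r in rows for s in r["servers"]})
    return services, servers

services, servers = resources_to_monitor([assessment])
print(services)
print(servers)
```

Collapsing the rows this way yields exactly the prioritized list of resources that the monitoring section below the checklist calls for.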

Get your GIS health assessment checklist now  

What to do with your health assessment

Once you’ve completed your GIS health assessment, you can use the information you’ve gathered to proactively monitor the GIS resources that are the most important. Tools like Geocortex Analytics allow you to configure personalized dashboards that provide a snapshot of the resources you want to monitor.

You can also configure alarms and notifications in some systems monitoring tools. Because you know what you need to monitor, you can set thresholds for warning signs of potential issues and have notifications sent to your email.

Next, identify anomalies among your use patterns. If certain users are performing notably better or worse than the average, you can dive into the specifics of how those individuals are using the applications you’ve built. Replicate the superior use patterns and examine the weaker patterns to gauge if there is a potential gap in training or understanding of certain functions.
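
A simple way to flag such anomalies is to compare each user's average task time against the group. The numbers below are invented, and the 1.5 z-score threshold is an arbitrary example, not a recommended setting.

```python
import statistics

# Hypothetical average task-completion times (seconds) per user
times = {"alice": 42, "bob": 45, "carol": 44, "dave": 95, "erin": 41}

def flag_outliers(user_times, z_threshold=1.5):
    """Flag users whose mean task time deviates strongly from the group."""
    values = list(user_times.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return {u: round((t - mean) / stdev, 2)
            for u, t in user_times.items()
            if abs(t - mean) / stdev > z_threshold}

print(flag_outliers(times))
```

A flagged user is only a starting point: the z-score tells you who to look at, not whether the cause is training, workflow design, or a genuinely different workload.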

If you want to learn how all of this is possible with Geocortex Analytics, we’d like the chance to show you! We’ve recently added great new features (including individual user reporting) and made significant improvements to performance and reliability. Get in touch with us using the button below.

Let's chat

ArcGIS Pipeline Referencing: Choose the best data model

Over the past few weeks, I’ve shared foundational knowledge about how data is stored and managed in the pipeline industry. My first post introduced ArcGIS Pipeline Referencing (APR) and explained some options operators have in adopting it. My second post worked to demystify the confusing world of pipeline data models (there’s a lot to consider).

In this post, I will outline important information you need to consider when choosing a data model for your organization, including: 

  • Limitations of current data models;
  • How APR is addressing these limitations; and
  • Questions you should ask yourself to help assess the best data model for your organization (should you choose to move to APR).

Limitations of Existing Models

A data model is defined as “an abstract model that organizes elements of data, and standardizes how they relate to one another and to properties of real-world entities.”

In the pipeline space, real-world entities include not only the pipe and valves that make up the pipeline system, but all of the things that happen to and around the pipe. Repairs & replacements, surveys & patrols, one-call, cathodic protection, ILI & CIS assessments, HCA & class location, land ownership & rights-of-way, and crossing information all have components that, in one form or another, need to be captured in your GIS.

The differing needs of these complex data representations expose limitations in legacy systems. And in a world where critical business decisions must be made from this data, identifying limitations and addressing them is an important step as we move to next-generation solutions.

Limitation #1: Data volume

As the years have progressed, the operational and regulatory needs surrounding pipelines have increased. These needs are driving new levels of inspections and analyses on pipeline systems - resulting in more data, both in terms of volume and complexity. The legacy systems were simply not designed to handle the volume of data current programs produce.

An example is the case of Inline Inspection (ILI) and Close Interval Survey (CIS) data. A single ILI or CIS inspection results in hundreds of thousands of records. With assessment intervals recurring every 1-7 years, and operators performing dozens of inspections each year, the resulting records from these inspections alone add millions of records to the database. This doesn’t include follow-up inspections, digs, and run comparison activities.
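
A quick back-of-envelope calculation shows how those figures compound; the specific numbers below are illustrative midpoints of the ranges above, not measured values.

```python
# Sizing from the figures above: each ILI/CIS inspection yields hundreds
# of thousands of records, and an operator may run dozens per year.
records_per_inspection = 250_000   # "hundreds of thousands" (assumed midpoint)
inspections_per_year = 24          # "dozens each year" (assumed)

records_per_year = records_per_inspection * inspections_per_year
print(f"{records_per_year:,} new records per year")
# ...and that is before follow-up digs and run comparisons are counted.
```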

When you couple the sheer volume of records with complexities surrounding data management and the need to provide a high-performance environment, limitations in the system are quickly exposed. These limitations force operators to make difficult data storage decisions, often choosing to remove subsets of data from the system of record. This is sub-optimal to say the least; it significantly impacts your ability to view and analyze important trends in the data.

Limitation #2: Engineering Stationing

Engineering stationing is important, complex, and deeply rooted in pipeline data management. Before modern GIS, operators kept track of the location of pipelines and associated appurtenances using engineering stationing on paper or mylar sheets. Because the vast majority of pipelines in use today were constructed before modern GIS technology existed, large volumes of data were referenced with this approach.

Engineering stationing doesn’t benefit all operators; however, companies that manage gathering and distribution assets find this method burdensome … and dare I say unnecessary?

When traditional data models were developed, the need to adhere to legacy engineering stationing outweighed the need to redesign the entire system to favor a spatial-first approach. But as technology has improved, and more users have embraced spatial data, new methods to blend modern (spatial-first) and legacy (stationing-first) models have emerged. Operators need this flexibility when managing their assets.
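To picture the blend of the two approaches: a stationing-first system records an asset at, say, station 12+50 along a line, while a spatial-first system wants coordinates. A minimal sketch of converting a station value to a coordinate by interpolating along the route's vertices (hypothetical function names; it assumes stationing matches geometry length, with none of the station equations, gaps, or overlaps real pipelines accumulate):

```python
import math
from typing import List, Tuple

def station_to_point(route: List[Tuple[float, float]], station: float) -> Tuple[float, float]:
    """Interpolate an (x, y) position at a given station (distance along the
    route). Simplification: assumes stationing equals geometric length, with
    no station equations -- rarely true on long-lived transmission lines."""
    remaining = station
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if remaining <= seg:
            t = remaining / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        remaining -= seg
    return route[-1]  # station beyond the line: clamp to the end

# Station 12+50 = 1,250 ft along a simple two-segment route
print(station_to_point([(0, 0), (1000, 0), (1000, 1000)], 1250.0))  # (1000.0, 250.0)
```

The gap between this idealized conversion and reality (re-routes, station equations, remeasured centerlines) is exactly why blended models that carry both representations are valuable.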

Limitation #3: Native support for the Esri Platform

The emergence of the Pipeline Open Data Standard (PODS) represents the last major shift in data storage solutions for pipelines, and it happened nearly 20 years ago. At that time, the GIS landscape was both immature and fragmented. As a by-product, PODS was designed specifically to be GIS-agnostic. In the nearly two decades since, Esri has emerged as the predominant provider of spatial data management, and they have developed a suite of solutions that enable stronger collection, management, and analysis of data.

Chances are your organization embraces Esri for spatial data management and content dissemination, which raises the question: "If your organization has standardized on Esri technology, does it make sense to employ a data structure that does not natively support the environment?" (Hint: probably not.)

Addressing and Improving Limitations

The core of APR has been engineered to address the most important limitations of the existing PODS and APDM designs, and it directly addresses the three limitations described above.

Improvement #1: Data volume

APR has been engineered to handle high volumes of data more efficiently, with a focus on scalability: it supports large datasets, time-aware data, and the ability to offload storage to other systems. To achieve this, available rules can be configured to allow a more fine-grained approach to managing data during routine line maintenance. Implementations are no longer limited to either keeping the data referenced to the LRS or detaching it.

Changes like these allow operators to keep more data in the system, providing a baseline for more powerful analysis and decision making.

Improvement #2: Engineering Stationing

As explained above, engineering stationing is firmly rooted in pipeline data management, but it’s not required for all operators. New construction, gathering systems, and vertically-integrated distribution companies are finding the rigorous application of stationing to be unnecessary overhead. If your current database repository requires it, and your organization doesn’t rely on it, you are taking on unnecessary data management cycles - costing valuable time and money.

APR not only supports both stationed and non-stationed data management methods; its flexibility allows stationed and non-stationed lines to exist in the same model. Let that sink in for a bit: operators that have deployed two separate data management systems can now consolidate the management of these assets! This functionality benefits a majority of the clients I've worked with over the years.
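To make that concrete, here is a hypothetical sketch of what a mixed model can look like. The class and field names are invented for illustration and are not APR's actual schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StationEquation:
    """Maps engineering stations across a gap or overlap (simplified)."""
    back_station: float   # station value where the equation applies
    ahead_station: float  # station value that resumes after the equation

@dataclass
class PipelineRoute:
    route_id: str
    length: float  # continuous measure along the line, e.g. feet
    # None = a non-stationed (spatial-first) line; a list (even empty)
    # = a stationed line, possibly carrying station equations.
    stationing: Optional[List[StationEquation]] = None

    @property
    def is_stationed(self) -> bool:
        return self.stationing is not None

# A legacy stationed transmission line and a spatial-first gathering
# line coexisting in one model -- no second system required.
legacy = PipelineRoute("TX-001", 52_800.0, stationing=[StationEquation(1000.0, 1200.0)])
gathering = PipelineRoute("GTH-07", 8_400.0)
print(legacy.is_stationed, gathering.is_stationed)  # True False
```

The point of the sketch is the optionality: stationing becomes an attribute a line may carry, not a prerequisite the whole repository imposes.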

Improvement #3: Native support for the Esri Platform

As I stated in my previous post, APR is (possibly most importantly) integrated with the ArcGIS platform. You can perform complex long transactions on your data, analyze it in ways that have not been possible before, keep track of product flow using the Utility Network, and get the data into the hands of your organization with methods that are integrated, fluid, and connected.

Considerations for Implementation

If you’re considering implementing ArcGIS Pipeline Referencing (APR), knowing why -- and which data model to use with it -- has more to do with your business than with IT. Success can be achieved with either model.

But how do you decide which one is best for your organization? Here are some questions to consider as you’re laying the foundation for your next-generation GIS implementation.

1) Business focus: What segment of the vertical are you in?

If you are a distribution company with transmission assets, the decision is pretty clear: you should choose Utility and Pipeline Data Model (UPDM). It’s designed as a distribution-first model, allowing you to integrate the management of distribution and transmission assets in a single model.

If your company is ‘upstream’ of distribution, the answer gets a bit trickier. Both models are adequate, but my vote tends to lean towards PODS for a few reasons:

  1. For operators without distribution assets, out-of-the-box PODS supports APR more natively than UPDM does.
  2. Are you a liquids operator? UPDM is focused on gas utility and transmission assets, so the PODS model will provide a better solution for those moving liquid products.
  3. PODS is a thriving community of operators and vendors working together to design a comprehensive model for the industry. This collection of subject matter expertise is invaluable to operators -- and provides an opportunity to share your experience with like-minded individuals.

2) Existing model: What are you using now?

As you consider moving to APR, understand that it’s going to require a data migration: the existing system will need to be mapped and loaded into the new solution. If you are currently using PODS and are a gathering, midstream, or transmission company, APR with PODS is probably the best solution to implement. Your existing data is likely to migrate more seamlessly, and the model will make more sense to those who manage and interact with the data.

If your organization is primarily gas distribution, and you’ve implemented a PODS model for a small subset of high-pressure assets in the system you manage, consider UPDM. You can take advantage of the intended benefits and consolidate those assets into a common platform.

3) Other programs: ILI, CIS, other survey, Cathodic Protection

If your company has a strong investment in recurring inspections, PODS again rises as the preferred model, especially considering the efforts of the PODS Next Generation initiative around how to efficiently store and process this data moving forward.

4) Interoperability

With the growing importance of importing and exporting data (due to acquisitions, divestitures, etc.), analysis, and reporting, a system that promotes standard mechanisms to exchange data becomes increasingly important. With the work the PODS organization is putting into a data interchange standard, it again rises as the preferred model.

There isn’t just one approach, but there is a best approach for your organization

While this change is beneficial for operators, many things need to be considered before you commit to an approach. I hope my series of posts provides some clarity for you. To stay up-to-date on the data model landscape and the tools surrounding it, I encourage you to follow the PODS association and Esri. The work of these two organizations in the pipeline space is a great thing for our industry.

If you’d like to discuss any of these concepts further, or would like to have a conversation about which model is best for your implementation, please get in touch with me here. The rest of the Energy team at Latitude and I are eager to offer our years of expertise to help you.