Your GIS is about serving your users and keeping them productive, and it’s important to understand how they’re interacting with the tools and apps you’re providing. Being able to dive into the usage of specific tools is a major component of understanding the return on your GIS investment.
In this Geocortex Tech Tip, Derek Pettigrew (Geocortex Analytics Product Manager) shows you how Geocortex Analytics allows you to drill down and understand usage levels for the specific tools in your applications.
“Hi, my name is Derek, I’m the Product Manager for Geocortex Analytics. Today, we’re taking a look at tool usage in your Geocortex applications to understand how your end-users are interacting with your apps in more detail. Let’s take a look.
Here we are in Geocortex Analytics, and we’ve drilled down to the Geocortex application section, looking at the LA County applications. What we’re looking to understand is how users are engaging with the tools in our application. As you can see, we have this wonderful panel to tell us that information.
We can see that workflows are a very popular tool, along with identification, active tool sets, simple query builders, and many others. You can drill down further and see that there are markups, collaboration, and layer catalog usage - all very useful for understanding the return on investment around those tools.
Not only can we do this, we can also go through and break this down into further details to really see, at the granular level, how people are engaging with your tools. If I select “include details” I can now see that my identify operation is mostly [used] through the rectangle tools, and my most-run workflow is a demographic query area. I can continue back and forth through this, looking at different areas, [for example] seeing that the other run workflows here are for parcel reports.
And if I really want to understand my workflows and which ones are being used in this application, I can filter this [view] to show only the workflows I actually want to see. So, I filter for workflows, and now I can see my workflows by popularity of use, and how often they were used in the “occurrences” area here. I can see my demographic query one is very popular, followed by parcel reports, profile tool, drive time, and road closures.
By using this, we can really understand what’s going on with your end-users, so we can build better tools to help them accomplish their tasks. That is all, thank you!”
Are you thinking it might be time to assess the health of your GIS? We've created a checklist that will help you perform a user-first GIS health assessment.
The Geocortex Essentials identify operation allows you to draw a geometry on the map, and have the application return a collection of features that intersect that geometry. But the identify operation will only return results from your GIS layers, and many (likely most) of us integrate our GIS with various 3rd party business systems, such as asset management, document management, ERP, and business intelligence.
In this week’s Tech Tip, Drew Millen will show you how to invoke a Geocortex Essentials workflow from an identify operation to return non-GIS results. Perhaps you want to see documents in your document management system displayed on the map, or geo-located tweets for a specific area. Kicking off a workflow from the identify operation will allow you to display these types of results and will help you avoid writing a ton of custom code to do so.
“Hi, I’m Drew Millen, Director of Products at Latitude. In this short Tech Tip video, we’re going to talk about workflows; specifically how you can make Geocortex Essentials workflows run in Geocortex Viewer for HTML5 when you perform an identify operation. Let’s dive in.
I’m going to show you how to use identify workflows, which are Geocortex Essentials workflows that piggyback on top of the identify functionality. In this site, I’ve got the default identify behavior working, so when I perform an identify [operation], I’m going to find cities on top of this map, and I want to run a workflow every time I perform an identify as well.
Let’s look at the configuration file that supplies the configurations for this viewer. There’s a module in here called “identify”, and we want to configure the identify behavior. This [view you’re seeing] is the desktop.json.js file that configures the viewer we were just looking at. Notice that the identify module has a section called “identify providers”. It’s here that we specify which logic will run when an identify is performed by a user: for example, querying a graphics layer, or querying the map itself. And down here, I’ve added a workflow identify provider. I’ve also supplied some configuration to this identify provider, so it’s looking for workflows in my site with the suffix “_identify”. Any workflow I’ve added with this suffix will be run by this workflow identify provider.
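A rough sketch of what that configuration might look like in the desktop.json.js file (the module and property names here are illustrative assumptions, not the exact schema; check your own viewer's configuration for the real names):

```json
{
  "moduleName": "Identify",
  "configuration": {
    "identifyProviders": [
      { "name": "MapIdentifyProvider" },
      { "name": "GraphicsLayerIdentifyProvider" },
      {
        "name": "WorkflowIdentifyProvider",
        "workflowSuffix": "_identify"
      }
    ]
  }
}
```

The key idea is simply that the workflow identify provider sits alongside the default providers and is configured with the suffix it uses to find matching workflows in the site.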
With that in place, let’s author our workflow. I’m going to open the Geocortex Essentials Workflow Designer. If you go into the “file” menu and click on “new”, you’ll see that we’ve provided a template for creating identify workflows. This template supplies a basic example to help you get started. If you look at the arguments, an identify workflow is expecting a geometry as an input argument. That geometry comes from the identify the user performs. It’s also expecting a unique identifier just for some bookkeeping. We can just ignore that property.
The other properties are output arguments - things that your workflow must supply. For example, the feature set that’s returned from your query, the display name for that collection of features, and any aliases and formats that you want to use for the features that come back. In this example, we simply query a sample layer from ArcGIS Online that looks at [US] states. The geometry from the identify operation is passed in as a parameter to perform that identify. We set the display name to be "states” and we supply some aliases for the fields that are going to come back, making it readable for the user. And we supply some format strings for features that are going to be displayed in the map tips and results list.
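Conceptually, the workflow described above behaves like the following sketch. This is JavaScript for illustration only - the real workflow is authored visually in Workflow Designer as XAML - and the layer fields, alias names, and format strings shown are hypothetical placeholders, not the sample layer's actual schema.

```javascript
// Illustrative sketch of what an identify workflow conceptually does.
// The actual workflow is built visually in Geocortex Essentials Workflow
// Designer; field names and format strings below are hypothetical.

// Build the REST-style query the workflow performs: find features in a
// states layer that intersect the geometry from the user's identify.
function buildIdentifyQuery(geometryJson) {
  return {
    geometry: JSON.stringify(geometryJson),  // input argument from the identify
    geometryType: "esriGeometryEnvelope",
    spatialRel: "esriSpatialRelIntersects",
    outFields: "STATE_NAME,POP2010",         // fields we will alias for readability
    f: "json"
  };
}

// Assemble the output arguments the viewer expects back from the workflow.
function buildIdentifyOutputs(featureSet) {
  return {
    featureSet: featureSet,            // the features returned by the query
    displayName: "States",             // heading shown in the results list
    aliases: { STATE_NAME: "State", POP2010: "Population (2010)" },
    formats: { POP2010: "{0:N0}" }     // format string for map tips / results
  };
}
```

The viewer merges these outputs into its results list alongside the default identify results, which is why the display name, aliases, and formats matter for readability.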
With this workflow developed, we [now need to] attach it to our site so that it can be run every time we perform an identify operation. Let’s look at this app in Geocortex Essentials Manager, and I’ll navigate down to the workflows tab where I want to attach the workflow we were just looking at. Recall that it must have an “_identify” suffix to be picked up by my workflow identify provider, so I’ll give it the name “helloworld_identify”. Now it’s looking for the URL or URI of the workflow I just authored. The workflow is stored on my workstation as “helloworld_identify.xaml”, so I’m going to browse for it, go into the directory that we have for this site, upload it into a folder I created called “resources”, and select it.
Now Geocortex Essentials Manager is smart enough to know that this workflow has parameters, so I’m being prompted to supply them here. Because the geometry and unique identifier are going to be supplied by the identify operation, we don’t need to supply them here.
The workflow is now attached to my site, so I’ll go ahead and save it. Let’s refresh the viewer and see the resulting behavior. I’m going to run an identify again, which will identify the cities, but it should also run my workflow and grab the states. Here we can see the result of my states workflow populating the list of results that I expected.
To see something more advanced, we’ve also done the same thing by integrating a workflow that uses the Twitter API to find tweets within a geographic area. In this case, I’m going to perform an identify at the San Francisco airport and discover all the tweets that have been added in this area in the last hour. This is a more sophisticated example of using an identify workflow in a Geocortex Viewer for HTML5 application. To learn more, please get in touch. Thanks for watching!”
Want to learn more about Geocortex Essentials? Visit our Discovery Center to take it for a spin and explore real-world examples of how Geocortex Essentials helps organizations address common (and not-so-common) business challenges.
When we think about the health of our GIS, many of us are used to beginning with the infrastructure. After all, it is what drives the technical performance of your system. The problem with starting at the infrastructure level, though, is that it’s difficult to get a complete picture of which aspects of the GIS are most important to your users.
While the GIS infrastructure is extremely important, not all the resources in your environment are created equal. You might have some layers or services that are used 3-4 times a week, and others that are accessed thousands of times each week. While you want to do all you can to ensure the entire system is performing as it should, there are only so many hours in a day. With an infrastructure-first approach, you’re often unable to home in on which apps, layers, services, servers, and ArcGIS instances are most important.
A new way to think about GIS health
It’s time we flip the traditional infrastructure-first approach and begin thinking about GIS health through the lens of end-user productivity. Your GIS is there to help your users do their jobs, so that’s where your analysis should start.
Whether explicitly or implicitly, you’re going to be measured on the productivity of the users you build apps for, not on the response time of a specific server. Without the users, there is no need for the GIS infrastructure.
By starting with what your users are trying to accomplish, you’ll be able to map your key business processes and user flows to the GIS infrastructure and resources that are most important to supporting them. Looking at your GIS from users’ perspectives allows you to better understand how it is being used day-to-day and identify the critical resources needed to support your monitoring and optimization efforts.
With so many moving pieces in your GIS, you don’t have time to treat everything equally. Focusing your efforts will let you be much more productive and spend more time working on high-value activities.
When we talk about a user-first approach to GIS health, there are two major areas that you need to be considering:
Performance: While closely tied to infrastructure performance, what we mean here is the performance of your end-users. Are they able to do their jobs effectively with the tools and applications you’re building? Are your users taking longer than expected to complete certain tasks?
When these issues crop up, a user-first approach will help you target your efforts and fix them more quickly. For example, suppose an application has a poorly performing layer. That is an infrastructure performance issue, but if you understand which specific layers and services that application uses, you will know where to look to address it.
Usability: If your GIS infrastructure is performing as expected, the next area to examine is usability. Usability is all about whether your applications are configured and designed in a way that makes sense for what your users need to do. Strong infrastructure performance combined with poor usability is still poor performance (remember, performance is about end-user performance, not infrastructure).
An example of how usability can affect performance is when a common tool is not in a prominent location in your app. If it’s difficult to find, users will waste time looking for it, take longer to complete a task by using a different method, or abandon it entirely. This is also true when incorrect layers are loaded by default – users end up wasting time searching for the layers they need.
Completing a user-first health assessment
Once you’ve adopted a user-first approach to GIS health, you’re ready to perform a user-first health assessment. What you’re trying to accomplish is mapping out the business processes and use cases that you manage with your GIS to the specific GIS resources that support them.
First, you’ll want to identify the different user groups that leverage your GIS. By user group, we mean a group of users that have common work patterns and engagement with your applications. This could be a group of people (or one person) with the same task in your head office, or it could be a specific field crew that uses an app on a tablet. The key here is to identify people who use the GIS in similar ways.
We’ve created a checklist to help you perform a health assessment; it’ll help you map what your different user groups need to accomplish to the GIS resources and infrastructure required to support their work.
The checklist contains areas to detail the users and what they need to do, the app(s) they use, the most-used layers, the services the app(s) consume, which ArcGIS products are used and how they’re configured, and the server(s) that support it.
Once you’ve completed your GIS health assessment, you can use the information you’ve gathered to proactively monitor the GIS resources that are the most important. Tools like Geocortex Analytics allow you to configure personalized dashboards that provide a snapshot of the resources you want to monitor.
You can also configure alarms and notifications in some systems monitoring tools. Because you know what you need to monitor, you can set thresholds for warning signs of potential issues and have notifications sent to your email.
Next, identify anomalies among your use patterns. If certain users are performing notably better or worse than the average, you can dive into the specifics of how those individuals are using the applications you’ve built. Replicate the superior use patterns and examine the weaker patterns to gauge if there is a potential gap in training or understanding of certain functions.
If you want to learn how all of this is possible with Geocortex Analytics, we’d like the chance to show you! We’ve recently added great new features (including individual user reporting) and made significant improvements to performance and reliability. Get in touch with us using the button below.
Geocortex Workflow 5 offers such a wide variety of activities, tools, and functionality that it can be a little confusing to understand exactly what everything does. To help you out, we’ve added several in-app features that provide context and help you better understand the tools and activities you’re using.
In this week’s Geocortex Tech Tip, Ryan Cooney (Geocortex Workflow 5 Product Manager) provides a detailed look at how to use the in-app help system to build the best workflows possible.
“Hi, I’m Ryan, and I’m a Product Manager at Latitude. Today we’re going to look at the in-app help system in Geocortex Workflow 5… let’s do it!
When you’re authoring a workflow, there’s a number of ways to get help. The first is going to be through this help menu. If I click on the help button, it’s going to open this help tab. It’s got a few standard links to useful parts of our documentation:
“What’s New” is basically our release notes. Every time we update [Geocortex] Workflow, we’re going to publish to this page.
And this is on a public Documentation Center, where all of our Geocortex [product documentation is available]. Geocortex Workflow is in here, and we’re describing any new features that have been added, as well as any issues that were fixed. [You can access] the full details down here.
Also, we’ve got a couple sections here on key concepts and getting started. This is a great place to start for anyone new [to Geocortex Workflow]. It will take you through basically an end-to-end process that will help you get up and running with [Geocortex] Workflow. You can use this to [help you] build your first workflow.
We also have complete documentation of all the activities and form elements that are supported in [Geocortex] Workflow. And this is going to describe everything about the activity or form element; what its inputs are, what it’s actually doing. [You’ll also find] keyboard shortcuts, so things like “Ctrl+S” to save, and some more advanced topics are in here. [There’s] a full list of all the keyboard shortcuts we support.
The last link in here is a link to the Community. Geocortex Workflow has an [online] community where users can post questions to forums – we have product announcements in here, and there’s lots of good stuff in the forums, and there’s tons of traffic. So, if you have a question, do post it on the [Geocortex Workflow] Community and you’ll get feedback right away.
That’s it for the help panel itself, but there’s a few other places help shows up. Anywhere you see a question mark in the application, you can mouse over it and get some information about it: in this case, in the tool box, there’s information about these activities. I can see the “container” activity, and it’s got this information about it. And if I click the “more” link, it will take me to the [Geocortex] Documentation Center where it’s describing the activity and all of its inputs and outputs, as well as any info about whether this activity has any requirements for working offline. In this case, this “container” activity has no special requirements for offline. It will work offline.
We have this info available for every activity. If we look at an example from this workflow I’m designing right now, I can get the same information in the right-hand properties panel. So, if I just click on an activity, I get the same sort of information in this panel. And also, for any inputs, outputs or properties, I can get information about the specific ones. So, the “if” activity has a condition input that’s of type “Boolean”. In this case it takes me to the Mozilla Developer Network, which describes what a Boolean is.
The same applies to forms. If I open up this form and select “form element”, the properties panel is going to provide me information about -- in this case the “text box” input -- and if I click on “more info” I’m taken to a page describing this “text box” input, [including] all of the properties that it has available for configuration, and their type information.
So, there is help everywhere in [Geocortex] Workflow Designer, so take advantage of that help system. And all that documentation is in our public [Geocortex] Documentation Center that you can browse at your leisure, even outside of just Geocortex Workflow Designer, so take advantage of that.”
Want to see Geocortex Workflow in action and take it for a spin?
Geocortex Analytics dashboards allow you to monitor a collection of GIS resources all in one, easy-to-consume view. Dashboards can be personalized to contain whichever resources and infrastructure you want to monitor.
Perhaps you want to monitor the most-used resources for a set of your users? With Geocortex Analytics, you can build a dashboard that displays all the resources that are critical to the productivity of those users.
In this Geocortex Tech Tip, Derek Pettigrew (Geocortex Analytics Product Manager) shows you how to create a dashboard and populate it with the different resources you want to monitor.
Hi, my name is Derek. I’m the Geocortex Analytics Product Manager, and today we’re going to take a look at configuring dashboards [in Geocortex Analytics], so you present all the panels you’re looking for in one place.
Let’s add a dashboard by going to the top right-hand corner, where we see “my dashboards”. We select this, add a new dashboard and name it based on the theme. I’m going to look at a theme for LA County services, so I’ll call it that: “LA services”. By creating the dashboard, I now have one available, but I have not yet added any content to it.
To add content, I need to go and find the panels I want to add to my dashboard. We’re looking at services performance overall, so we’ll probably want to see things that impact it, such as [the physical] server, the ArcGIS server, and maybe Geocortex Essentials as well.
Now looking through [Geocortex Analytics], I have a panel in front of me with a lot of service information on it, but it’s not just for the LA County area that I’m looking for. What I can do is apply a filter to it and type in only things for “Los Angeles”. When I apply this filter, now I can see all my services for “Los Angeles”, and this filter will be retained when I apply it to my dashboard.
I go over here, to this dashboard icon, and I can add [this panel] to the dashboard I just created for LA Services Performance. Now I have that added; I can also go through and take a look at my server that [the LA County service] resides on to see how it’s performing so I can correlate that information. I’m going down to my actual memory usage - I think this is very useful to understand how my service is performing to ensure the actual hardware can handle the demands. So, I’ll also add that to my dashboard for LA Services Performance.
Finally, I’m going to go and filter this whole menu, and take a look for my LA county application that I want to monitor. Selecting this, what I’m looking for is to see the overall traffic that’s coming in, to see how that actual service is performing under the demand of end-users. And I’ll also add that one in to my new dashboard.
Now that I’ve gone through and added numerous items to my dashboard, let’s take a look at it. This is what I’m seeing in my dashboard. I’m able to go through and have this wonderful panel just with my filtered results for “Los Angeles”. I’m able to see the server it resides on, and I’m able to go down and see the actual visitor types coming through to my applications, all in one place. And these dashboards are also exportable to PDF for sharing.
That’s how to create a [Geocortex Analytics] dashboard. Thank you for your time.
Last year we released Geocortex Workflow 5 and – for the first time – we delivered a Geocortex product as software-as-a-service (SaaS). This is the direction Esri has been moving with ArcGIS Online, and it has been the industry standard for most business software applications since the early 2000s: GIS arrived a little late to the game.
For those who aren’t familiar, SaaS is a way of delivering centrally-hosted software via the Internet – instead of downloading and configuring a piece of software in your environment, you access the application in a web browser.
One of the key benefits of SaaS is the concept of continuous deployment. Because applications are centrally hosted by the provider, new software updates can be pushed to customers with a much higher frequency.
Future-proofing your investment in Esri technology has always been at the forefront of our product direction, and this is an inherent protection that comes with SaaS delivery.
Since our beta release of Geocortex Workflow 5 last August, we’ve built and deployed 11 successive releases – each containing exciting new features and improvements – all without interruption to our users. Customers didn’t require compatibility testing, or incur any downtime to upgrade their applications. In fact, most probably didn’t notice: they simply logged in to do their work and were running the most current version of the software.
Many of us have experienced the pain of needing dedicated GIS- or IT professional assistance to complete an upgrade. It can take the better part of a workday, and involves significant planning and testing to ensure everything works as it did before (and even then, it sometimes doesn’t).
Due to the resources required to keep large, on-premises software deployments current, many organizations avoid upgrades, only to find themselves in a sticky situation a few years down the road when they’re finally forced to do so. This is never a good situation to be in, and the ease of upgrades with SaaS helps organizations limit the technical debt that comes with delays.
Rapid deployment of new features
We all love new features; with lots of traditional solutions, however, we can end up waiting months for new features that need to be scheduled into a long release cycle. SaaS, in comparison, allows new features to be built and deployed at a rapid pace. In some cases, new features can be delivered within a matter of days.
There is no better example of this than our 2017 Geocortex Business Partner Summit, an annual event where we bring all our international partners to our Victoria Head Office for three days of learning and collaboration. We dedicated the first day to a Geocortex Workflow Hackathon, where our partners teamed up to build the best workflows they could.
Our partners built some really cool apps, and provided a bunch of great feedback to our Product team about their experiences with Geocortex Workflow 5. By the last day of the Summit (roughly 48 hours later), our Product Manager was able to show them some of the feedback that the team had already implemented in the product.
Applications that are always secure
As noted above, organizations are often saddled with technical debt from not upgrading once new versions become available. Not only can this result in more work when you’re finally forced to upgrade, it can expose your applications to security vulnerabilities.
Many of our customers use their GIS for mission-critical business applications, which makes security even more critical.
The rate at which new releases are pushed to production helps ensure that your applications stay secure. Instead of waiting to upgrade to new versions, every time you log in you know you’ll be working with the latest version of the software, tested to ensure the strongest security measures are in place.
This is the approach most SaaS providers have adopted. For example, your Google Chrome browser has likely updated every week without you noticing. Google takes security very seriously, and pushing consistent updates lets them ensure that users are always running the most secure version of the browser.
Lower cost, better uptime
SaaS applications are centrally hosted, and this ensures that your applications always run in an optimal environment. When customers host their own applications, hosting environments are configured inconsistently and may not be set up for the best performance.
For Geocortex Workflow, we use world-class hosting services from Microsoft Azure. If anything goes wrong with the hardware in the hosting environment, teams of Microsoft engineers are standing by to fix the issue. And if something goes wrong with the software, we have a team ready to investigate and fix the issue, which can typically be achieved by pushing a new release or software patch.
The cost of hosting your applications is included in the cost you pay for the product. The hosting environment is scalable and includes protections such as failover, which are both difficult and expensive to achieve in your own environment. You don’t need your IT department to worry about managing your servers - we take care of all the hosting and associated maintenance. It’s a win-win: you free up your people for more important work, and you get a hosting environment that’s set up in the best possible way for the product.
It’s easy to see why SaaS offerings are becoming the standard for business software solutions. Being able to offload the bulk of the costs associated with hosting your own applications can save you (and has saved many organizations) thousands of IT hours and dollars each year. The lower initial cost and the rapid delivery of new, powerful features are why we think SaaS is the direction more and more GIS tools will move towards.
We’ll still be offering on-premises solutions for situations where it makes the most sense, but SaaS is where we’re headed. This is the model Esri has adopted with ArcGIS Online, and is sure to be the model for future ArcGIS products.
If you’d like to chat about what adopting a SaaS-first approach would look like for your organization, please feel free to get in touch with us. You can also get your hands on Geocortex Workflow 5 in our Discovery Center, complete with tutorials, demonstrations, and sample applications you can try.
One thing that we’ve been hearing, though, is that developers want the ability to apply their own code in the workflows they’re building.
So, what are “activities”? In the simplest terms, they’re the building blocks of a workflow - each activity represents a unit of work. Geocortex Workflow 5 offers more than 150 pre-built activities that chain together to automate almost any task. Activities such as geocode, query layer, set the map extent, get user info, calculate distance, buffer geometry, run geoprocessing, and so many more allow you to streamline even the most complex GIS and business tasks.
Flex your development chops and write activities to perform tasks that weren’t previously possible – or were extremely complex to assemble with pre-built activities. You can combine your programming skills with Geocortex Workflow’s intuitive, activity-based design to build powerful applications.
Custom activities can be built for yourself, or for others in your organization; even non-developers can work with them as they would any other activity in Geocortex Workflow Designer. And provided that your technology of choice supports the functionality you’re building, custom activities can be consumed in Geocortex and/or Web AppBuilder for ArcGIS applications.
Take Geocortex Workflow 5 even further
While most tasks can be automated with the pre-built, out-of-the-box (OOTB) activities offered with Geocortex Workflow 5, you can now build anything you want with the SDK. Custom integrations, talking to custom web services, connecting with 3rd party APIs, and interfacing with custom code in your existing apps are now all possible.
Here are a few examples of what you can do with custom activities:
Perhaps you want to integrate with a 3rd party system like SAP®. While this is possible with pre-built activities, you’ll be manually assembling workflow logic to make the web requests, parse the responses, and execute your business logic. With the latest updates, you can achieve a cleaner, more efficient, and more consumable result by wrapping the logic in a few simple custom activities.
Many common tasks are time-consuming to build – maybe you find yourself using the same pattern over and over in one workflow. Instead of following this repetitive pattern, you can bundle all the logic within a single custom activity. An example might be sorting map features by multiple columns. Pre-built activities are available that will sort data by one column, but it’s more efficient to write a custom activity to sort by multiple columns than it is to link activity after activity – especially if you need to perform these tasks across multiple applications and workflows.
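For illustration, a multi-column sort wrapped as a custom activity might look roughly like the sketch below. This is a hedged example: the real Geocortex Workflow SDK scaffolding, base types, and registration steps are omitted, and the names used here are assumptions rather than the SDK's actual API. The core shape - inputs, outputs, and an execute method - is the point.

```javascript
// Hypothetical sketch of a custom activity that sorts features by multiple
// columns. SDK registration and typing details are omitted; consult the
// Geocortex Workflow SDK documentation for the real scaffolding.
class SortFeaturesByColumns {
  // An activity exposes inputs, outputs, and an execute method.
  //   inputs.features: array of { attributes: {...} }
  //   inputs.columns:  array of { field: string, descending?: boolean }
  execute(inputs) {
    const { features, columns } = inputs;
    const sorted = features.slice().sort((a, b) => {
      for (const col of columns) {
        const av = a.attributes[col.field];
        const bv = b.attributes[col.field];
        if (av === bv) continue;            // tie: fall through to next column
        const order = av < bv ? -1 : 1;
        return col.descending ? -order : order;
      }
      return 0;                             // equal on all columns
    });
    return { sortedFeatures: sorted };      // output argument
  }
}
```

Once packaged, a workflow author drags this onto the design surface like any pre-built activity - which is exactly why bundling the repetitive sort logic here beats chaining single-column sort activities across multiple workflows.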
Set a standard
Unless your organization follows strict guidelines for building custom apps and widgets, there is always the risk that developers will use different patterns and approaches to develop custom code. This makes it difficult for others to maintain or update the code; it can be a bit like the wild west.
This can be mitigated with Geocortex Workflow 5’s custom activities. All activities have the same, simple signature of inputs, outputs, and an execute method. Following the activity-based pattern ensures you have a standard practice for building custom logic.
With activities, you are implementing a unit of work rather than a large, rigid solution. This promotes reusability and your code will be easier to write, interpret, test, and maintain. Any developer will be able to pick up your custom activities and understand how to work with them.
You can also control how custom activities are presented to other users in the browser-based Geocortex Workflow Designer. They can be configured to look like the existing OOTB activities, helping ensure a consistent pattern across your apps.
Custom activities in Web AppBuilder for ArcGIS®
At Latitude Geographics, we’ve always built complementary technology to help our customers accomplish even more with Esri’s ArcGIS platform. With Geocortex Workflow 5, we’ve taken this to a new level by allowing you to build workflows that run inside Web AppBuilder for ArcGIS.
If you’re using Web AppBuilder for ArcGIS, creating custom activities with Geocortex Workflow 5 is still the preferred alternative to writing a bunch of custom widgets. Initial deployment will require a similar amount of effort, but ongoing maintenance and modifications of custom activities require significantly less time (and pain!).
If you write a custom widget for Web AppBuilder for ArcGIS and want to deploy it to multiple apps, you need to edit the source code in all the applications using that widget each time a modification is required. With Geocortex Workflow 5, the custom code is packaged in an activity, and you only need to modify the source activity for changes to be applied across all your applications.
You can access the SDK in our Documentation Center. Just look for the .zip file that contains all the necessary instructions you need to get started.
Let us know how it goes
As you get going with the new SDK, we want to hear your feedback. If you have questions, comments, or concerns, please get in touch with us to let us know.
We’d also love it if you share what you’re building with us and other users in the Geocortex Workflow Community. This is a great place to connect with other users - everyone benefits from sharing tips, tricks, and sample workflows.
Our Product Development team has been hard at work for the past few months, adding functionality for GIS professionals to work more intuitively with data in tables (after all, your data is the most important aspect of your apps) and bringing in editing functionality that has typically been reserved for desktop GIS tools.
One of these features is a redesigned results table that dramatically improves performance, introduces infinite scrolling for hundreds (or even thousands) of records, and allows you to perform actions on specific results or sets of results. Geocortex Essentials users also benefit from advanced editing tools that allow you to cut and union geometries while maintaining attribute integrity.
Geocortex Essentials 4.9 (alongside Geocortex Viewer for HTML5 2.10 and Geocortex Mobile Framework 2.3.1) are now available for download. You can learn more about the latest releases on the product release page, or download the new versions in the Geocortex Support Center.
Additional support for Geocortex Workflow 5
We’ve continued to add support for Geocortex Workflow 5 to deploy offline workflows, dynamic forms, and new activities in Geocortex Essentials applications.
Geocortex Workflow 5 is our newest product that allows you to extend Geocortex and Web AppBuilder for ArcGIS® applications by turning complex business processes into simple, guided end-user interactions. Instead of writing thousands of lines of code to meet custom requirements, choose from more than 150 pre-built activities that chain together to automate almost any task.
Building upon Geocortex Essentials’ popular workflow capabilities, Geocortex Workflow 5 is also offered as a standalone product and enables you to build workflows right in your browser.
Geocortex Workflow 5 provides the ability to take workflows offline to support field operations. You can download workflows and run them in your mobile viewers, keeping field workers productive… even in areas with no network connectivity.
For example, a municipal field engineer may need to perform fire hydrant inspections throughout a rural neighborhood. With Geocortex Workflow 5, the inspection application and corresponding workflow can be downloaded right to the worker’s tablet at the beginning of the day, and the assigned inspections can be completed without any network connection.
When the day is done and the tablet has returned to network connectivity, the field worker can sync the inspection data back to the database. This ensures that the field worker can complete inspections regardless of where they are, and that the data is readily available to the people in the office who need it.
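The capture-offline, sync-later pattern described above can be sketched as a simple local queue. This is only an illustration of the concept - the inspection records and the `upload` callback are hypothetical stand-ins for the real mobile viewer and database sync.

```python
# Sketch of offline data capture: inspections queue locally and are
# pushed to the server only once connectivity returns. The `upload`
# callback is a hypothetical stand-in for the real sync endpoint.

class OfflineInspectionQueue:
    def __init__(self, upload):
        self._upload = upload      # callable that sends one record
        self._pending = []         # records captured while offline

    def record(self, inspection):
        """Always store locally first, so no work is lost offline."""
        self._pending.append(inspection)

    def sync(self):
        """Flush pending records once the device is back online."""
        sent = []
        while self._pending:
            record = self._pending.pop(0)
            self._upload(record)
            sent.append(record)
        return sent

uploaded = []
queue = OfflineInspectionQueue(upload=uploaded.append)
queue.record({"hydrant": "H-101", "status": "pass"})
queue.record({"hydrant": "H-102", "status": "needs repair"})
queue.sync()
```

The key design choice is that every record lands in local storage first; the sync step is purely a flush, so a dropped connection mid-day never loses an inspection.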
Explore Geocortex Workflow 5
If you’d like to learn how you can take your business processes offline, or extend applications you’ve built with Web AppBuilder for ArcGIS, you can get a feel for the product in the Geocortex Workflow 5 Discovery Center. Here you’ll find demonstrations of different deployment scenarios, tutorials, and sample applications.
Or, if you’re ready to get started with Geocortex Workflow 5 you can sign up for a free developer license here.
Over the past few weeks, I’ve shared foundational knowledge about how data is stored and managed in the pipeline industry. My first post introduced ArcGIS Pipeline Referencing (APR) and explained some options operators have in adopting it. My second post worked to demystify the confusing world of pipeline data models (there’s a lot to consider).
In this post, I will outline important information you need to consider when choosing a data model for your organization, including:
Limitations of current data models;
How APR is addressing these limitations; and
Questions you should ask yourself to help assess the best data model for your organization (should you choose to move to APR).
Limitations of Existing Models
A data model is defined as: “An abstract model that organizes elements of data, and standardizes how they relate to one another and to properties of real world entities.”
In the pipeline space, real world entities include not only the pipe and valves that make up the pipeline system, but all of the things that happen to and around the pipe. Things such as repairs & replacements, surveys & patrols, one-call, cathodic protection, ILI & CIS assessments, HCA & class location, land ownership & right-of-ways, and crossing information all have components that, in one form or another, need to be captured in your GIS.
The differing needs of these complex data representations expose limitations in legacy systems. And in a world where critical business decisions must be made from this data, identifying limitations and addressing them is an important step as we move to next-generation solutions.
Limitation #1: Data volume
As the years have progressed, the operational and regulatory needs surrounding pipelines have increased. These needs are driving new levels of inspections and analyses on pipeline systems - resulting in more data, both in terms of volume and complexity. The legacy systems were simply not designed to handle the volume of data current programs produce.
An example is the case of Inline Inspection (ILI) and Close Interval Survey (CIS) data. A single ILI or CIS inspection results in hundreds of thousands of records. With assessment intervals recurring every 1-7 years -- and operators performing dozens of inspections each year -- the resulting records from these inspections alone add millions of records to the database. This doesn’t include follow-up inspections, digs, and run comparison activities.
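Those volumes compound quickly. The back-of-the-envelope arithmetic below uses illustrative assumptions (not measurements from any specific operator) just to show the order of magnitude involved.

```python
# Back-of-the-envelope record growth from recurring ILI/CIS programs.
# All figures are illustrative assumptions, not operator data.

records_per_inspection = 200_000   # "hundreds of thousands" per run
inspections_per_year = 24          # "dozens" of inspections annually
years = 5                          # one assessment cycle

total_records = records_per_inspection * inspections_per_year * years
print(f"{total_records:,} records")  # 24,000,000 records
```

Even with conservative inputs, a single assessment cycle lands tens of millions of rows in the system of record - before follow-up digs and run comparisons are counted.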
When you couple the sheer volume of records with complexities surrounding data management and the need to provide a high-performance environment, limitations in the system are quickly exposed. These limitations force operators to make difficult data storage decisions, often choosing to remove subsets of data from the system of record. This is sub-optimal to say the least; it significantly impacts your ability to view and analyze important trends in the data.
Limitation #2: Engineering Stationing
Engineering stationing is important, complex, and deeply rooted in pipeline data management. Before modern GIS, operators kept track of the location of pipelines and associated appurtenances using engineering stationing on paper or mylar sheets. With the vast majority of in-service pipelines constructed before modern GIS technology existed, large volumes of data were referenced with this approach.
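For readers unfamiliar with the convention: a station like "12+34.56" conventionally means 1,234.56 feet along the alignment (the number before the "+" counts 100-foot stations). A minimal conversion between station notation and a continuous measure might look like the sketch below; the helper names are hypothetical.

```python
# Convert between engineering stationing notation (e.g. "12+34.56",
# i.e. 1,234.56 ft along the alignment) and a continuous measure.
# Helper names are illustrative, not from any particular toolset.

def station_to_feet(station):
    """'12+34.56' -> 1234.56"""
    hundreds, plus = station.split("+")
    return int(hundreds) * 100 + float(plus)

def feet_to_station(feet):
    """1234.56 -> '12+34.56'"""
    hundreds, remainder = divmod(feet, 100)
    return f"{int(hundreds)}+{remainder:05.2f}"
```

The conversion itself is trivial; the burden the post describes comes from maintaining station equations, re-stationing after replacements, and keeping legacy records aligned with it.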
Engineering stationing doesn’t benefit all operators; however, companies that manage gathering and distribution assets find this method burdensome … and dare I say unnecessary?
When traditional data models were developed, the need to adhere to legacy engineering stationing outweighed the need to redesign the entire system to favor a spatial-first approach. But as technology has improved, and more users have embraced spatial data, new methods to blend modern (spatial-first) and legacy (stationing-first) models have emerged. Operators need this flexibility when managing their assets.
Limitation #3: Native support for the Esri Platform
The emergence of the Pipeline Open Data Standard (PODS) represents the last major shift in data storage solutions for pipelines, and it happened nearly 20 years ago. At that time, the GIS landscape was both immature and fragmented. As a by-product, PODS was designed specifically to be GIS-agnostic. In the nearly two decades since, Esri has emerged as the predominant provider of spatial data management, and they have developed a suite of solutions that enable stronger collection, management, and analysis of data.
Chances are your organization embraces Esri for spatial data management and content dissemination, which raises the question: “If your organization has standardized on Esri technology, does it make sense to employ a data structure that does not natively support the environment?” (Hint: probably not.)
Addressing and Improving Limitations
The core of APR has been engineered to address important limitations currently felt due to the existing designs of PODS and APDM. APR directly addresses the three limitations described above.
Improvement #1: Data volume
APR has been engineered to support large datasets and time-aware data, and to offload data storage to other systems where needed - handling high volumes of data more efficiently, with a focus on scalability. To achieve this, configurable rules allow a more fine-grained approach to managing data during routine line maintenance. No longer are implementations limited to keeping the data referenced to the LRS or detaching it entirely.
Changes like these allow operators to keep more data in the system, providing a baseline for more powerful analysis and decision making.
Improvement #2: Engineering Stationing
As explained above, engineering stationing is firmly rooted in pipeline data management, but it’s not required for all operators. New construction, gathering systems, and vertically-integrated distribution companies are finding the rigorous application of stationing to be unnecessary overhead. If your current database repository requires it, and your organization doesn’t rely on it, you are taking on unnecessary data management cycles - costing valuable time and money.
APR not only provides the ability to manage data in stationed and non-stationed methods: its flexibility allows for both stationed and non-stationed lines to exist in the same model. Let that sink in for a bit: Operators that have deployed two separate data management systems can now consolidate the management of these assets! This functionality benefits a majority of the clients I’ve worked with over the years.
Improvement #3: Native support for the Esri Platform
As I stated in my previous post, APR is (possibly most importantly) integrated with the ArcGIS platform. You can perform complex long transactions on your data, analyze it in ways that have not been possible before, keep track of product flow using the Facility Network, and get the data into the hands of your organization with methods that are integrated, fluid, and connected.
Considerations for Implementation
If you’re considering implementing ArcGIS Pipeline Referencing (APR), knowing why, and which data model to use with it, has more to do with your business than with IT -- success can be achieved with either model.
But how do you decide which one is best for your organization? Here are some questions to consider as you’re laying the foundation for your next-generation GIS implementation.
1) Business focus: What segment of the vertical are you in?
If you are a distribution company with transmission assets, the decision is pretty clear: you should chooseUtility and Pipeline Data Model (UPDM). It’s designed as a distribution-first model, allowing you to integrate the management of distribution and transmission assets in a single model.
If your company is ‘upstream’ of distribution, the answer gets a bit trickier. Both models are adequate, but my vote tends to lean towards PODS for a few reasons:
Out of the box, PODS supports APR somewhat more natively than UPDM for operators without distribution assets.
Are you a liquids operator? As UPDM is focused on gas utility and transmission, the PODS model will provide a better solution for those moving liquid products.
PODS is also a thriving community of operators and vendors working together to design a comprehensive model for the industry. This collection of subject matter expertise is invaluable to operators – and provides an opportunity to share your experience with like-minded individuals.
2) Existing model: What are you using now?
As you consider moving to APR, understand that it’s going to require a data migration. The existing system will need to be mapped and loaded into the new solution. If you are currently using PODS and are a gathering, midstream, or transmission company, APR with PODS is probably the best solution to implement. It’s likely that your existing data will migrate more seamlessly, and the model will make more sense to those that manage and interact with the data.
If your organization is primarily gas distribution, and you’ve implemented a PODS model for a small subset of high-pressure assets in the system you manage, consider UPDM. You can take advantage of the intended benefits and consolidate those assets into a common platform.
3) Other programs: ILI, CIS, other survey, Cathodic Protection
If your company has a strong investment in recurring inspections, PODS again rises as the preferred model, especially considering the efforts of the PODS Next Generation initiative around how to efficiently store and process this data moving forward.
With the growing importance of importing and exporting data (due to acquisitions, divestitures, etc.), analysis, and reporting, a system that promotes standard mechanisms to exchange data becomes increasingly important. With the work the PODS organization is putting into a data interchange standard, PODS again rises as the preferred model.
There isn’t just one approach, but there is a best approach for your organization
While this change is beneficial for operators, many things need to be considered before you commit to an approach. I hope my series of posts provides some clarity for you. To stay up-to-date on the data model landscape and the tools surrounding it, I encourage you to follow the PODS association and Esri. The work of these two organizations in the pipeline space is a great thing for our industry.
If you’d like to discuss any of these concepts further, or would like to have a conversation about which model is best for your implementation, please get in touch with me here. The rest of the Energy team at Latitude and I are eager to offer our years of expertise to help you.
Why do we talk about data models so much in the pipeline space? From my years interacting with operators, vendors, and regulators, I believe it comes down to a handful of reasons:
The regulatory landscape;
The need for thorough and accurate data;
Meeting demands of complex implementation architectures;
Maintaining interoperability; and
Aligning with industry standards for linear referencing.
The pipeline data model landscape can be a difficult one to navigate. Operators work with significant amounts of data and there is no shortage of models to explore.
In my last post, I introduced ArcGIS Pipeline Referencing (APR). APR is at the core of an integrated offering from Esri that incorporates a linear referencing system (LRS) into the ArcGIS® platform. APR focuses on the “core” of the LRS, with the data on the line (events) modeled and stored in either the Pipeline Open Data Standard (PODS) Next Generation model or the Utility and Pipeline Data Model (UPDM).
This is a slight deviation from previous approaches, which has introduced some confusion. PODS and UPDM provide database models to organize your pipeline data, while APR provides a set of tools to manage and interact with it inside your GIS. Hopefully the diagram below helps explain it a bit.
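Linear referencing itself is easy to illustrate: an event is stored as a measure (a distance along a route) rather than as raw coordinates, and the geometry is derived on demand. The toy sketch below locates a point event on a simple polyline - it illustrates the concept only, not how APR implements it.

```python
# Toy linear-referencing sketch: locate a point event stored as a
# measure (distance along the route) on a 2-D polyline. This is an
# illustration of the concept, not APR's implementation.

import math

def locate_measure(route, measure):
    """Return the (x, y) on `route` at distance `measure` from its start."""
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if travelled + seg >= measure:
            t = (measure - travelled) / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        travelled += seg
    raise ValueError("measure is beyond the end of the route")

# A valve recorded at measure 150 along a simple two-segment route.
route = [(0, 0), (100, 0), (100, 100)]
valve_xy = locate_measure(route, 150)
```

Because events carry only a route ID and a measure, the database models (PODS or UPDM) stay geometry-light, while the LRS tooling (APR) turns measures into map features when needed.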
With the emergence of Integrated Spatial Analysis Techniques (ISAT), pipeline operators have had methods to store data about their systems and the surrounding environment since at least the early 1990s (some methods probably pre-date that). These methods have continued to develop through the work of the PODS organization and contributors to the ArcGIS Pipeline Data Model (APDM).
Each of these have offered unique benefits to the industry, but they’ve also introduced unneeded fragmentation to the landscape. As I mentioned in my previous post, APR helps simplify this by providing consolidation.
Pipeline Open Data Standard (PODS)
PODS is the data model standard for the pipeline industry. Founded in 1998, PODS was developed to extend legacy models (ISAT), and provide a baseline for software solutions in the industry. Since that time, PODS has established itself as the industry standard.
The success of PODS is rooted in the unique nature of operators and vendors coming together to meet the storage, analysis, regulatory, and reporting needs of the industry. It is important to note that PODS is more than a data model: it’s a group of individuals coming together to discuss, evaluate, and establish the data needs for the industry. The work of the PODS organization extends well beyond how to model a pipeline in a database.
PODS exists predominately in two variations: PODS Relational and PODS Spatial. Both models share a structure and format that adheres to the PODS standards, but differ in how they’re implemented. PODS Relational leverages core relational database standards, and PODS Spatial provides a native implementation for Esri Geodatabases.
Important to note: even though PODS Relational is designed as a GIS-agnostic data model (i.e. it’s not a geodatabase), almost every implementation I have worked with has vendor-developed implementation methods and toolsets that integrate with Esri’s ArcGIS platform.
ArcGIS Pipeline Data Model (APDM)
APDM is the Esri pipeline data model template for pipeline assets. As with all of the models provided by Esri, APDM is a method for operators to access a structured data model, free of charge, in full support of an Esri implementation.
The template nature of APDM differs from the standards designation of PODS by allowing any portion of the base template to be altered to meet implementation requirements. This flexibility is the single biggest deviation from the standards-driven approach of PODS. Another separation between APDM and PODS is that APDM focuses on the features that make up the pipeline network itself, and is not intended to be as encompassing as the PODS models.
With the release of UPDM, APDM has effectively been retired.
Utility and Pipeline Data Model (UPDM)
As mentioned, UPDM has essentially replaced APDM as the recommended Esri pipeline data model. This model provides an implementation foundation for the gas and hazardous liquids industries. UPDM has been designed to work with or without the APR linear referencing component of the ArcGIS platform.
Most importantly, UPDM is the first model released that allows vertically-integrated utilities (gas distribution companies that operate regulated, high-pressure lines) to consolidate database schemas, and centralize data management to a single model.
I hope this post has helped demystify the world of pipeline data models, as there is a lot to consider and it can be difficult to understand.
Next week, I will dive into what you should consider when choosing the best pipeline data model for your operation, including the limitations of different models, how APR is addressing the limitations, and the questions you should be asking yourself.