Showing 7 result(s) for tag: esri

Delivering accessible mapping applications for everyone [Geocortex Tech Tip]

Accessibility has become a top-of-mind topic for businesses, government agencies, and technology developers in recent years. U.S. legislation like Section 508 requires inclusivity for end-users of all abilities, and historic exemptions for web mapping have been eliminated.

Since 2015, Geocortex Viewer for HTML5 has been accessible out-of-the-box and meets the criteria for Web Content Accessibility Guidelines (WCAG) AA compliance, without requiring administrators to undertake complicated and onerous configuration or development. In this week’s Tech Tip, Garrett takes a closer look at screen reader support, keyboard navigation, and other accessibility features that ship with Geocortex Viewer for HTML5.

Watch on YouTube


Video Transcription

“Hi, my name is Garrett. I work on the Product Experience Design team here at Geocortex. Today we’re going to take a look at some of the accessibility features included in our viewers. You don’t need to do anything to configure these – they’re all included out-of-the-box with every viewer implementation.

Geocortex Viewer for HTML5 is accessible and meets WCAG AA standards. This has taken a lot of work on our part to look at many different things, from color contrast to screen reader support to keyboard navigation.

Let me show you how keyboard navigation works in [Geocortex Viewer for HTML5]. First, we have skip links. When you first come to our viewer and use the tab key to navigate throughout the application, the first time you hit “tab” you’ll be presented with what we call skip links. This gives you quick shortcuts to jump to popular portions of our viewer.

The skip links allow you to jump to other regions in our application without having to tab through each individual, clickable item. If we wanted to jump straight to the toolbar, we just tab over and hit Enter. Now once the toolbar is open, we can navigate through the different tabs on the toolbar to find an individual tool that we want to use.

Let’s try drawing a polygon on the map. When we activate the drawing tools with the keyboard, we enter “accessibility drawing mode”. Once we’ve activated the polygon drawing tool, our focus is on the map, as indicated by the purple line around the map. We can now draw on the map using the keyboard.

Hitting Enter will drop a marker in the center of the map extent, and we can control its position using the arrow keys on the keyboard. Hitting Enter will drop a vertex on the map, from which we can move our cursor around with the keyboard. Hitting Enter again will drop another vertex. If you find that the increments the keyboard moves in are too large, you can hold the Alt key for more fine-grained control over where you drop the vertex.

Hitting Enter again will complete the shape, and now we can edit it using the keyboard shortcut “V”, which cycles through all the vertices so we can move them. Shift+V cycles the vertices in reverse order. Between each vertex, another handle gets added that we can drag out to edit the shape, which in turn adds more handles we can edit. When we’re done editing, hit Enter again to finalize the shape. And now your shape is drawn on the map. If you hit Enter again, you can draw a second shape. And that’s how you draw on the map with the keyboard.

Another great accessibility feature in our viewers is screen reader support. Geocortex Viewer for HTML5 supports the combination of Firefox with the NVDA screen reader. The screen reader will read aloud changes in the application, links, text, map location, those sorts of things.

In combination with the keyboard support, we can navigate through the viewer while visually impaired users have the benefit of a screen reader reading out context and instructions to them. Let’s try a couple of examples here.

[Screen reader reading results]

Now we know that we can perform a search, because the screen reader has read out those instructions for us. So, let’s perform a simple search.

[Screen reader reading results]

After we perform the search, the screen reader read out that we’ve closed the home panel and opened the search results panel. We can tab through here to hear other instructions.

[Screen reader reading results]

As you could hear, as we zoomed to all the features in the feature collection, the screen reader read out the coordinate and extent changes on the map to keep users oriented to where the map is now located.

To learn more about accessibility with our viewers, you can visit our Documentation Center at docs.geocortex.com. Just search for “accessibility” and you can read all about accessibility in our viewers, including a detailed list of all the keyboard shortcuts to help you navigate through applications with just the keyboard.”

You can learn more about Geocortex Essentials accessibility features in our 2017 webinar, which is available on YouTube here.


How to search for data in a non-spatial database [Geocortex Tech Tip]

Maps allow you to visualize data in meaningful ways and expose patterns that can’t be seen anywhere else. One challenge, though, is that your most important business data often lives in another system or database – and it becomes even more challenging when that data is stored outside your geodatabase.

In this Geocortex Tech Tip, Drew Millen shows you how to search for data in a non-spatial database (such as Oracle or SQL), find the spatial relationship, and display it on a map. 

Watch on YouTube


Video Transcription

“Hi everybody, I’m Drew with Latitude and in this Tech Tip we’re going to look at searching for non-spatial data. That’s data stored in Oracle or SQL Server… somewhere that’s not in your geodatabase. We’re going to look for that, find the spatial relationship, and display it on a map, so let’s see how we do that with Geocortex.

What we’re looking at here is a very basic Geocortex viewer application that’s been configured with a single layer called “Land Use”. This contains polygons of different types of land uses, and what I’m interested in is this “Arts and Recreation” land use polygon, which contains park information for Los Angeles County. I also have a database table – in this case, an Excel spreadsheet of trees. Notice that I’ve got records of all the different types of trees that exist, but I don’t have location information for them. In other words, this is a non-spatial database table. It could live in Oracle or SQL Server, but for the sake of this demonstration it’s just an Excel table.

We’ve got a facility that tells us which park this tree belongs to, but we still don’t have its “XY” location on the map. What I want to find out is where I can find certain trees in my county, so, what parks do I have to visit to discover certain types of trees.

Now in this application, I’ve got a data link between my parks layer – my land use layer – and the tree database. So, if I have a park selected and I view the additional details for [the park], I can see the spatial details associated with that park and the trees that are within it, but I’m not quite there. What I want to find out is which parks contain which trees... and remember, my trees don’t have “XY” locations.

How do I solve this? Well, I’ve already set it up so that we can do a search against this Excel table. So, if I do a search for the word “macadamia”, for example, I will find search results from that Excel table, but I still don’t have the location on the map where these macadamia nut trees exist. I need to create a “join” between these search results and a spatial layer on the map to find the underlying spatial feature. In other words, the park that the trees live within.
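
Conceptually, the join works like the minimal TypeScript sketch below. The record shapes and field names (facility, name) are hypothetical; in the product, the join is configured in Geocortex Essentials Manager rather than coded by hand.

```typescript
// Hypothetical shapes -- illustrative only.
interface TreeRecord {
  commonName: string;
  facility: string; // name of the park the tree belongs to; no XY location
}

interface ParkFeature {
  name: string;     // land use polygon name, matching TreeRecord.facility
  geometry: object; // the polygon we can actually display on the map
}

// Given non-spatial search results, find the park polygons they live in.
function joinTreesToParks(trees: TreeRecord[], parks: ParkFeature[]): ParkFeature[] {
  const facilities = new Set(trees.map((t) => t.facility));
  return parks.filter((p) => facilities.has(p.name));
}
```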

What I can do is come back to Geocortex Essentials Manager where I’ve configured this application. And to connect to this Excel spreadsheet, I’ve established a data connection. You can see the connection string that we’ve used here simply points to the spreadsheet. If you’re connecting to Oracle or SQL Server, there’s different syntax that you would use for your connection string, but the same idea exists.
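
For reference, connection strings along these lines are typical. The paths and server names below are placeholders, not values from the demo; consult your provider’s documentation for the exact syntax.

```typescript
// Excel via the ACE OLE DB provider (illustrative path):
const excelConnection =
  'Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\data\\trees.xlsx;' +
  'Extended Properties="Excel 12.0 Xml;HDR=YES"';

// SQL Server (ADO.NET style, placeholder names):
const sqlServerConnection =
  "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True";

// Oracle (ODP.NET style, placeholder names):
const oracleConnection =
  "Data Source=myTnsAlias;User Id=myUser;Password=myPassword";
```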

Now that we have that data connection, we can set up what we call a “Search Table.” And a search table gives us a select clause: in other words, which fields are we interested in returning from that table when the user issues a search. In this case, we want the user to be able to search on the common name of the tree (like my example when I typed in the keyword “macadamia”) and find all the attributes from the LA Parks trees in this database. So that search is set up.

I’ve also got the land-use layer in my site configured with a datalink. This datalink means the layer is joined to the data connection, so every time I click on a park on the map, I see the associated records from my Excel spreadsheet. Recall, however, that I want to do the reverse. Our current datalink makes sure that every time I select a park on the map, I’m grabbing the trees and joining them on the facility column. Note that “facility” is the name of the column in the spreadsheet that represents the park each tree exists within.

There’s this section down at the bottom, here, that allows me to add a search, so that’s the reverse of what we’re currently doing, and it allows me to use one of the searches that I’ve configured to find these features from the land use layer that match my search criteria from my datalink.

I’m going to give this a display name. We’ll just use “Park Trees Search,” and the search table that I’m searching on is the only one that we’ve configured in our site earlier, so it’s this Park Trees Search table. And then the field that we want to join is called “Facility,” and that maps to the name of the land use polygon. So that’s where we get our many-to-one relationship from. I’ll go ahead and save the site with those configuration changes and then refresh our viewer.

Now I’m going to issue a search for the word “macadamia” like I did before, and I’ll find the same four results from my Excel spreadsheet. But now when I drill into a result, we can see the facility that it belongs to. It exists in two different parks: there’s “Runyon Canyon Park” and “Runyon Canyon Park – MRCA”. If I click on one of those it’s going to take me to the park where I can discover these macadamia trees.

Hopefully this quick Tech Tip has shown you how you can configure a non-spatial data source to be searchable inside your viewer and still return spatial results. Thanks for watching!”

Explore more Geocortex Essentials functionality in the Discovery Center.

Discover Geocortex


Configuring Geocortex Analytics to monitor a new Portal for ArcGIS instance [Geocortex Tech Tip]

Geocortex Analytics helps you get a complete picture of your GIS infrastructure; you can ensure peak performance, keep your users happy, and avoid interruptions. For many of us, Portal for ArcGIS is a critical piece of the GIS environment, and one that we want to monitor.

In this Geocortex Tech Tip, Aaron Oxley shows you how to configure Geocortex Analytics to monitor a new Portal for ArcGIS instance.

Watch on YouTube


Video Transcript

“Hi, my name is Aaron Oxley. I’m a Product Support Analyst at Latitude Geographics and in this video I’ll be explaining how to configure Geocortex Analytics to monitor your Portal for ArcGIS.

Once you’re logged in and looking at the summary page in your Geocortex Analytics reports, click the “configuration” link in the top right corner. That takes us to the configuration overview page, where we can see that to add a new resource, we need to click “add resource” at the bottom of the resource list. Let’s go there.

Portal for ArcGIS is what we’re after. As you can see, there’s not a lot of configuration required. The first thing we need is a name; this is what will show up in reports, alarm emails, and texts. I like to use the name of the server where the portal is hosted. Even if you only have one portal, it’s a good naming convention in case your environment grows in the future.

In the next field, you’ll need the URL to your Portal for ArcGIS. You can see there’s an example here; the default URL is “servername/arcgis”, but if you aren’t sure about what to put here you can confirm the correct URL by testing in a browser.

I’d like to do that, so let’s open a new tab and load up our portal. We can see this is our portal, so we know the URL is correct. Let’s copy it and take note of the protocol. We can see here that it is HTTPS. We’ll paste the URL and toggle the protocol field to HTTPS.

Now lastly, because your Portal for ArcGIS is secured, you need to enter credentials, and they need to be from an administrator account. There are five options here. The first one is token, and if your portal uses token authentication it’s very straightforward: just enter a username and password for an administrator and click save.

The next option – OAuth2 – is certainly the most common, and it’s also Esri’s recommended methodology for user sign-in. We see a message here that we’re going to need an app that has this redirect URI. We’re also going to need an app ID and an app secret, and lastly, we see a message letting us know that we’ll still need to provide administrative credentials.

So, let’s go and get this app created: come over to your portal and click “content” at the top. Under “my content”, click “add item” and select an application. In here, we’ll select “application” again and enter a title and some tags. Clicking “add item” will create our application, and we can see it there.

Under the “settings” tab, near the bottom, there’s a “registered info” button. If we click that button, and then click the “update” button, we can enter a redirect URI. If you remember from the configuration page here, we have the redirect URI specified. We can copy that and paste it in here. Click the “add” button and it shows up in the list below; click the “update” button and it’s all set.

Now lastly, before we go back to the configuration in Geocortex Analytics, we need the app ID and app secret. Go ahead and copy the app ID, paste into the corresponding field in Geocortex Analytics. Same thing with the app secret. That’s all there is to it. We can now click “save” and we should be prompted to sign into ArcGIS Enterprise.
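
Geocortex Analytics performs the sign-in for you, but for the curious, an app ID and secret enable a standard ArcGIS OAuth2 client-credentials exchange roughly like the sketch below. The portal URL and credentials are placeholders.

```typescript
// Exchange an app ID and secret for an access token against Portal's
// standard OAuth2 endpoint. Placeholder values throughout.
async function getPortalToken(
  portalUrl: string, // e.g. "https://myserver.mydomain.com/portal"
  appId: string,
  appSecret: string
): Promise<string> {
  const response = await fetch(`${portalUrl}/sharing/rest/oauth2/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: appId,
      client_secret: appSecret,
      grant_type: "client_credentials",
      f: "json",
    }),
  });
  const json = await response.json();
  if (json.error) throw new Error(json.error.message);
  return json.access_token; // pass as ?token=... on subsequent REST requests
}
```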

So, this has now taken us to Portal for ArcGIS. These are administrative credentials for the portal, so this is an account that has administrator access. Clicking “Sign in” brings up “Save Successful”, and we can see that it was saved successfully.

The third option for authentication types is good old Windows authentication, and it really is as simple as entering username, password, domain, and clicking save. As with the other types, this does need to have full administrative access.

And the last two options are just combinations of the previous three, in case your portal is configured with two layers of security. The procedure is the same: just follow the same steps as for the options above. And that’s all there is to it.

Once again, my name is Aaron Oxley, I hope this video was helpful. Thanks for watching!”

To learn more about how Geocortex Analytics can help you get a better understanding of the performance of your GIS, please get in touch and we'd be happy to take you on a tour of the product.


Running Geocortex Essentials workflows from an identify operation [Geocortex Tech Tip]

The Geocortex Essentials identify operation allows you to draw a geometry on the map, and have the application return a collection of features that intersect that geometry. But the identify operation will only return results from your GIS layers, and many (likely most) of us integrate our GIS with various 3rd party business systems, such as asset management, document management, ERP, and business intelligence.

In this week’s Tech Tip, Drew Millen will show you how to invoke a Geocortex Essentials workflow from an identify operation to return non-GIS results. Perhaps you want to see documents in your document management system displayed on the map, or geo-located tweets for a specific area. Kicking off a workflow from the identify operation will allow you to display these types of results and will help you avoid writing a ton of custom code to do so.

Watch on YouTube

Video Transcription

“Hi, I’m Drew Millen, Director of Products at Latitude. In this short Tech Tip video, we’re going to talk about workflows; specifically how you can make Geocortex Essentials workflows run in Geocortex Viewer for HTML5 when you perform an identify operation. Let’s dive in.

I’m going to show you how to use identify workflows, which are Geocortex Essentials workflows that piggyback on top of the identify functionality. In this site, I’ve got the default identify behavior working, so when I perform an identify [operation], I’m going to find cities on top of this map, and I want to run a workflow every time I perform an identify as well.

Let’s look at the configuration file that supplies the configurations for this viewer. There’s a module in here called “identify”, and we want to configure the identify behavior. This [view you’re seeing] is the desktop.json.js file that configures the viewer we were just looking at. Notice that the identify module has a section called “identify providers”. It’s here that we specify which logic will run when an identify is performed by a user: for example, querying a graphics layer, or querying the map itself. And down here, I’ve added a workflow identify provider. I’ve also supplied some configuration to this identify provider, so it’s looking for workflows in my site with the suffix “_identify”. Any workflow I’ve added with this suffix will be run by this workflow identify provider.
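
The exact schema isn’t shown in the video, but the configuration being described looks roughly like this sketch. The property names are paraphrased from the narration, not copied from the product’s documented schema.

```typescript
// Sketch of the relevant fragment of desktop.json.js -- names are illustrative.
const identifyModule = {
  moduleName: "Identify",
  identifyProviders: [
    { name: "GraphicsLayerIdentifyProvider" }, // query a graphics layer
    { name: "MapServiceIdentifyProvider" },    // query the map itself
    {
      name: "WorkflowIdentifyProvider",
      // run every site workflow whose name ends with this suffix
      workflowSuffix: "_identify",
    },
  ],
};
```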

With that in place, let’s author our workflow. I’m going to open the Geocortex Essentials Workflow Designer. If you go into the “file” menu and click on “new”, you’ll see that we’ve provided a template for creating identify workflows. This template supplies a basic example to help you get started. If you look at the arguments, an identify workflow is expecting a geometry as an input argument. That geometry comes from the identify the user performs. It’s also expecting a unique identifier just for some bookkeeping. We can just ignore that property.

The other properties are output arguments – things that your workflow must supply. For example, the feature set that’s returned from your query, the display name for that collection of features, and any aliases and formats that you want to use for the features that come back. In this example, we simply query a sample layer from ArcGIS Online that looks at [US] states. The geometry from the identify operation is passed in as a parameter to perform that identify. We set the display name to “states” and we supply some aliases for the fields that are going to come back, making them readable for the user. And we supply some format strings for features that are going to be displayed in the map tips and results list.
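
Expressed as a contract, the template looks something like the following. The real workflow is authored as XAML in Workflow Designer; this TypeScript is only a conceptual sketch with illustrative names and a placeholder layer URL.

```typescript
// Input arguments supplied by the identify operation.
interface IdentifyWorkflowInputs {
  geometry: object;  // the geometry the user drew
  requestId: string; // bookkeeping identifier -- safe to ignore
}

// Output arguments the workflow must supply.
interface IdentifyWorkflowOutputs {
  featureSet: object;              // features returned by the query
  displayName: string;             // e.g. "States"
  aliases: Record<string, string>; // field name -> readable label
  formats: Record<string, string>; // field name -> format string
}

// Hypothetical helper standing in for the workflow's query activity.
declare function queryLayer(layerUrl: string, geometry: object): Promise<object>;

async function statesIdentify(inputs: IdentifyWorkflowInputs): Promise<IdentifyWorkflowOutputs> {
  // Placeholder URL -- substitute the sample states layer used in the demo.
  const featureSet = await queryLayer(
    "https://example.com/arcgis/rest/services/States/FeatureServer/0",
    inputs.geometry
  );
  return {
    featureSet,
    displayName: "States",
    aliases: { STATE_NAME: "State" },
    formats: { POP2000: "{0:n0}" },
  };
}
```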

With this workflow developed, we [now need to] attach it to our site so that it can be run every time we perform an identify operation. Let’s look at this app in Geocortex Essentials Manager, and I’ll navigate down to the workflows tab where I want to attach the workflow we were just looking at. Recall that it must have an “_identify” suffix to be picked up by my workflow identify provider, so I’ll give it the name “helloworld_identify”. Now it’s looking for the URL or URI of the workflow I just authored. It’s currently stored on my workstation as “helloworld_identify.xaml”, so I’m going to browse for it, go into the directory we have for this site, upload it into a folder I created called “resources”, and select it.

Now Geocortex Essentials Manager is smart enough to know that this workflow has parameters, so I’m being prompted to supply them here. Because the geometry and unique identifier are going to be supplied by the identify operation, we don’t need to supply them here.

The workflow is now attached to my site, so I’ll go ahead and save it. Let’s refresh the viewer and see the resulting behavior. I’m going to run an identify again, which will identify the cities, but it should also run my workflow and grab the states. Here we can see the result of my states workflow populating the list of results that I expected.

To view a more sophisticated example, we’ve also done the same thing by integrating a workflow that uses the Twitter API to find tweets within a geographic area. In this case, I’m going to perform an identify at the San Francisco airport and discover all the tweets that have been added in this area in the last hour. This is a more sophisticated example of using an identify workflow in a Geocortex Viewer for HTML5 application. To learn more, please get in touch. Thanks for watching!”

Want to learn more about Geocortex Essentials? Visit our Discovery Center to take it for a spin and explore real-world examples of how Geocortex Essentials helps organizations address common (and not-so-common) business challenges.

Visit the Discovery Center



GIS Health Assessment: A new way to think about your system

When we think about the health of our GIS, many of us are used to beginning with the infrastructure. After all, it is what drives the technical performance of your system. The problem with starting at the infrastructure level, though, is that it’s difficult to get a complete picture of which aspects of the GIS are most important to your users.

While the GIS infrastructure is extremely important, not all the resources in your environment are created equal. You might have some layers or services that are used 3-4 times a week, and others that are accessed thousands of times each week. While you want to do all you can to ensure the entire system is performing as it should, there are only so many hours in a day. With an infrastructure-first approach, you’re often unable to home in on which apps, layers, services, servers, and ArcGIS instances are most important.


A new way to think about GIS health

It’s time we flip the traditional infrastructure-first approach and begin thinking about GIS health through the lens of end-user productivity. Your GIS is there to help your users do their jobs, so that’s where your analysis should start.

Whether explicitly or implicitly, you’re going to be measured on the productivity of the users you build apps for, not the response time of a specific server. Without the users, there is no need for the GIS infrastructure.

By starting with what your users are trying to accomplish, you’ll be able to map your key business processes and user flows to the GIS infrastructure and resources that are most important to supporting them. Looking at your GIS from users’ perspectives allows you to better understand how it is being used day-to-day and identify the critical resources needed to support your monitoring and optimization efforts.

With so many moving pieces in your GIS, you don’t have time to treat everything equally.  Focusing your efforts will let you be much more productive and spend more time working on high-value activities.

When we talk about a user-first approach to GIS health, there are two major areas that you need to be considering:

Performance: While closely tied to infrastructure performance, what we mean here is the performance of your end-users. Are they able to do their jobs effectively with the tools and applications you’re building? Are your users taking longer than expected to complete certain tasks?

When these issues crop up, a user-first approach will help you target your efforts and fix them more quickly. A good example is an application with a poorly performing layer. This is an infrastructure performance issue, but if you understand which layers and services that application uses, you’ll know where to look to address it.

Usability: If your GIS infrastructure is performing as expected, the next area to examine is usability. Usability is all about whether your applications are configured and designed in a way that makes sense for what your users need to do. Strong infrastructure performance combined with poor usability is still poor performance (remember, performance is about end-user performance, not infrastructure).

An example of how usability can affect performance is when a common tool is not in a prominent location in your app. If it’s difficult to find, users will waste time looking for it, take longer to complete a task by using a different method, or abandon it entirely. This is also true when incorrect layers are loaded by default – users end up wasting time searching for the layers they need.

Completing a user-first health assessment

Once you’ve adopted a user-first approach to GIS health, you’re ready to perform a user-first health assessment. The goal is to map the business processes and use cases you manage with your GIS to the specific GIS resources that support them.

First, you’ll want to identify the different user groups that leverage your GIS. By user group, we mean a group of users that have common work patterns and engagement with your applications. This could be a group of people (or one person) with the same task in your head office, or it could be a specific field crew that uses an app on a tablet. The key here is to identify people who use the GIS in similar ways.

We’ve created a checklist to help you perform a health assessment; it’ll help you map what your different user groups need to accomplish to the GIS resources and infrastructure required to support their work.

The checklist contains areas to detail the users and what they need to do, the app(s) they use, the most-used layers, the services the app(s) consume, which ArcGIS products are used and how they’re configured, and the server(s) that support it.

Get your GIS health assessment checklist now  

What to do with your health assessment

Once you’ve completed your GIS health assessment, you can use the information you’ve gathered to proactively monitor the GIS resources that are the most important. Tools like Geocortex Analytics allow you to configure personalized dashboards that provide a snapshot of the resources you want to monitor.

You can also configure alarms and notifications in some systems monitoring tools. Because you know what you need to monitor, you can set thresholds for warning signs of potential issues and have notifications sent to your email.

Next, identify anomalies in your use patterns. If certain users are performing notably better or worse than average, you can dive into the specifics of how those individuals use the applications you’ve built. Replicate the superior use patterns, and examine the weaker ones to gauge whether there is a gap in training or understanding of certain functions.

If you want to learn how all of this is possible with Geocortex Analytics, we’d like the chance to show you! We’ve recently added great new features (including individual user reporting) and made significant improvements to performance and reliability. Get in touch with us using the button below.

Let's chat


Build custom activities with the new SDK for Geocortex Workflow 5

It’s now been a few months since we officially launched Geocortex Workflow 5, and it’s great to see our users building some innovative apps with Geocortex and Web AppBuilder for ArcGIS®!

One thing that we’ve been hearing, though, is that developers want the ability to apply their own code in the workflows they’re building.

As of version 5.2 (released a few weeks ago), Geocortex Workflow 5 now offers a software development kit (SDK) for building custom workflow activities. The SDK is TypeScript-based, allowing you to write your own custom code to run in workflows, with your builds producing the JavaScript required to execute the activities at runtime.

So, what are “activities”? In the simplest terms, they’re the building blocks of a workflow - each activity represents a unit of work. Geocortex Workflow 5 offers more than 150 pre-built activities that chain together to automate almost any task. Activities such as geocode, query layer, set the map extent, get user info, calculate distance, buffer geometry, run geoprocessing, and so many more allow you to streamline even the most complex GIS and business tasks.

Flex your development chops and write activities to perform tasks that weren’t previously possible – or were extremely complex to assemble with pre-built activities. You can combine your programming skills with Geocortex Workflow’s intuitive, activity-based design to build powerful applications.

Custom activities can be built for yourself or for others in your organization; even non-developers can work with them as they would any other activity in Geocortex Workflow Designer. And provided that your technology of choice supports the functionality you’re building, custom activities can be consumed in Geocortex and/or Web AppBuilder for ArcGIS applications.

Take Geocortex Workflow 5 even further

While most tasks can be automated with the pre-built, out-of-the-box (OOTB) activities offered with Geocortex Workflow 5, you can now build anything you want with the SDK. Custom integrations, talking to custom web services, connecting with 3rd party APIs, and interfacing with custom code in your existing apps are now all possible.

Here are a few examples of what you can do with custom activities:

  • Perhaps you want to integrate with a 3rd party system like SAP®. While this is possible with pre-built activities, you’ll be manually assembling workflow logic to make the web requests, parse the responses, and execute your business logic. With the latest updates, you can achieve a cleaner, more efficient, and more consumable result by wrapping the logic in a few simple custom activities.
  • Many common tasks are time-consuming to build – maybe you find yourself using the same pattern over and over in one workflow. Instead of repeating that pattern, you can bundle all the logic in a single custom activity. An example is sorting map features by multiple columns: pre-built activities will sort data by one column, but it’s more efficient to write a custom activity that sorts by multiple columns than to link activity after activity – especially if you need to perform the task across multiple applications and workflows (see the sketch after this list).
  • At the more complex end of the spectrum, you can build custom user interfaces using React (a leading JavaScript library for building user interfaces). This is the most challenging to achieve, but if you’re up for the challenge, it provides complete flexibility. If you’re thinking of doing this, we recommend chatting with us beforehand - we want to help make sure you’re on the right path.
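
To make the pattern concrete, here is a minimal sketch of the multiple-column sort mentioned above, written as an activity-style class. The interface and property names are illustrative, not the published SDK surface; see the SDK documentation for the real types.

```typescript
type Feature = { attributes: Record<string, string | number> };

interface SortFeaturesInputs {
  features: Feature[];
  sortFields: string[]; // e.g. ["FACILITY", "COMMON_NAME"]
}

interface SortFeaturesOutputs {
  sorted: Feature[];
}

// The shape every activity follows: typed inputs, typed outputs, one execute method.
export class SortFeaturesByColumns {
  execute(inputs: SortFeaturesInputs): SortFeaturesOutputs {
    const sorted = [...inputs.features].sort((a, b) => {
      for (const field of inputs.sortFields) {
        const cmp = String(a.attributes[field]).localeCompare(
          String(b.attributes[field]),
          undefined,
          { numeric: true } // compare numeric values numerically
        );
        if (cmp !== 0) return cmp; // this column decides the order
        // otherwise tie -- fall through to the next sort field
      }
      return 0; // equal on every sort field
    });
    return { sorted };
  }
}
```

Configured in Workflow Designer, an activity like this chains with pre-built activities just like the out-of-the-box ones, and can be reused across workflows and applications.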

Set a standard

Unless your organization follows strict guidelines for building custom apps and widgets, there is always the risk that developers will use different patterns and approaches to develop custom code. This makes it difficult for others to maintain or update the code; it can be a bit like the wild west.

This can be mitigated with Geocortex Workflow 5’s custom activities. All activities have the same, simple signature of inputs, outputs, and an execute method. Following the activity-based pattern ensures you have a standard practice for building custom logic.

With activities, you are implementing a unit of work rather than a large, rigid solution. This promotes reusability and your code will be easier to write, interpret, test, and maintain. Any developer will be able to pick up your custom activities and understand how to work with them.

You can also control how custom activities are presented to other users in the browser-based Geocortex Workflow Designer. They can be configured to look like the existing OOTB activities, helping ensure a consistent pattern across your apps.

Custom activities in Web AppBuilder for ArcGIS®

At Latitude Geographics, we’ve always built complementary technology to help our customers accomplish even more with Esri’s ArcGIS platform. With Geocortex Workflow 5, we’ve taken this to a new level by allowing you to build workflows that run inside Web AppBuilder for ArcGIS.

If you’re using Web AppBuilder for ArcGIS, creating custom activities with Geocortex Workflow 5 is still the preferred alternative to writing a bunch of custom widgets. Initial deployment will require a similar amount of effort, but ongoing maintenance and modifications of custom activities require significantly less time (and pain!).

If you write a custom widget for Web AppBuilder for ArcGIS and want to deploy it to multiple apps, you need to edit the source code in all the applications using that widget each time a modification is required. With Geocortex Workflow 5, the custom code is packaged in an activity, and you only need to modify the source activity for changes to be applied across all your applications.

Learn more about deploying workflows inside Web AppBuilder for ArcGIS in the Geocortex Workflow Discovery Center.

Start building today

You can access the SDK in our Documentation Center. Just look for the .zip file, which contains all the instructions you need to get started.

Let us know how it goes

As you get going with the new SDK, we want to hear your feedback. If you have questions, comments, or concerns, please get in touch with us to let us know.

We’d also love it if you share what you’re building with us and other users in the Geocortex Workflow Community. This is a great place to connect with other users - everyone benefits from sharing tips, tricks, and sample workflows.

Happy building!



ArcGIS Pipeline Referencing: Choose the best data model

Over the past few weeks, I’ve shared foundational knowledge about how data is stored and managed in the pipeline industry. My first post introduced ArcGIS Pipeline Referencing (APR) and explained some options operators have in adopting it. My second post worked to demystify the confusing world of pipeline data models (there’s a lot to consider).

In this post, I will outline important information you need to consider when choosing a data model for your organization, including: 

  • Limitations of current data models;
  • How APR is addressing these limitations; and
  • Questions you should ask yourself to help assess the best data model for your organization (should you choose to move to APR).

Limitations of Existing Models

A data model is defined as: “An abstract model that organizes elements of data, and standardizes how they relate to one another and to properties of real world entities.”

In the pipeline space, real-world entities include not only the pipe and valves that make up the pipeline system, but all of the things that happen to and around the pipe. Repairs & replacements, surveys & patrols, one-call, cathodic protection, ILI & CIS assessments, HCA & class location, land ownership & rights-of-way, and crossing information all have components that, in one form or another, need to be captured in your GIS.

The differing needs of these complex data representations expose limitations in legacy systems. And in a world where critical business decisions must be made from this data, identifying limitations and addressing them is an important step as we move to next-generation solutions.

Limitation #1: Data volume

As the years have progressed, the operational and regulatory needs surrounding pipelines have increased. These needs are driving new levels of inspections and analyses on pipeline systems - resulting in more data, both in terms of volume and complexity. The legacy systems were simply not designed to handle the volume of data current programs produce.

An example is Inline Inspection (ILI) and Close Interval Survey (CIS) data. A single ILI or CIS inspection results in hundreds of thousands of records. With assessment intervals recurring every one to seven years, and operators performing dozens of inspections each year, these inspections alone add millions of records to the database – and that doesn’t include follow-up inspections, digs, and run-comparison activities.

When you couple the sheer volume of records with complexities surrounding data management and the need to provide a high-performance environment, limitations in the system are quickly exposed. These limitations force operators to make difficult data storage decisions, often choosing to remove subsets of data from the system of record. This is sub-optimal to say the least; it significantly impacts your ability to view and analyze important trends in the data.

Limitation #2: Engineering Stationing

Engineering stationing is important, complex, and deeply rooted in pipeline data management. Before modern GIS, operators kept track of the location of pipelines and associated appurtenances using engineering stationing on paper or mylar sheets. Because the vast majority of pipelines in use today were constructed before modern GIS technology existed, large volumes of data were referenced with this approach.

Engineering stationing doesn’t benefit all operators: companies that manage gathering and distribution assets find this method burdensome … and dare I say unnecessary?

When traditional data models were developed, the need to adhere to legacy engineering stationing outweighed the need to redesign the entire system to favor a spatial-first approach. But as technology has improved, and more users have embraced spatial data, new methods to blend modern (spatial-first) and legacy (stationing-first) models have emerged. Operators need this flexibility when managing their assets.

Limitation #3: Native support for the Esri Platform

The emergence of the Pipeline Open Data Standard (PODS) represents the last major shift in data storage solutions for pipelines, and it happened nearly 20 years ago. At that time, the GIS landscape was both immature and fragmented; as a result, PODS was designed specifically to be GIS-agnostic. In the nearly two decades since, Esri has emerged as the predominant provider of spatial data management and has developed a suite of solutions that enable stronger collection, management, and analysis of data.

Chances are your organization embraces Esri for spatial data management and content dissemination, which raises the question: “If your organization has standardized on Esri technology, does it make sense to employ a data structure that does not natively support that environment?” (Hint: probably not.)

Addressing and Improving Limitations

The core of APR has been engineered to address important limitations in the existing designs of PODS and APDM. APR directly addresses the three limitations described above.

Improvement #1: Data volume

APR has been engineered to handle high volumes of data more efficiently, with a focus on scalability: it supports large datasets and time-aware data, and it can offload the storage of data to other systems. Configurable rules allow a more fine-grained approach to managing data during routine line maintenance; implementations are no longer limited to either keeping the data referenced to the LRS or detaching it.

Changes like these allow operators to keep more data in the system, providing a baseline for more powerful analysis and decision making.

Improvement #2: Engineering Stationing

As explained above, engineering stationing is firmly rooted in pipeline data management, but it’s not required for all operators. New construction, gathering systems, and vertically-integrated distribution companies are finding the rigorous application of stationing to be unnecessary overhead. If your current database repository requires it, and your organization doesn’t rely on it, you are taking on unnecessary data management cycles - costing valuable time and money.

APR not only provides the ability to manage data with stationed and non-stationed methods: its flexibility allows both stationed and non-stationed lines to exist in the same model. Let that sink in for a bit: operators that have deployed two separate data management systems can now consolidate the management of these assets! This functionality benefits a majority of the clients I’ve worked with over the years.

Improvement #3: Native support for the Esri Platform

As I stated in my previous post, APR is (possibly most importantly) integrated with the ArcGIS platform. You can perform complex long transactions on your data, analyze it in ways that have not been possible before, keep track of product flow using the Facility Network, and get the data in the hands of your organization with methods that are integrated, fluid, and connected.

Considerations for Implementation

If you’re considering implementing ArcGIS Pipeline Referencing (APR), knowing why – and which data model to use with it – has more to do with your business than with IT; success can be achieved with either model.

But how do you decide which one is best for your organization?  Here are some questions to consider as you’re laying the foundation for your next-generation GIS implementation.

1) Business focus: What segment of the vertical are you in?

If you are a distribution company with transmission assets, the decision is pretty clear: you should choose Utility and Pipeline Data Model (UPDM). It’s designed as a distribution-first model, allowing you to integrate the management of distribution and transmission assets in a single model.

If your company is ‘upstream’ of distribution, the answer gets a bit trickier. Both models are adequate, but my vote tends to lean towards PODS for a few reasons:

  1. Out of the box, PODS supports APR slightly more natively than UPDM for operators without distribution assets.
  2. Are you a liquids operator? As UPDM is focused on gas utility and transmission, the PODS model will provide a better solution for those moving liquid products.
  3. PODS is a thriving community of operators and vendors working together to design a comprehensive model for the industry. This collection of subject matter expertise is invaluable to operators – and provides an opportunity to share your experience with like-minded individuals.

2) Existing model: What are you using now?

As you consider moving to APR, understand that it’s going to require a data migration. The existing system will need to be mapped and loaded into the new solution. If you are currently using PODS and are a gathering, midstream, or transmission company, APR with PODS is probably the best solution to implement. It’s likely that your existing data will migrate more seamlessly, and the model will make more sense to those that manage and interact with the data.

If your organization is primarily gas distribution, and you’ve implemented a PODS model for a small subset of high-pressure assets in the system you manage, consider UPDM. You can take advantage of the intended benefits and consolidate those assets into a common platform.

3) Other programs: ILI, CIS, other survey, Cathodic Protection

If your company has a strong investment in recurring inspections, PODS again rises as the preferred model, especially considering the efforts of the PODS Next Generation initiative around how to efficiently store and process this data moving forward.

4) Interoperability

With the growing importance of importing and exporting data (due to acquisitions, divestitures, etc.), analysis, and reporting, a system that promotes standard mechanisms for exchanging data becomes increasingly important. With the work the PODS organization is putting into a data interchange standard, PODS again rises as the preferred model.

There isn’t just one approach, but there is a best approach for your organization

While moving to APR is beneficial for operators, many things need to be considered before you commit to an approach. I hope this series of posts provides some clarity. To stay up-to-date on the data model landscape and the tools surrounding it, I encourage you to follow the PODS association and Esri; the work of these two organizations in the pipeline space is a great thing for our industry.

If you’d like to discuss any of these concepts further, or would like to have a conversation about which model is best for your implementation, please get in touch with me here. The rest of the Energy team at Latitude and I are eager to offer our years of expertise to help you.