Entitled to Freshness

Imagine you’re a company that has a 2-billion-row fact table joined to a 100k-row entitlements table built for row-level security. The fact table updates monthly, but entitlements change every time someone gets promoted, quits, changes title, etc. You can’t just refresh the whole data model monthly, because you’d end up with the wrong security rules applied. You also can’t refresh the entire data model multiple times a day, because refreshing that fact table would take a huge toll on your back-end systems as well as your Tableau Server. Within the Tableau UI, however, there’s no option to independently refresh the different parts of a datasource – if you schedule a refresh for your entitlements table, it’ll refresh your entire datasource.

Sometimes when you open Tableau, you’ll find that out-of-the-box functionality takes care of…well, pretty much all of your needs. There are a handful of places, however, where you’ll really need to tap into the APIs. One such use case is for row-level security scenarios where you a) can’t use Virtual Connections and b) need to refresh your entitlements table more frequently than your fact table.

Quick vocab break: an Entitlements Table is a table which tells us who can see which data. This table should be updated whenever people gain or lose access to data.

So how do we solve this? Well option 1, as mentioned above, is Virtual Connections. This allows independent scheduling for each table in the VConn, giving you the mixed freshness you desire. But some folks can’t use VConns – maybe the fact table refresh would time out, maybe you’re using an unsupported connector, maybe you have weird auth issues. How can we approach the problem in this case?

Luckily, there’s a combination of the Hyper API and the REST API in Tableau that can address this very efficiently.

  1. Create an extract of your fact and entitlements tables using the Hyper API.
  2. Open these in Desktop, create a data model, and publish.
  3. Schedule no refreshes in Tableau at all!
  4. When entitlements change, re-create your Entitlements table using the Hyper API.
  5. Use Tableau’s new(ish) APIs to update just the entitlements portion of your datasource.

Now depending on the type of person you are, that either sounds really easy or really hard, so I’m going to break it down bullet by bullet. There’s a github repository with some shoddy Python code available here.

1. Create Extracts Using Hyper API

This is the heaviest lift of the whole process. The first question you’re asking is “why don’t I just use Desktop to create these?”, and it’s a valid question. For the initial publish, it’s certainly possible, but to enable a live-to-Hyper connection (which we’ll need later), we need to make sure we’re not setting this up through the native UI. The Hyper API is designed to build extract files outside of Tableau, which is exactly what we need. I’ve built a sample script here which shows how you could do it from a SQL Server deployment.

The file this creates is a single Hyper file which contains two tables: “Fact” and “Entitlement”. Because a Hyper file is data (as opposed to metadata), it doesn’t yet enforce any data security or define any relationship between the two tables.
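
A rough sketch of what such a script might look like is below, using the tableauhyperapi and pyodbc packages. The connection string, source queries, and column names are placeholders rather than the actual schema from the repository, so adapt them to your own fact and entitlements sources.

import pyodbc
from tableauhyperapi import (HyperProcess, Telemetry, Connection, CreateMode,
                             TableDefinition, TableName, SqlType, Inserter)

CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=yes"

def load_rows(query):
    # Pull rows from the back-end database (SQL Server in this example)
    with pyodbc.connect(CONN_STR) as db:
        return db.cursor().execute(query).fetchall()

# Placeholder definitions for the two tables in the extract
fact_def = TableDefinition(TableName("Extract", "Fact"), [
    TableDefinition.Column("Region", SqlType.text()),
    TableDefinition.Column("Sales", SqlType.double()),
])
entitlement_def = TableDefinition(TableName("Extract", "Entitlement"), [
    TableDefinition.Column("Person", SqlType.text()),
    TableDefinition.Column("Region", SqlType.text()),
])

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "extract.hyper", CreateMode.CREATE_AND_REPLACE) as conn:
        conn.catalog.create_schema("Extract")
        for table_def, query in [(fact_def, "SELECT region, sales FROM dbo.fact"),
                                 (entitlement_def, "SELECT person, region FROM dbo.entitlements")]:
            # Create each table and stream its source rows into the .hyper file
            conn.catalog.create_table(table_def)
            with Inserter(conn, table_def) as inserter:
                inserter.add_rows(load_rows(query))
                inserter.execute()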

2. Create a Data Model

Double-click the Hyper file you created in Step 1 to open it in Desktop. From here, you can build your data model (join the tables, write whatever calculations you need, etc.). You’ll also want to take this opportunity to build in your row-level security. Now that you’ve got data which is joined, enhanced, and secured, you can publish it to your Tableau deployment. This is no different than a normal publish process, except that the datasource (having been built/published differently) is eligible for live-to-Hyper updates.

3. Schedule No Refreshes!

You’ve got it. Just resist the temptation to schedule anything in Server or Cloud. Because of the way this was built, you shouldn’t be able to anyway.

4. Re-build Entitlements as Needed

So far, we’ve just found a difficult path to creating a normal datasource, so there better be a payoff. This is where it happens. Now imagine someone leaves your company or changes titles – they need data access revoked immediately. Historically, we’d force you to refresh your entire data model. This could take hours, be quite costly, and may not even finish before you want to update it again.

With these APIs, however, we can just re-build the Entitlements table. We can re-use the script from step 1 here, but cut out all of the lines about the Fact table. We’re rebuilding just the smaller table, which will likely take mere seconds.
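
As a sketch, reusing the entitlement_def definition and load_rows() helper from the step 1 example above (placeholder names, not the repository’s actual code), the cut-down rebuild might look like this:

from tableauhyperapi import HyperProcess, Telemetry, Connection, CreateMode, Inserter

# Rebuild only the small Entitlement table, into its own .hyper file
with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "entitlements_only.hyper", CreateMode.CREATE_AND_REPLACE) as conn:
        conn.catalog.create_schema("Extract")
        conn.catalog.create_table(entitlement_def)
        with Inserter(conn, entitlement_def) as inserter:
            inserter.add_rows(load_rows("SELECT person, region FROM dbo.entitlements"))
            inserter.execute()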

5. Replace the Server-Side Entitlements

So far, all we’ve done is create a smaller local Hyper file. It contains none of our actual data, just our entitlements. If we were to publish this from Desktop, it would create its own datasource. Instead, we can push this table to overwrite just a portion of our existing datasource. The code provided here shows you how to:

  1. Execute step 4 (above)
  2. Identify your target datasource
  3. Initiate a file upload (large files must be chunked, small files can be uploaded all as one)
  4. Commit the file upload to the appropriate data source and table name.

This is the final portion of the code. As long as your datasource ID, schema name, and table name line up, you can easily replace the entitlements table without touching the fact table!
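
For illustration, below is a rough sketch of that final call using the tableauserverclient (TSC) library, which wraps the Update Data in Hyper Data Source endpoint. The datasource name, the “Extract” schema, and the table names are assumptions – match them to whatever you published in step 2 – and it assumes a recent TSC version that exposes update_hyper_data.

import uuid
import tableauserverclient as TSC

server = TSC.Server("https://your-tableau-server", use_server_version=True)
auth = TSC.PersonalAccessTokenAuth("token-name", "token-secret", site_id="your-site")

with server.auth.sign_in(auth):
    # Find the target datasource by name (assumes the name is unique on the site)
    all_datasources, _ = server.datasources.get()
    target = next(ds for ds in all_datasources if ds.name == "My RLS Datasource")

    # Replace only the Entitlement table; the Fact table is left untouched
    actions = [{
        "action": "replace",
        "source-schema": "Extract", "source-table": "Entitlement",
        "target-schema": "Extract", "target-table": "Entitlement",
    }]

    # TSC uploads the .hyper file (chunking it if necessary) and commits the action batch
    job = server.datasources.update_hyper_data(
        target, request_id=str(uuid.uuid4()), actions=actions,
        payload="entitlements_only.hyper",
    )
    print(f"Update job queued: {job.id}")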

6. Confirm It Worked!

Now go re-load any dashboard downstream of that data. The data won’t have changed, but the entitlements will have. Anyone who was promoted will have their enhanced access; anyone who quit will have lost all access. This means you can easily kick off this flow many times a day, as people gain and lose access, without any worry about bogging down your system!

What Else?

This example is intentionally narrow, and built to solve a specific problem. There’s tons more you can do with these APIs, though! If you have a massive fact table and want to add just a small number of rows to it, this API allows you to append. If you want to maintain a file which has a rolling 12-month snapshot, you can write a simple script to delete-and-add rows. All told, this allows you far more flexibility than Tableau’s traditional refresh. Choose what data you care about, down to the row level, and manipulate it as you see fit!
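
As a sketch of what those variations might look like, here are two hypothetical action batches you could hand to the same update call shown earlier; the exact condition syntax is worth double-checking against the REST API reference for Update Data in Hyper Data Source before relying on it.

# Append a small batch of new rows to the big Fact table
append_rows = [{
    "action": "insert",
    "source-schema": "Extract", "source-table": "Fact",
    "target-schema": "Extract", "target-table": "Fact",
}]

# Rolling 12-month window: delete rows older than the cutoff, then insert the new month
rolling_window = [
    {"action": "delete",
     "target-schema": "Extract", "target-table": "Fact",
     "condition": {"op": "lt", "target-col": "OrderDate",
                   "const": {"type": "datetime", "v": "2021-06-01T00:00:00Z"}}},
    {"action": "insert",
     "source-schema": "Extract", "source-table": "Fact",
     "target-schema": "Extract", "target-table": "Fact"},
]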

All for Some, and Some for All

Row Level Security: Implementing “All Access” or “Deepest Granularity” methodologies

Virtual Connections, released last year, gave Tableau an easy way to deploy row-level security at scale. You can easily build governance policies in a single place, against a single table, and have them flow down to your entire organization. These policies can be easily audited and edited as your business needs change, and you can be assured that your changes will flow down to all content living downstream of the VConn. The only remaining hurdle is figuring out the appropriate policy for your data.

Tableau’s base recommendation for RLS is to create an entitlements table with one row per user per “thing they should access”, or entitlement. A sample table might look like the below.

For every Person, a Region, and for every Region, a Person.

This works perfectly well for a small group of users, and even scales well as your users and entitlements grow! Where it can begin to struggle, however, is when people have access to multiple regions. I’ve written a post for managing multi-entitlement access, but there’s another type of user it didn’t account for: the superuser. Whether it’s an exec, manager, or simply an entirely different business unit (analysts, for example), there’s often a swath of users who should be able to access everything. We could individually enumerate each user and give them access to every single entitlement, but imagine a scenario in which we have 15,000 entitlements and 15,000 users: each superuser alone would need 15,000 rows, so even a thousand superusers would add 15 million of them. Our entitlements table could balloon to tens of millions of rows rapidly!

The old approach, detailed in our RLS whitepaper, required joining 2 additional tables to your fact table. VConns, as currently built, only allow for a single join, so this requires a new approach. The good news is that it’s a relatively simple one.

  1. Create a group on your Tableau Server for all of your “superuser” folks. I simply called mine “Superusers”. Add all of your superusers to this group.
  2. Add 1 row to your entitlements table with “Superuser” in both columns.
  3. Modify your fact table. There are a couple things we’ll have to do here.
    • Duplicate the column you use for your Entitlements join (the Region column, in my example).
    • Union your table to itself.
    • In the unioned copy of the table, replace all values in the Entitlements column with “Superuser”.

I’ll show these modifications with some images. Consider the below fact table (only 3 rows).

I’ll union this table to itself, doubling the size to 6 rows, and add a new Entitlements column (a copy of the Region column). In rows 4–6, however, the Region value has been replaced by the word “Superuser” in the Entitlements column.

The green rows were added via the union and are a perfect copy of the original fact table. The orange highlights the new column we’ve added for modified entitlements.
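
If you’d rather script this than model it in your database, here’s a minimal sketch of the same union in Python with pandas; the three rows and the column names are made up to mirror the screenshots above.

import pandas as pd

# Three made-up rows standing in for the fact table pictured above
fact = pd.DataFrame({
    "Region": ["East", "West", "South"],
    "Sales": [100.0, 200.0, 300.0],
})

# Original rows keep their Region as the entitlement value...
regular = fact.assign(Entitlement=fact["Region"])
# ...while the unioned copy is entitled only to "Superuser"
superuser_copy = fact.assign(Entitlement="Superuser")

# Union (concatenate) the two copies into the 6-row modified fact table
modified_fact = pd.concat([regular, superuser_copy], ignore_index=True)
print(modified_fact)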

With this modified fact table, we’ll no longer need multiple joins. A single join in our VConn, with the appropriate policy, will now be sufficient to pass in all the info we need.

This policy first checks to see if a user is a superuser. If so, they get access to one entire copy of the dataset. If not, they’re subjected to the normal RLS rules.

So that’s how, but why?

If all you care about is getting the work done, read no further! If you’re curious about the query execution behind the scenes because you may want to further customize this solution, read on. It might seem like a bit of a convoluted approach at first glance. The simplest approach wouldn’t seem to require any data modification at all. Why not just write a policy which checks ISMEMBEROF(‘Superusers’) and, if true, returns the whole dataset?

The answer lies in join cardinality and join culling.

First, we’ll address join culling. There’s a tendency to assume that we could write a policy like the below, and use our base entitlements table.

We assume that if a user passes the ISMEMBEROF() check in our policy, the entitlements join will no longer happen. We’re not using the entitlements table for anything, so why bother joining it in? The way Tableau operates, however, means that once you’ve added the entitlement table to your policy, it will always be a part of your query, even if no columns are directly referenced in the policy. No matter what happens, the tables will join and the query will execute.

But why is that a problem? The answer comes from cardinality. If each row in your dataset can only be viewed by one person, and each person can only view one row, then you’ll actually be ok with this. Unfortunately, not many businesses are that simple. Most of the time, each user can view multiple rows, and each row can be viewed by multiple people. Take the simple example below, a 5-row entitlements table. It’s the same example from the beginning, but we’ve added one more user who can see the West region.

We now have 2 copies of “West” in the Entitlement column of the Entitlements table. If we were to join this table to our fact table and query it, we’d end up doubling all the sales from the West. In a non-Superuser experience, however, this doesn’t matter. Tableau would first filter the entitlements table to the appropriate user (let’s say Kelly, in this case) and then query the joined tables.

SELECT SUM(Sales) FROM sales JOIN entitlements ON sales.entitlement = entitlements.entitlement WHERE Person = 'Kelly'

The entitlements table would be filtered, the join would execute, and because there are now no duplicate values in the [entitlements.entitlement] column, no duplication occurs. Kelly sees the appropriate sales data. If, on the other hand, a Superuser logs in and queries, they’d receive the entire resulting table.

SELECT SUM(Sales) FROM sales JOIN entitlements ON sales.entitlement = entitlements.entitlement

In this case, there’s no WHERE clause, so they receive the unfiltered data. Because “West” appears twice in the [entitlements.entitlement] column, our sales in the West region get doubled. Of course, in practice, the impact will probably be much larger. There may be 5000 employees who can access the West region, and 3000 who can access the East. We’d have to do some silly math to try to reduce these numbers back to their de-duplicated state, and it would result in a lot of query overhead. Instead, we want a way to query the raw, unduplicated fact table.

…and how does it work?

Really, a union is an odd tool to use here, because all we want to do is cull out the join. Because the join is unavoidable, however, we need to instead find a way to remove all duplication from the join. To do this, we unioned the fact table to itself. The duplication only happens when entitlements are joined together, so we need to make sure we don’t perform a many-to-many join. By materializing a single “Superuser” row in our entitlements table and creating a separate copy of the fact table that joins directly to it, we have effectively made a separate copy of the table for superusers to query. The query we execute will be the same as above, but we’ve added a WHERE clause back on.

SELECT SUM(Sales) FROM sales JOIN entitlements ON sales.entitlement = entitlements.entitlement WHERE entitlements.entitlement = 'Superuser'

We know that ‘Superuser’ appears only once in our entitlements table (unlike the Region values, which may be repeated). As a result, we know that the fact table does not get duplicated. Our superusers see all of the data, in all its unduplicated glory!

Publishing TDS Datasources Using Tableau Prep

NOTE: Tableau went and fixed this in 19.3.1, so if you’re using that or a newer version, stop reading here and just accept that it all works!

Though Tableau originated as a visualization tool, it has added significant ETL capabilities over the last couple of versions. With version 18.1 it added Tableau Prep and the ability to build ETL flows, and 19.1 added Prep Conductor, which comes with the ability to automate workflows to run on a schedule. One current limitation, however, is that Tableau Prep outputs a .hyper file, not a .tdsx file. What’s the difference here?

In Tableau, a .hyper file is a raw data file. It contains the data pulled from your sources as well as any calculations which can be materialized at the individual row level (calculations like string manipulations, differences between two columns, etc.). Calculations which can’t be materialized on individual rows, however, aren’t stored in a .hyper file, but instead are saved in a .tds file (Tableau Datasource). This file contains the logic for level of detail calculations, aggregate calculations (such as ratios), and the username-based calculations often used for row level security. A .tdsx file is the combination of the raw data (.hyper file) and the associated logic (.tds file). Tableau Prep, however, doesn’t allow for the customization of .tds files. If you want to add aggregate calculations, you can do so in Desktop, but when Conductor runs your flow, it will overwrite your entire Datasource, replacing your .tds file with a generic one and losing all of your calculations in the process. Below is a walk-through of how to avoid that behavior.

Before we go any further, it’s worth noting that this workflow will probably be streamlined at some point, but for now, this is the easiest way to create a Datasource that combines data from Prep with .tds-based logic.

  1. Create a Prep flow which outputs a .hyper file to a network-mapped location.
    1. In the Output step of your Prep flow, do not select “Publish as a data source”, but instead choose “Save to File”. You need to ensure that your Prep inputs and outputs use UNC file paths, so the flow will continue to work when published to Server.
  2. Publish and schedule the flow.
    1. Simply publish your flow to Tableau Server. You’ll need to ensure that your Run As User has access to the file input/output locations and that those locations are safelisted for Prep Conductor.
    2. Though we’ll tie this flow to a schedule, we won’t actually be relying on the schedule’s timing to run the flow. Therefore, you’ll want to use a schedule that isn’t used for anything else and runs only very infrequently. I set mine to run monthly on a schedule named “PrepScriptSchedule”. The reason we need to tie it to a schedule (even though we aren’t relying on timing) is that tabcmd allows us to run a scheduled task.
  3. Open the output of the flow in Tableau Desktop.
  4. Create your Datasource modifications in Desktop (create calculations, Datasource filters, hierarchies)
  5. Publish the Datasource.
  6. Using tabcmd, refresh the .hyper file and publish it without overwriting the Datasource.
    1. If you’re not already using tabcmd, you’ll need to install it.
    2. Log in to the Server using tabcmd login.
    3. Run the Prep flow using tabcmd runschedule.
      1. Because we’re running a schedule (not executing a task on Tableau Server), we’ll need to build in a wait time for our script. This step has started the Prep Flow, but we’ll need to pause until it finishes creating the file.
    4. Pause the script until the flow is complete using SLEEP. This command takes an argument which is the number of seconds to pause your script. You should make sure that the number you input here is higher than the time your Prep Flow takes to run.
    5. Using the tabcmd publish command, point to the .hyper file output from the Prep flow and overwrite the Datasource in question. Use the --replace option to avoid overwriting the .tds, instead just overwriting the source data contained in the .hyper file.
tabcmd login -s https://<server-name> -u <username> -p <password> -t <siteName>
tabcmd runschedule "PrepScriptSchedule"
sleep 1000
tabcmd publish "\\network\filepath\prepoutput.hyper" -n <targetDatasource> --replace

It’s an easy script, and it can run on the schedule of your choice using any task scheduler (most likely Windows Task Scheduler or a cron job). Using the above script we can create Tableau Datasources with Prep ETL, Desktop metadata, and Server security, and refresh it all on a schedule. Go forth and enjoy your complex data structures with complex governance tied in!