[Multi-Value, Multi-Column] Row Level Security: Solving an Array of Use Cases

Virtual Connections have completely changed how Tableau connects to data, streamlining credential management and extract refreshes while reducing duplicate queries against your expensive databases. They also introduce the concept of centralized row-level security, or CRLS. Having a single point to create, audit, and edit your security policies lets your data governance scale instantly, and it allows for a better distribution of responsibilities.

In pre-VConn Tableau, the people building your datasources and workbooks were also the people responsible for building governance. This isn’t how most organizations work, however: data governance and dashboard buildouts are two totally separate functions. By separating VConn creation out from the analytic tasks, we let governance owners do what they do best, and let analysts confidently build, knowing that they don’t have to worry about security.

 Before VConns, governance was applied at every datasource or workbook. Even similar datasources required duplicate credentials, duplicate refreshes, and duplicate RLS policies.
With VConns, there’s a single point of governance application. This frees up backgrounders from duplicate refreshes, allows data stewards to set governance in a single place, and can save on costly duplicate queries against your data stores. 

All that said, this has brought increased focus on Tableau’s existing row-level security approaches. Many of the approaches detailed in the whitepaper rely on multiple joins, and VConns currently support only a single join. On top of that, people often have complex data models and rules: it’s not uncommon for data entitlements to be based on multiple columns, each of which has multiple values.

Consider a Regional Sales VP who owns certain categories. They may have access to their product categories across the whole country, but they also have access to ALL sales within their regions. We need to evaluate two separate entitlements and check whether either of them is fulfilled. Traditionally, these would be kept in two separate entitlements tables. The whitepaper has techniques for handling this, but they require multiple joins, and VConns support only a single entitlements table.
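For a hypothetical VP, mary.wong (the names and values here are made up purely for illustration), those two tables might look like this:

Region entitlements table:
User         Region
mary.wong    East
mary.wong    South

Category entitlements table:
User         Category
mary.wong    Furniture
mary.wong    Technology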

 Traditional RLS approaches would have you join both of these tables and do some calculation magic.

With VConns, however, we can combine these into a single entitlements table that represents all our entitlements, gives us an efficient query path for searching them, and provides a simple audit interface for our admins. Simply create one column for each entitlement (Category and Region, here) and populate it with a pipe-separated array of values. You’ll end up with one row per user that lists all of their entitlements.
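Sticking with the made-up example (and adding a couple of other hypothetical users), the combined table might look like this:

User         Region Entitlement     Category Entitlement
mary.wong    East|South             Furniture|Technology
raj.patel    ALL                    Office Supplies
li.chen      West|Central           ALL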

 It’s easy to look at this table and find who can see what, and it also allows easy insertion of an ALL value instead of a complete list.

All that’s left is stitching this together with our fact table.

  1. Create a cross-join between your fact table and your entitlements table.
    • This is a scary first step, but trust me for now. If we evaluated this join on its own, we’d end up with a massive dataset. The good news is that we’ll never actually execute the cross-join against the full tables, so for now, simply join on 1=1.
  2. Create a policy that checks for a USERNAME() = [User] match.
    • This is the key to making the cross-join performant. This policy will act as a WHERE clause in our query, and (because we only have 1 row per user) it will filter our entitlements table down to a single row. Because of what’s referred to as “Filter Pushdown”, any decent database will perform this filter before joining the tables, so our cross-join will join our fact table to a 1-row entitlement table, creating no duplication at all.
    • Our data structure is now a non-duplicated copy of the fact table, but with the current user’s entitlements added onto every row as new columns; a small sample is sketched after this list.
  3. Create a policy that checks entitlements.
    • First, we’ll check the Region entitlement. We need to check two things: is the entitlement ALL and, if not, does the Region on each row match the entitlement.
    • [Region Entitlement] = "ALL" OR CONTAINS([Region Entitlement], [Region])
    • This calculation returns TRUE if the user is entitled to ALL or if the region in question is contained in their [Region Entitlement] array.
    • Repeat for each additional entitlement you need! In this case, it’s just one more. Because we want to check either of these entitlements, we’ll combine them with OR, but if you need both to be true you can use AND. The full condition is sketched after this list.
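Continuing the made-up example, the cross-joined-and-filtered structure for mary.wong is just the fact table with her entitlement columns repeated on every row:

Order ID    Region    Category           User         Region Entitlement    Category Entitlement
1001        East      Office Supplies    mary.wong    East|South            Furniture|Technology
1002        West      Technology         mary.wong    East|South            Furniture|Technology
1003        West      Office Supplies    mary.wong    East|South            Furniture|Technology

Row 1001 passes the Region check, row 1002 passes the Category check, and row 1003 passes neither, so it gets filtered out. Whether you express this as one policy condition or several, the overall logic amounts to something like the following (the column names come from the made-up table above):

USERNAME() = [User]
AND (
    [Region Entitlement] = "ALL" OR CONTAINS([Region Entitlement], [Region])
    OR [Category Entitlement] = "ALL" OR CONTAINS([Category Entitlement], [Category])
)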

Voila! We’ve implemented governance against our fact table using only one entitlements table. It’s a flexible policy, allowing for ANDs and ORs, multi-value lists, and multiple entitlements. It’s easy to read, and, when combined with VConns, it’s easy to implement in a single place…and have it flow downstream to your entire server. No more searching out every workbook, no more downloading 500 workbooks to edit them, and no more managing multiple entitlements tables.

Handling governance centrally lets your BI Analysts do BI, and lets your Governance Analysts handle the governance. It centralizes not just the security but also all of the connection metadata, making administration a breeze. It lets your end users trust that the data is correct and secure, and it calms the nerves of the folks in your InfoSec department. Implementing complex RLS at scale has never been so easy!

Publishing TDS Datasources Using Tableau Prep

NOTE: Tableau went and fixed this in 19.3.1, so if you’re using that or a newer version, stop reading here and just accept that it all works!

Though Tableau originated as a visualization tool, it has added significant ETL capabilities over the last couple of versions. Version 18.1 added Tableau Prep and the ability to build ETL flows, and 19.1 added Prep Conductor, which brings the ability to run those flows on a schedule. One current limitation, however, is that Tableau Prep outputs a .hyper file, not a .tdsx file. What’s the difference?

In Tableau, a .hyper file is a raw data file. It contains the results of the data from the datasources as well as any calculations that can be materialized at the individual row level (string manipulations, differences between two columns, and the like). Calculations that can’t be materialized on individual rows, however, aren’t stored in a .hyper file; instead they’re saved in a .tds file (Tableau Datasource). This file contains the logic for level-of-detail calculations, aggregate calculations (such as ratios), and the username-based calculations often used for row-level security. A .tdsx file is the combination of the raw data (the .hyper file) and the associated logic (the .tds file). Tableau Prep, however, doesn’t allow for the customization of .tds files. If you want to add aggregate calculations, you can do so in Desktop, but when Conductor runs your flow, it will overwrite your entire Datasource, replacing your .tds file with a generic one and losing all of your calculations in the process. Below is a walk-through of how to avoid that behavior.
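As a quick illustration of that split (the field names here are hypothetical, not from any particular dataset):

// Row-level calculation: can be materialized into the .hyper when the extract is built
[Sales] - [Cost]

// Aggregate calculation: can't be materialized per row, so it lives in the .tds
SUM([Profit]) / SUM([Sales])

// Username-based RLS calculation: also lives only in the .tds
USERNAME() = [Manager]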

Before we go any further, it’s worth noting that this workflow will probably be streamlined at some point, but for now, this is the easiest way to create a Datasource with data from Prep and .tds-based logic.

  1. Create a Prep flow which outputs a .hyper file to a network location.
    1. In the Output step of your Prep flow, do not select “Publish as a data source”; instead choose “Save to File”. Make sure your Prep inputs and outputs use UNC file paths (for example, \\network\filepath\prepoutput.hyper rather than a mapped drive letter) so the flow continues to work when published to Server.
  2. Publish and schedule the flow.
    1. Simply publish your flow to Tableau Server. You’ll need to ensure that your Run As User has access to the input/output file locations, and that those locations are safelisted for Prep Conductor.
    2. Though we’ll tie this flow to a schedule, we won’t actually rely on the schedule’s timing to run it. You’ll therefore want a schedule that isn’t used for anything else and that runs very infrequently; I set mine to run monthly on a schedule named “PrepScriptSchedule”. The reason we need a schedule at all (even though we aren’t relying on its timing) is that tabcmd lets us run a schedule on demand.
  3. Open the output of the flow in Tableau Desktop.
  4. Make your Datasource modifications in Desktop (calculations, Datasource filters, hierarchies, and so on).
  5. Publish the Datasource.
  6. Using tabcmd, refresh the .hyper file and publish it without overwriting the Datasource.
    1. If you’re not already using tabcmd, you’ll need to install it.
    2. Log in to the Server using tabcmd login.
    3. Run the Prep flow using tabcmd runschedule.
      1. Because we’re running a schedule (not executing a task on Tableau Server and waiting for it), this step only starts the Prep flow; we’ll need to build a wait into our script until the flow finishes creating the file.
    4. Pause the script until the flow is complete using SLEEP. This command takes a single argument: the number of seconds to pause. Make sure the number you use here is longer than your Prep flow takes to run.
    5. Using the tabcmd publish command, point to the .hyper file output from the Prep flow and overwrite the Datasource in question. Use the --replace option so that only the source data contained in the .hyper file is overwritten, leaving the .tds intact.
tabcmd login -s https://<server-name> -u <username> -p <password> -t <siteName>
tabcmd runschedule "PrepScriptSchedule"
sleep 1000
tabcmd publish "\\network\filepath\prepoutput.hyper" -n <targetDatasource> --replace
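
If you go the cron route, you might save the four commands above as a shell script (the name refresh_prep_datasource.sh below is just a placeholder) and trigger it with a crontab entry along these lines, here running every morning at 6:00:

0 6 * * * /path/to/refresh_prep_datasource.sh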

It’s an easy script to run, and it can be run on whatever cadence you choose using any task scheduler (most likely Windows Task Scheduler or a cron job, as sketched above). Using the above script we can create Tableau Datasources with Prep ETL, Desktop metadata, and Server security, and refresh it all on a schedule. Go forth and enjoy your complex data structures with complex governance tied in!