Table

analyze.table.add_label(project_id, table_id, label, branch='master')

Sets a label on a table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • label (str) – The label unique name

  • branch (str) – Project branch

Returns

None

analyze.table.add_path(project_id, table_id, path, branch='master')

Adds a table reference path in the hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • path (str) – Object rollup path using unix style forward slashes

  • branch (str) – Project branch

Returns

None

analyze.table.add_paths(project_id, additions, branch='master')

Adds multiple table reference paths in the hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • additions (list) – List of additions, each containing:
      - table_id (str): Unique table identifier
      - to_path (str): New path

  • branch (str) – Project branch

Returns

None
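
The `additions` payload follows the sub-fields documented above. A minimal sketch of its shape (the project and table IDs below are hypothetical):

```python
# Illustrative shape of the `additions` argument for analyze.table.add_paths,
# per the documented sub-fields (table_id, to_path). IDs are hypothetical.
additions = [
    {"table_id": "0f2e6c1a", "to_path": "/Staging/Imports"},
    {"table_id": "7b9d4e2f", "to_path": "/Reporting/Monthly"},
]

# A call would then look like (hypothetical project ID):
# analyze.table.add_paths("my_project", additions, branch="master")
```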

analyze.table.clear_data(project_id, table_id, branch='master')

Clears the table of all data

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

Returns

None

analyze.table.copy(project_id, table_id, version_id=None, rollup_id=None, after_position=None, branch='master')

Copies a table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • version_id (str) – Version name

  • rollup_id (str) – Unique rollup identifier

  • after_position (int) – Position within the object rollup the object should be placed after

  • branch (str) – Project branch

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict
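
The returned dict reports both size (bytes) and size_string (friendly units). As a rough sketch of how those two properties relate (the service's exact formatting may differ):

```python
def friendly_size(size_bytes: int) -> str:
    """Approximate the documented size -> size_string relationship (KB, MB, GB, TB).
    Illustrative only; the server formats its own size_string."""
    units = ["B", "KB", "MB", "GB", "TB"]
    value = float(size_bytes)
    for unit in units:
        # Stop dividing once the value fits the unit, or we run out of units.
        if value < 1024 or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= 1024
```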

analyze.table.copy_directory(project_id, from_path, to_path, branch='master')

Copies a folder and all paths in the hierarchy but does not duplicate referenced objects

Parameters
  • project_id (str) – Unique project identifier

  • from_path (str) – Original directory path

  • to_path (str) – Destination directory path

  • branch (str) – Project branch

Returns

None

analyze.table.create(project_id, path, name, memo, branch='master', columns=None, versioned=False, read_only=False, distribution_keys=None, update_keys=None, partition_keys=None, orientation='column', sort=None)

Creates a table

Parameters
  • project_id (str) – Unique project identifier

  • path (str) – Path to place the table in the hierarchy

  • name (str) – The object name

  • memo (str) – The object description

  • branch (str) – Project branch

  • columns (list) – Columns in the table

  • versioned (bool) – Is versioning enabled for the table

  • read_only (bool) – Is table set to read-only

  • distribution_keys (list) – Columns by which to distribute data throughout the cluster evenly

  • update_keys (list) – Keys used for determining uniqueness of row for entry and update

  • partition_keys (list) – Columns by which the data can be logically compartmentalized

  • orientation (str) – Table orientation, either column or row

  • sort (list) – List of (column, ascending/descending) tuples to determine a default sort.

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

  • sort (list): List of tuples that define a default sort

Return type

dict
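
A hypothetical call to create might assemble its arguments like this. The element shape of `columns` is not specified in this reference, so the id/dtype mapping below is an assumption (borrowed from the `meta` keys documented for load_csv); the `sort` tuples follow the documented (column, ascending/descending) form:

```python
# Hypothetical arguments for analyze.table.create. Column shape is assumed.
columns = [
    {"id": "invoice_id", "dtype": "text"},
    {"id": "amount", "dtype": "numeric"},
    {"id": "posted_on", "dtype": "date"},
]

# Documented form: (column, ascending/descending) tuples for the default sort.
sort = [("posted_on", "descending"), ("invoice_id", "ascending")]

# analyze.table.create("my_project", "/Staging", "Invoices", "Invoice data",
#                      columns=columns, sort=sort)
```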

analyze.table.create_column(project_id, table_id, name, data_type, branch='master')

Adds a table column to an existing table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • name (str) – The object name

  • data_type (str) – Column data type

  • branch (str) – Project branch

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict

analyze.table.create_directory(project_id, path, branch='master')

Creates a folder in the table hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • path (str) – Path to directory

  • branch (str) – Project branch

Returns

None

analyze.table.create_directory_paths(project_id, path, branch='master')

Creates all necessary folders in the path in the table hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • path (str) – Path to directory

  • branch (str) – Project branch

Returns

None
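
create_directory_paths ensures every folder along the given path exists. The set of directories it implies can be sketched as follows (illustrative, not the service's implementation):

```python
def implied_directories(path: str) -> list:
    """List every directory create_directory_paths would need to ensure
    for a path like '/a/b/c'. Illustrative sketch only."""
    parts = [p for p in path.strip("/").split("/") if p]
    # Build each cumulative prefix: /a, /a/b, /a/b/c
    return ["/" + "/".join(parts[: i + 1]) for i in range(len(parts))]
```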

analyze.table.delete(project_id, table_id, branch='master', include_versions=True, remove_steps=False)

Deletes a table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

  • include_versions (bool) – If True, delete all version history

  • remove_steps (bool) – If True, delete all related steps not in workflows

Returns

List of Step objects deleted if remove_steps is True, else an empty list

Return type

list

analyze.table.delete_column(project_id, table_id, column, branch='master')

Deletes a column in the specified table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • column (str) – Column name

  • branch (str) – Project branch

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict

analyze.table.delete_directory(project_id, path, branch='master')

Deletes a folder and removes path from all objects in that directory

Parameters
  • project_id (str) – Unique project identifier

  • path (str) – Path to directory

  • branch (str) – Project branch

Returns

None

analyze.table.delete_unreferenced(project_id, branch='master')

Deletes tables no longer referenced in any step/workflow

Parameters
  • project_id (str) – Unique project identifier

  • branch (str) – Project branch

Returns

The number of tables deleted

Return type

int

analyze.table.exists(project_id, table_id)

Returns True if the table exists in the database, False if it does not.

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

Returns

True if the table exists in the database, False if it does not.

analyze.table.export(project_id, query, config, document_account, path, file_name, params=None, export_type='csv')

Exports a table to a document account at the provided path

Parameters
  • project_id (str) – The ID of the project containing the table

  • query (str) – The table data query to export

  • config (dict) – A dict containing settings for the export

  • document_account (str) – The ID of the document account to upload to

  • path (str) – The path in Document to upload to

  • file_name (str) – The file to which the export will be saved

  • params (dict, optional) – params dict for binding to query

  • export_type (str, optional) – What to export the table as (csv, excel, json). Defaults to csv

Returns

True if the export was successful, otherwise raises an exception.

Return type

bool

analyze.table.export_archive(project_id, table_id, document_account, remote_path, file_name, branch='master', run_id=None)

Exports a whole table and uploads to Document

Parameters
  • project_id (str) – The ID of the project containing the table

  • table_id (str) – The ID of the table to export

  • document_account (str) – The account ID of the document account to use

  • remote_path (str) – The directory to upload the file to within document

  • file_name (str) – What to name the export file

  • branch (str, optional) – Which branch to operate in, defaults to master

  • run_id (str, optional) – Unique export identifier so the temp files can be cleaned up

Returns

None

analyze.table.export_pcm_loader_package(project_id, run_id, jobs, document_account, path)

Exports a PCM loader package to a document account at the provided path

Parameters
  • project_id (str) – The ID of the project containing the table

  • run_id (str) – The unique ID for the workflow run requesting the export

  • jobs (list) – A list of dicts containing export configurations for the loader package

  • document_account (str) – The ID of the document account to upload to

  • path (str) – The path in Document to upload to

Returns

The remote path to which the package was placed

Return type

str

analyze.table.export_remote_table_import_package(project_id, run_id, jobs, document_account, path)

Exports a remote table import package at the provided path

Parameters
  • project_id (str) – The ID of the project containing the table

  • run_id (str) – The unique ID for the workflow run requesting the export

  • jobs (list) – A list of dicts containing export configurations for the import package

  • document_account (str) – The ID of the document account to upload to

  • path (str) – The path in Document to upload to

Returns

The remote path to which the package was placed

Return type

str

analyze.table.export_to_project(from_project_id, from_table_id, to_project_id, to_table_id, append=False)

Moves data across project boundaries by copying data from one table to another

Data transfer must be authorized between projects before this can occur. This is controlled in the Analyze area under Tools > Manage Data Sharing Between Projects

Parameters
  • from_project_id (str) – Unique source project identifier

  • from_table_id (str) – Unique source table identifier

  • to_project_id (str) – Unique target project identifier

  • to_table_id (str) – Unique target table identifier

  • append (bool) – Indicates whether the data should be appended or replaced

Returns

None

analyze.table.export_xml(project_id, tables, document_account, path, file_name, header, footer, windows_line_endings)

Exports the provided tables to an XML file in a document account at the provided path

Parameters
  • project_id (str) – The ID of the project containing the table

  • tables (list of dict) – The table data queries to export

  • document_account (str) – The ID of the document account to upload to

  • path (str) – The path in Document to upload to

  • file_name (str) – The file to which the export will be saved

  • header (str, optional) – A header for the xml document

  • footer (str, optional) – A footer for the xml document

  • windows_line_endings (bool, optional) – If True uses Windows style line endings else Unix style

Returns

True if the export was successful, otherwise raises an exception.

Return type

bool

analyze.table.flashback_backup(project_id, table_id, branch='master')

Schedules a backup of the table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

Returns

None

analyze.table.flashback_hold(auth_id, project_id, table_id, version_id, branch='master')

Puts a hold on a version to prevent lifecycle cleanup after the retention period

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • version_id (str) – Version of the table to hold

  • branch (str) – Project branch

Returns

None

analyze.table.flashback_release(project_id, table_id, version_id, branch='master')

Releases a hold on a version to allow lifecycle cleanup after retention period

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • version_id (str) – Version of the table to release

  • branch (str) – Project branch

Returns

None

analyze.table.flashback_restore(project_id, table_id, version_id, target_table_id, branch='master')

Restores table data to the target table specified

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • version_id (str) – Version of the table to restore

  • target_table_id (str) – Target table UUID

  • branch (str) – Project branch

Returns

None

analyze.table.flashback_restore_project(project_id)

Schedules a restore of all tables in the project to their latest version stored

Parameters

project_id (str) – Unique project identifier

Returns

None

analyze.table.flashback_tables(project_id)

Provides list of tables stored in flashback

Parameters

project_id (str) – Unique project identifier

Returns

List of tables

Return type

list

analyze.table.flashback_versions(project_id, table_id, branch='master')

Provides list of versions of the table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

Returns

List of table versions

Return type

list

analyze.table.ids_from_path(project_id, path, branch='master')

Provides the canonical id for a provided path

Parameters
  • project_id (str) – Unique project identifier

  • path (str) – Object rollup path using unix style forward slashes

  • branch (str) – Project branch

Returns

List of table unique identifiers

Return type

list

analyze.table.import_archive(project_id, table_id, document_account, document_path, branch='master', delete_file=False)

Imports an analyze table archive.

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier for the target table

  • document_account (str) – Document account identifier

  • document_path (str) – Document path for import

  • branch (str) – Project branch

  • search (bool) – Whether to search for files to import

  • criteria (str) – Search type, if search is True: 'contains', 'exact', 'startswith', 'endswith'

  • text (str) – String to search for

  • delete_file (bool, optional) – If true then delete the file(s) just imported

Returns

None

analyze.table.import_csv(project_id, table_id, document_account, document_path, source_columns, target_columns, header=None, delimiter=',', null_as='None', quote='"', escape='\\', source_header_name=False, start_row=None, end_row=None, clean=False, file_name=None, ascii_only=False, date_format='YYYY-MM-DD', branch='master', search=False, criteria='contains', text='', delete_file=False, extract_query=None)

Imports the specified delimited file

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Target table identifier

  • document_account (str) – Document account identifier

  • document_path (str) – Document path

  • source_columns (list) – Source columns

  • target_columns (list) – Target columns

  • header (bool) – If True, file is expected to have a header row

  • delimiter (str) – Delimiter

  • null_as (str) – Null character

  • quote (str) – Quote character

  • escape (str) – Escape character

  • source_header_name (bool, optional) – Source header name

  • start_row (int, optional) – Row to begin import from

  • end_row (int, optional) – Row to halt import

  • clean (bool, optional) – If True, the file will be processed by an extra cleaning operation. This can slow imports.

  • file_name (str, optional) – File name to add to import

  • last_modified (date, optional) – Last modified date to add to import

  • ascii_only (bool, optional) – Force all data to ASCII only

  • date_format (str, optional) – Date format of dates in the file

  • handle_trailing_negatives (bool, optional) – Whether the import should handle trailing negatives

  • branch (str, optional) – Project branch

  • search (bool, optional) – If true then use document search

  • criteria (str, optional) – The criteria with which to match the text in the document search

  • text (str, optional) – The string to use in the document search

  • delete_file (bool, optional) – If true then delete the file(s) just imported

  • extract_query (sqlalchemy.Insert, optional) – Query to extract the csv data

Returns

None
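
The source_columns and target_columns lists pair file columns with table columns. A minimal sketch, assuming positional correspondence between the two lists (the column names below are hypothetical):

```python
# Illustrative source-to-target column mapping for analyze.table.import_csv.
# Positional correspondence between the two lists is an assumption here.
source_columns = ["Invoice No", "Amount", "Posted"]
target_columns = ["invoice_id", "amount", "posted_on"]

# Pair each file column with its destination table column.
mapping = dict(zip(source_columns, target_columns))
```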

analyze.table.import_excel(project_id, table_id, document_account, document_path, source_columns, target_columns, header=None, source_header_name=False, start_row=None, date_format='YYYY-MM-DDTHH:MI:SS', branch='master', clean_human_errors=False, sheets_config=None, search=False, criteria='contains', text='', delete_file=False, ascii_only=False, extract_query=None)

Import an Excel file

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier for the target table

  • document_account (str) – Document account identifier

  • document_path (str) – Document path for import

  • source_columns (list) – List of source columns

  • target_columns (list) – List of target columns

  • header (bool) – True if header is present

  • source_header_name (bool, optional) – Source header name

  • start_row (int, optional) – Row to start importing data from

  • date_format (str, optional) – The format for dates within the Excel file

  • branch (str) – Project branch

  • clean_human_errors (bool) – If True, a cleaning process will be applied to input data which can slow imports

  • sheets_config (dict) – Sheet configuration

  • search (bool, optional) – If true then use document search

  • criteria (str, optional) – The criteria with which to match the text in the document search

  • text (str, optional) – The string to use in the document search

  • delete_file (bool, optional) – If true then delete the file(s) just imported

  • ascii_only (bool, optional) – Force all data to ASCII only

  • extract_query (sqlalchemy.Insert, optional) – Query to extract the csv data

Returns

None

analyze.table.import_fake(project_id, table_id, rows, params, branch='master')

Imports Faker data into an analyze table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Table unique identifier

  • rows (int) – Number of rows to generate

  • params (list) – List of columns and their fake data generation parameters

  • branch (str) – Project branch

Returns

None

analyze.table.import_fixedwidth(project_id, table_id, document_account, document_path, source_columns, target_columns, header=None, source_header_name=False, start_row=None, date_format='YYYY-MM-DDTHH:MI:SS', branch='master', width_list=None, search=False, criteria='contains', text='', delete_file=False, ascii_only=False, extract_query=None)

Imports Fixed Width data into an analyze table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Target table identifier

  • document_account (str) – Document account identifier

  • document_path (str) – Document path for import

  • source_columns (list) – List of source columns

  • target_columns (list) – List of target columns

  • header (bool) – True if header is present

  • source_header_name (bool, optional) – Source header name

  • start_row (int, optional) – Row to start importing data from

  • date_format (str, optional) – The format for dates within the file

  • branch (str) – Project branch

  • width_list (list) – List of column widths

  • search (bool) – Whether to search for files to import

  • criteria (str) – Search type, if search is True: 'contains', 'exact', 'startswith', 'endswith'

  • text (str) – String to search for

  • delete_file (bool, optional) – If true then delete the file(s) just imported

  • ascii_only (bool, optional) – Force all data to ASCII only

  • extract_query (sqlalchemy.Insert, optional) – Query to extract the csv data

Returns

None

analyze.table.import_from_project(from_project_id, from_table_id, to_project_id, to_table_id, append=False)

Moves data across project boundaries by copying data from one table to another

Data transfer must be authorized between projects before this can occur. This is controlled in the Analyze area under Tools > Manage Data Sharing Between Projects

Parameters
  • from_project_id (str) – Unique source project identifier

  • from_table_id (str) – Unique source table identifier

  • to_project_id (str) – Unique target project identifier

  • to_table_id (str) – Unique target table identifier

  • append (bool) – Indicates whether the data should be appended or replaced

Returns

None

analyze.table.import_json(project_id, table_id, document_account, document_path, source_columns, target_columns, header=None, source_header_name=False, start_row=None, date_format='YYYY-MM-DDTHH:MI:SS', branch='master', search=False, criteria='contains', text='', delete_file=False, ascii_only=False, extract_query=None)

Imports JSON into an analyze table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Target table identifier

  • document_account (str) – Document account identifier

  • document_path (str) – Document path for import

  • source_columns (list) – List of source columns

  • target_columns (list) – List of target columns

  • header (bool) – True if header is present

  • source_header_name (bool, optional) – Source header name

  • date_format (str, optional) – The format for dates within the JSON file

  • start_row (int, optional) – Row to start importing data from

  • branch (str) – Project branch

  • search (bool) – Whether to search for files to import

  • criteria (str) – Search type, if search is True: 'contains', 'exact', 'startswith', 'endswith'

  • text (str) – String to search for

  • delete_file (bool, optional) – If true then delete the file(s) just imported

  • ascii_only (bool, optional) – Force all data to ASCII only

  • extract_query (sqlalchemy.Insert, optional) – Query to extract the csv data

Returns

None

analyze.table.import_remote_direct_sql(project_id, table_id, connection_id, environment_id, sql, source_columns, target_columns, date_format='YYYY-MM-DD', branch='master')

Import to a table from a sql query

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier for the target table

  • connection_id (str) – Unique id of the sql connection to use for the query

  • environment_id (str) – Unique id of the environment to use for the query

  • sql (str) – SQL query to use as a source for the import

  • source_columns (dict) – source_columns config

  • target_columns (dict) – target_columns config

  • date_format (str) – date format for date type columns

  • branch (str) – Project branch

Returns

None

analyze.table.import_xml(project_id, document_account, document_path, source_header_name=False, date_format='YYYY-MM-DDTHH:MI:SS', branch='master', tags=None, fields=None, ignores=None, constants=None, guess_format=None, targets=None, search=False, criteria='contains', text='', delete_file=False, ascii_only=False, import_attributes=False)

Imports XML into an analyze table

Parameters
  • project_id (str) – Unique project identifier

  • document_account (str) – Document account identifier

  • document_path (str) – Document path for import

  • source_header_name (bool, optional) – Source header name

  • date_format (str, optional) – The format for dates within the XML file

  • branch (str) – Project branch

  • tags (list, optional) – List of tags to include for import

  • fields (list, optional) – List of fields to include for import

  • ignores (list, optional) – List of fields to ignore

  • constants (dict, optional) – Map of constants for each record

  • guess_format (str, optional) – Guess format

  • targets (list, optional) – List of targets

  • search (bool) – Whether to search for files to import

  • criteria (str) – Search type, if search is True: 'contains', 'exact', 'startswith', 'endswith'

  • text (str) – String to search for

  • delete_file (bool, optional) – If true then delete the file(s) just imported

  • ascii_only (bool, optional) – Force all data to ASCII only

  • import_attributes (bool, optional) – Whether to import attributes as additional fields

Returns

None

analyze.table.labels(project_id, table_id, branch='master')

Provides a list of labels for the table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

Returns

list of labels associated with Table

Return type

list

analyze.table.load_csv(project_id, table_id, meta, csv_data, header, delimiter, null_as, quote, escape, date_format='YYYY-MM-DD', handle_trailing_negatives=False, branch='master', source_columns=None, append=False, update_table_shape=True)

Saves data from a csv (sent as a string) into an analyze table

Append mode allows multiple csvs to go into the same table. That in turn means that csvs could be separated into headerless chunks.

Does a bunch of magic stuff used by the faas_service import_csv transform. If you want that magic stuff, it can work, but you'll have to send exactly the right things.

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • meta (list, optional) – A list of dicts representing the table's metadata. Only required if append is False. Keys are the following:
      - id: column name
      - dtype: analyze data type
      - source (optional): source column to pull from. If not provided, will use id.

  • csv_data (str) – A string containing the csv

  • header (bool) – used to parse csv

  • delimiter (str) – used to parse csv

  • null_as (str) – used to parse csv

  • quote (str) – used to parse csv

  • escape (str) – used to parse csv

  • date_format (str, optional) – Date format of dates in the file

  • handle_trailing_negatives (bool, optional) – Whether the import should handle trailing negatives

  • branch (str, optional) – Project branch

  • source_columns (list, optional) – A list of source column names in the csv. If not provided, will use column ids or sources from meta.

  • append (bool, optional) – If False, create a new table (or empty out an existing one); if True, append to an existing one.

  • update_table_shape (bool, optional) – If true, the table shape is updated
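
A minimal sketch of the csv_data string and meta structure described above. The meta keys follow the documented id/dtype/source fields; the table contents and parse settings are hypothetical, mirroring the header/delimiter/quote parameters:

```python
import csv
import io

# meta follows the documented keys: id, dtype, and optional source.
meta = [
    {"id": "invoice_id", "dtype": "text", "source": "Invoice No"},
    {"id": "amount", "dtype": "numeric"},
]

# csv_data is sent as a plain string; first row is the header here.
csv_data = "Invoice No,amount\nA-100,12.50\nA-101,7.25\n"

# Parsing locally with the same delimiter/quote settings the call would receive.
rows = list(csv.reader(io.StringIO(csv_data), delimiter=",", quotechar='"'))
header, body = rows[0], rows[1:]

# analyze.table.load_csv("my_project", "0f2e6c1a", meta, csv_data,
#                        header=True, delimiter=",", null_as="None",
#                        quote='"', escape="\\")
```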

analyze.table.lookup_by_full_path(project_id, path, branch='master')

Returns ID for path provided

Parameters
  • project_id (str) – Unique project identifier

  • path (str) – Object rollup path using unix style forward slashes

  • branch (str) – Project branch

Returns

Table unique identifier

Return type

str

analyze.table.lookup_by_name(project_id, name, branch='master')

Returns ID for the name provided

Parameters
  • project_id (str) – Unique project identifier

  • name (str) – Unique table name

  • branch (str) – Project branch

Returns

Table unique identifier

Return type

str
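Both lookup calls return the table’s unique identifier as a string. A minimal sketch, assuming an `analyze` client object is in scope as throughout this reference (the project id, path, and name below are hypothetical):

```python
def resolve_table_id(analyze, project_id="proj-123"):
    # Resolve by full hierarchy path (unix-style forward slashes)...
    by_path = analyze.table.lookup_by_full_path(project_id, "/Imports/Sales")
    # ...or by the table's unique name; both return the id as a str.
    by_name = analyze.table.lookup_by_name(project_id, "Sales")
    return by_path, by_name
```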

analyze.table.move_directory(project_id, from_path, to_path, branch='master')

Moves a directory in the hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • from_path (str) – Original directory path

  • to_path (str) – Destination directory path

  • branch (str) – Project branch

Returns

None

analyze.table.move_path(project_id, table_id, from_path, to_path, branch='master')

Moves a table reference path in the hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • from_path (str) – Original path

  • to_path (str) – Destination path

  • branch (str) – Project branch

Returns

None

analyze.table.move_paths(project_id, moves, branch='master')

Moves multiple table reference paths in the hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • moves (list) – List of moves, each containing:

    - table_id (str): Unique table identifier

    - from_path (str): Original path

    - to_path (str): Destination path

  • branch (str) – Project branch

Returns

None
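The shape of the `moves` payload is easy to get wrong; a sketch with hypothetical ids and paths, assuming an `analyze` client object is in scope:

```python
def promote_tables(analyze, project_id="proj-123"):
    # Each move names the table and its old and new hierarchy paths.
    moves = [
        {"table_id": "tbl-a", "from_path": "/Staging/A", "to_path": "/Final/A"},
        {"table_id": "tbl-b", "from_path": "/Staging/B", "to_path": "/Final/B"},
    ]
    # One call relocates every path in the batch; returns None.
    analyze.table.move_paths(project_id, moves)
    return moves
```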

analyze.table.paths(project_id, branch='master', path=None)

Provides a list of paths for all tables and folders

Parameters
  • project_id (str) – Unique project identifier

  • branch (str) – Project branch

  • path (str) – Initial path to search

Returns

List of table and directory paths

Return type

list

analyze.table.paths_from_id(project_id, table_id, branch='master')

Provides a list of paths associated with the canonical id

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

Returns

List of paths that point to the table specified

Return type

list

analyze.table.rebuild(project_id, table_id, branch='master')

Recreates the table in the database.

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str, optional) – Project branch

Returns

None

analyze.table.references(project_id, table_id, branch='master', member_details=False)

Provides a list of references to the table for both data creation and usage

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

  • member_details (bool, optional) – If True then returns member info relating to updated_by

Returns

List of dicts with the following properties
  • project_id (str): Unique project identifier

  • id (str): Unique step identifier

  • branch (str): Project branch

  • name (str): Step name

  • memo (str): Step description

  • paths (list): List of paths assigned to step

  • labels (list): List of labels assigned to step

  • on_error (str): Possible values are stop or continue

  • locked (bool): True if the project is locked from editing

  • view_manager (bool): Step is visible to Manager roles if True

  • view_explorer (bool): Step is visible to Explorer roles if True

  • operation (str): Step operation type

  • on_error_retry_limit (int): Number of times to retry a failed step before giving up

  • on_error_retry_delay (int): Number of seconds between retry attempts

  • check_conditions (bool): True if conditions should be evaluated before running the step

  • conditions (list): List of conditions to evaluate before running the step

  • source_type (str): Type of input source

  • source_name (str): Input source name or path

  • target_type (str): Type of output

  • target_name (str): Output name or path

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

If member_details is set to True, the dict will also contain:
  • updated_by_full_name (str): The User Name of the user that last updated the project

  • updated_by_user_id (int): The id of the user that last updated the project

  • updated_by_gravatar_hash (str): Unique hash to pull gravatar for the user.

Return type

list

analyze.table.remove_label(project_id, table_id, label, branch='master')

Removes a label on a table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • label (str) – The label unique name

  • branch (str) – Project branch

Returns

None

analyze.table.remove_path(project_id, table_id, path, branch='master')

Removes a table reference path in the hierarchy

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • path (str) – Object rollup path using unix style forward slashes

  • branch (str) – Project branch

Returns

None

analyze.table.restore_archive(project_id, table_id, source_path)

Restores a table archive

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • source_path (str) – Archive source path

Returns

None

analyze.table.save(project_id, table_id, meta, data, branch='master')

Saves data into an analyze table

Saving is accomplished by loading data into a pandas dataframe, then calling save_frame on it.

Note: for unknown reasons, may have a delayed effect.

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • meta (list) – A list of dicts representing the table’s metadata, each with the following keys:

    - id: column name

    - dtype: analyze data type

  • data (list) – A list of dicts representing the table rows

  • branch (str) – Project branch

Returns

None
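A sketch of the meta/data shapes save expects, assuming an `analyze` client object is in scope; the identifiers are hypothetical and the dtype strings are placeholders for whatever analyze data types the table uses:

```python
# Column metadata: one dict per column, with id and analyze dtype.
META = [
    {"id": "region", "dtype": "text"},
    {"id": "amount", "dtype": "numeric"},
]

# Row data: one dict per row, keyed by column id.
DATA = [
    {"region": "east", "amount": 100},
    {"region": "west", "amount": 250},
]

def save_sales(analyze, project_id="proj-123", table_id="tbl-sales"):
    # Internally loads DATA into a dataframe and writes it to the table.
    analyze.table.save(project_id, table_id, META, DATA)
```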

analyze.table.save_records(project_id, table_id, meta, data, append, branch='master')

Saves data into an analyze table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • meta (list) – A list of dicts representing the table’s metadata, each with the following keys:

    - id: column name

    - dtype: analyze data type

  • data (list) – A list of tuples representing the records

  • append (bool) – If true, append the data, otherwise overwrite with this data

  • branch (str) – Project branch

Returns

None
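save_records takes the same meta but positional tuples instead of dicts; presumably each tuple’s fields follow the column order of meta. A sketch with hypothetical values, assuming an `analyze` client object is in scope:

```python
META = [
    {"id": "region", "dtype": "text"},    # placeholder analyze dtypes
    {"id": "amount", "dtype": "numeric"},
]

# One tuple per record, fields assumed to follow the order of META.
RECORDS = [("east", 100), ("west", 250)]

def append_sales(analyze, project_id="proj-123", table_id="tbl-sales"):
    # append=True adds to existing rows; append=False overwrites them.
    analyze.table.save_records(project_id, table_id, META, RECORDS, append=True)
```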

analyze.table.search_by_label(project_id, text, criteria='contains', branch='master', sort=None, keys=None)

Returns list of tables that match the search criteria

Parameters
  • project_id (str) – Unique project identifier

  • text (str) – The search text to locate

  • criteria (str) – The search criteria (contains, exact, startswith, or endswith)

  • branch (str) – Project branch

  • sort (list) – List of sort tuples using syntax (key, reverse). e.g. [(‘name’, False)]

  • keys (list) – List of keys to return. Defaults to all.

Returns

List of dicts with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

list

analyze.table.search_by_name(project_id, text, criteria='contains', path='/', branch='master', sort=None, keys=None)

Returns list of tables that match the search criteria

Parameters
  • project_id (str) – Unique project identifier

  • text (str) – The search text to locate

  • criteria (str) – The search criteria (contains, exact, startswith, or endswith)

  • path (str) – Path to search below for matches

  • branch (str) – Project branch

  • sort (list, optional) – List of sort tuples using syntax (key, reverse). e.g. [(‘name’, False)]

  • keys (list, optional) – List of keys to return. Defaults to all.

Returns

List of dicts with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

list
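A sketch of a filtered name search, assuming an `analyze` client object is in scope (the project id and path are hypothetical):

```python
def find_sales_tables(analyze, project_id="proj-123"):
    # Prefix match under /Imports, largest tables first, trimmed to three keys.
    return analyze.table.search_by_name(
        project_id,
        "sales",
        criteria="startswith",
        path="/Imports",
        sort=[("size", True)],        # (key, reverse): biggest tables first
        keys=["id", "name", "size"],  # limit the keys in each returned dict
    )
```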

analyze.table.set_distribution_keys(project_id, table_id, keys, branch='master')

Sets table distribution keys

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • keys (list) – List of columns to use as cluster distribution keys

  • branch (str) – Project branch

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict

analyze.table.set_partition_keys(project_id, table_id, keys, branch='master')

Sets table partition keys

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • keys (list) – Columns to include in the partition keys

  • branch (str) – Project branch

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict

analyze.table.set_update_keys(project_id, table_id, keys, branch='master')

Sets the table update keys that define a unique row

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • keys (list) – List of columns required to update a single row

  • branch (str) – Project branch

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict
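The three key setters above are often applied together after a table is created. A sketch with hypothetical ids and column names, assuming an `analyze` client object is in scope:

```python
def configure_keys(analyze, project_id="proj-123", table_id="tbl-sales"):
    # Uniqueness of a row for entry and update:
    analyze.table.set_update_keys(project_id, table_id, ["region", "month"])
    # Logical partitioning and even distribution across the cluster:
    analyze.table.set_partition_keys(project_id, table_id, ["month"])
    info = analyze.table.set_distribution_keys(project_id, table_id, ["region"])
    return info  # each setter returns the updated table dict
```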

analyze.table.table(project_id, table_id, branch='master', keys=None)

Provides detailed table information

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

  • keys (list) – List of keys to return. Defaults to all.

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

  • sort (list): List of tuples defining this table’s default sort.

Return type

dict

analyze.table.table_meta(project_id, table_id, branch='master')

Provides table metadata from the database

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

Returns

list of dicts with the following properties
  • id (str): Column Name

  • dtype (str): Analyze data type

  • memo (str): Not currently populated

Return type

list

analyze.table.tables(project_id, branch='master', id_filter=None, sort=None, keys=None, member_details=False)

Returns the list of tables in the specified project

Parameters
  • project_id (str) – Unique project identifier

  • branch (str, optional) – Project branch

  • id_filter (list, optional) – List of identifiers to which to limit the results. If None or empty, don’t filter

  • sort (list, optional) – List of sort tuples using syntax (key, reverse). e.g. [(‘name’, False)]

  • keys (list, optional) – List of keys to return. Defaults to all.

  • member_details (bool, optional) – If True then returns member info relating to updated_by

Returns

List of dicts with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

If member_details is set to True, the dict will also contain:
  • updated_by_full_name (str): The User Name of the user that last updated the project

  • updated_by_user_id (int): The id of the user that last updated the project

  • updated_by_gravatar_hash (str): Unique hash to pull gravatar for the user.

Return type

list
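A sketch combining the optional filters of tables, with hypothetical ids and an `analyze` client object assumed in scope:

```python
def summarize_tables(analyze, project_id="proj-123"):
    rows = analyze.table.tables(
        project_id,
        id_filter=["tbl-a", "tbl-b"],      # limit results to known tables
        sort=[("update_time", True)],      # most recently updated first
        keys=["id", "name", "row_count"],  # trim each returned dict
        member_details=True,               # include updated_by_* fields
    )
    # Map table id to row count for a quick overview.
    return {t["id"]: t.get("row_count") for t in rows}
```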

analyze.table.tables_from_path(project_id, path, branch='master', sort=None, keys=None)

Provides a list of tables based on a path fragment; the fragment may be a parent-level path.

Parameters
  • project_id (str) – Unique project identifier

  • path (str) – Object rollup path using unix style forward slashes

  • branch (str) – Project branch

  • sort (list) – List of sort tuples using syntax (key, reverse). e.g. [(‘name’, False)]

  • keys (list) – List of keys to return. Defaults to all.

Returns

List of dicts with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

list

analyze.table.touch(project_id, table_id, meta, overwrite=True, branch='master')

Creates an empty table in the database.

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • meta (list) – A list of dicts representing the table’s metadata, each with the following keys:

    - id: column name

    - dtype: analyze data type

  • overwrite (bool) – Defaults to true. If true, any existing data in the table is wiped out. If false, nothing happens if the table already exists.

  • branch (str) – Project branch

Returns

None
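touch can serve as a create-if-absent guard before loading data. A sketch with hypothetical ids, placeholder analyze dtypes, and an `analyze` client object assumed in scope:

```python
META = [
    {"id": "region", "dtype": "text"},    # placeholder analyze dtypes
    {"id": "amount", "dtype": "numeric"},
]

def ensure_sales_table(analyze, project_id="proj-123", table_id="tbl-sales"):
    # overwrite=False: no-op if the table already exists, so existing data
    # survives; overwrite=True would recreate it empty.
    analyze.table.touch(project_id, table_id, META, overwrite=False)
```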

analyze.table.update(project_id, table_id, branch='master', **kwargs)

Updates the settable properties on a table

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • branch (str) – Project branch

Kwargs:
  • name (str): The table name

  • memo (str): The table description

  • versioned (bool): True if versioning is enabled for the table

  • read_only (bool): True if the table is set to read-only

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

  • sort (list): List of tuples defining a default sort on this table

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict
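Only the properties listed under Kwargs may be passed to update. A sketch with hypothetical values, assuming an `analyze` client object is in scope:

```python
def publish_sales_table(analyze, project_id="proj-123", table_id="tbl-sales"):
    # Returns the full updated table dict.
    return analyze.table.update(
        project_id,
        table_id,
        memo="Monthly sales rollup",
        read_only=True,
        view_explorer=True,
        sort=[("region", False)],  # (key, reverse) default sort
    )
```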

analyze.table.update_column(project_id, table_id, column, name, branch='master')

Updates the name of a table column

Parameters
  • project_id (str) – Unique project identifier

  • table_id (str) – Unique table identifier

  • column (str) – Current column name

  • name (str) – New column name

  • branch (str) – Project branch

Returns

dict with the following properties
  • project_id (str): Unique project identifier

  • branch (str): Project branch

  • id (str): Unique table identifier

  • name (str): Table name

  • memo (str): Description of the table

  • size (int): Size in bytes

  • size_string (str): Size in friendly sizing (KB, MB, GB, TB)

  • row_count (int): Row count

  • rows_approximated (bool): Indicates the row count is only approximate if True

  • column_count (int): Column count

  • versioned (bool): Is versioning enabled for the table

  • version (int): Current version of the table

  • read_only (bool): Is table set to read-only

  • cluster_id (str): Internal identifier for data cluster assignment

  • update_keys (list): Keys used for determining uniqueness of row for entry and update

  • partitioning_keys (list): Columns by which the data can be logically compartmentalized

  • distribution_keys (list): Columns by which to distribute data throughout the cluster evenly

  • orientation (str): Table orientation, either column or row

  • paths (list): List of hierarchy paths that reference the table

  • labels (list): List of labels assigned to the table

  • update_time (str): Date and time data was updated as ISO 8601 compliant time

  • updated_by (int): User ID of last updater

  • published_name (str): Name of table published for reporting access

  • view_manager (bool): Table is visible to Manager roles if True

  • view_explorer (bool): Table is visible to Explorer roles if True

Return type

dict

analyze.table.update_shape(project_id, table_ids, branch='master', update_stats=False)

Updates the table shape information for the table

Parameters
  • project_id (str) – Unique project identifier

  • table_ids (str or list) – Unique table identifier or list of unique table identifiers

  • branch (str) – Project branch

  • update_stats (bool) – If true, VACUUM ANALYZE the table first

Returns

None

analyze.table.view_explorer(project_id, table_ids, allowed=True, branch='master')

Sets View Explorer flag

Parameters
  • project_id (str) – Unique project identifier

  • table_ids (str or list) – Unique table identifier or list of unique table identifiers

  • allowed (bool) – Indicate whether Explorer role can view item

  • branch (str) – Project branch

Returns

None

analyze.table.view_manager(project_id, table_ids, allowed=True, branch='master')

Sets View Manager flag

Parameters
  • project_id (str) – Unique project identifier

  • table_ids (str or list) – Unique table identifier or list of unique table identifiers

  • allowed (bool) – Indicate whether Manager role can view item

  • branch (str) – Project branch

Returns

None