Configuring Data Warehouses
Version: 0.3.0
DecisionBox connects to your existing data warehouse in read-only mode. This guide covers setup for each supported warehouse.
Google BigQuery
Prerequisites
- A GCP project with BigQuery datasets
- One of:
- On GCP: Application Default Credentials (Workload Identity, gcloud auth)
- Outside GCP: A service account JSON key
Dashboard Setup
- Select Google BigQuery as warehouse provider
- Fill in:
  - Project ID: Your GCP project ID (e.g., my-gcp-project)
  - Location: Dataset location (e.g., US, us-central1, us-east5)
- Enter Datasets: Comma-separated (e.g., analytics, features_prod)
- Optionally set Filter: field + value for multi-tenant data
Authentication
When creating a project, select one of the available auth methods:
Application Default Credentials (ADC) — No credentials needed.
Works automatically on GKE (Workload Identity), Cloud Run, Compute Engine, or after gcloud auth application-default login.
Service Account Key — For cross-cloud, local, or federated access.
- Create a service account in GCP Console with BigQuery Data Viewer and BigQuery Job User roles
- Download the JSON key
- In the project creation form, select Service Account Key as auth method and paste the JSON key
This method also supports Workload Identity Federation (WIF) — keyless access from AWS, Azure, or any OIDC identity provider.
Instead of a service account key, paste the WIF credential config JSON generated by gcloud iam workload-identity-pools create-cred-config.
The GCP SDK auto-detects whether the JSON is a service account key or a WIF credential config.
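For reference, a minimal Go sketch of both paths using the official cloud.google.com/go/bigquery client. The project ID and key file path are placeholders; DecisionBox's internal wiring may differ:

```go
package main

import (
	"context"
	"log"
	"os"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// With ADC: no options needed. The SDK finds Workload Identity,
	// gcloud credentials, or the metadata server automatically.
	client, err := bigquery.NewClient(ctx, "my-gcp-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// With a pasted key: the same constructor accepts raw JSON. The SDK
	// inspects the JSON "type" field to tell a service account key
	// ("service_account") apart from a WIF credential config
	// ("external_account"), so both paste paths work unchanged.
	keyJSON, err := os.ReadFile("credentials.json") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	client2, err := bigquery.NewClient(ctx, "my-gcp-project",
		option.WithCredentialsJSON(keyJSON))
	if err != nil {
		log.Fatal(err)
	}
	defer client2.Close()
}
```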
Multi-Dataset Support
BigQuery projects can have multiple datasets. List all datasets you want the agent to explore:
Datasets: events_prod, features_prod, analytics
The agent discovers table schemas from all listed datasets and can query across them.
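A sketch of what that discovery amounts to with the Go BigQuery client, using the dataset names from the example above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-gcp-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Walk every configured dataset and list its tables, mirroring
	// the schema-discovery step described above.
	for _, ds := range []string{"events_prod", "features_prod", "analytics"} {
		it := client.Dataset(ds).Tables(ctx)
		for {
			t, err := it.Next()
			if err == iterator.Done {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s.%s\n", ds, t.TableID)
		}
	}
}
```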
Filtering
For shared datasets with data from multiple apps/tenants:
Filter Field: app_id
Filter Value: 68a42f378e3b227c8e41b0e5
The agent adds WHERE app_id = '68a42f378e3b227c8e41b0e5' to all queries.
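One hypothetical way to picture that rewrite (the helper below is illustrative only, not the agent's actual implementation; wrapping the statement keeps an existing WHERE or GROUP BY intact, though a production rewrite would operate on the parsed SQL):

```go
package main

import "fmt"

// addTenantFilter sketches the effect described above: every generated
// query is constrained to a single tenant. Hypothetical helper.
func addTenantFilter(sql, field, value string) string {
	return fmt.Sprintf("SELECT * FROM (%s) WHERE %s = '%s'", sql, field, value)
}

func main() {
	q := addTenantFilter(
		"SELECT event, COUNT(*) AS c FROM analytics.events GROUP BY event",
		"app_id", "68a42f378e3b227c8e41b0e5")
	fmt.Println(q)
}
```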
Cost
BigQuery charges per TB scanned (default: $7.50/TB for on-demand pricing). The cost estimation feature uses BigQuery's dry-run API to preview costs before running.
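The dry-run mechanism is plain BigQuery API surface. A minimal Go sketch with the official client (the query is a placeholder, and the rate mirrors the default quoted above):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
)

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-gcp-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	q := client.Query("SELECT event, COUNT(*) FROM analytics.events GROUP BY event")
	q.DryRun = true // validate and price the query without running it

	job, err := q.Run(ctx) // a dry-run job completes immediately
	if err != nil {
		log.Fatal(err)
	}
	bytes := job.LastStatus().Statistics.TotalBytesProcessed
	const perTB = 7.50 // default on-demand rate from above
	fmt.Printf("would scan %d bytes (~$%.4f)\n", bytes, float64(bytes)/1e12*perTB)
}
```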
Amazon Redshift
Prerequisites
- A Redshift cluster (provisioned) or Redshift Serverless workgroup
- AWS credentials with Redshift Data API access
Dashboard Setup — Serverless
- Select Amazon Redshift as warehouse provider
- Fill in:
  - Workgroup Name: Your Serverless workgroup (e.g., default-workgroup)
  - Database: Database name (e.g., dev)
  - Region: AWS region (e.g., us-east-1)
- Enter Datasets: Schema names (e.g., public)
Dashboard Setup — Provisioned
- Select Amazon Redshift as warehouse provider
- Fill in:
  - Cluster Identifier: Your cluster ID (e.g., my-redshift-cluster)
  - Database: Database name
  - Region: AWS region
- Enter Datasets: Schema names
Authentication
When creating a project, select one of the available auth methods:
IAM Role — No credentials needed.
Works automatically on EKS (pod IAM role), EC2 (instance profile), or with environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY).
Access Keys — For cross-cloud or local access.
Enter your AWS access key pair in the format ACCESS_KEY_ID:SECRET_ACCESS_KEY.
The IAM user needs these permissions:
- redshift-data:ExecuteStatement, redshift-data:DescribeStatement, redshift-data:GetStatementResult
- redshift-serverless:GetCredentials (Serverless) or redshift:GetClusterCredentials (Provisioned)
Assume Role — For cross-account access.
Enter the Role ARN of the target role (e.g., arn:aws:iam::123456789012:role/RedshiftRole).
Optionally provide an External ID if the role's trust policy requires one.
The agent assumes this role via STS using its base credentials (IAM role or environment).
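A sketch of that STS flow with aws-sdk-go-v2, using the placeholder ARN from above (the external ID is likewise a placeholder):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials/stscreds"
	"github.com/aws/aws-sdk-go-v2/service/redshiftdata"
	"github.com/aws/aws-sdk-go-v2/service/sts"
)

func main() {
	ctx := context.Background()

	// Base credentials: instance profile, pod role, or env vars.
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}

	// Swap in credentials from the assumed role for all Data API calls.
	provider := stscreds.NewAssumeRoleProvider(sts.NewFromConfig(cfg),
		"arn:aws:iam::123456789012:role/RedshiftRole",
		func(o *stscreds.AssumeRoleOptions) {
			o.ExternalID = aws.String("my-external-id") // only if the trust policy requires it
		})
	cfg.Credentials = aws.NewCredentialsCache(provider)

	client := redshiftdata.NewFromConfig(cfg)
	_ = client // use as in the Data API example below
}
```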
How Redshift Queries Work
DecisionBox uses the Redshift Data API (not JDBC), which works asynchronously:
- ExecuteStatement — Submit SQL
- DescribeStatement — Poll until complete
- GetStatementResult — Fetch results
This means no JDBC driver is needed, and it works with both Serverless and Provisioned clusters.
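A minimal Go sketch of that three-call loop with aws-sdk-go-v2. Workgroup, database, and SQL are placeholders; provisioned clusters pass ClusterIdentifier and DbUser instead of WorkgroupName:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/redshiftdata"
	"github.com/aws/aws-sdk-go-v2/service/redshiftdata/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}
	client := redshiftdata.NewFromConfig(cfg)

	// 1. ExecuteStatement — submit the SQL.
	exec, err := client.ExecuteStatement(ctx, &redshiftdata.ExecuteStatementInput{
		WorkgroupName: aws.String("default-workgroup"),
		Database:      aws.String("dev"),
		Sql:           aws.String("SELECT COUNT(*) FROM public.events"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. DescribeStatement — poll until the statement finishes.
	for {
		desc, err := client.DescribeStatement(ctx,
			&redshiftdata.DescribeStatementInput{Id: exec.Id})
		if err != nil {
			log.Fatal(err)
		}
		if desc.Status == types.StatusStringFinished {
			break
		}
		if desc.Status == types.StatusStringFailed {
			log.Fatalf("query failed: %s", aws.ToString(desc.Error))
		}
		time.Sleep(500 * time.Millisecond)
	}

	// 3. GetStatementResult — fetch the rows.
	res, err := client.GetStatementResult(ctx,
		&redshiftdata.GetStatementResultInput{Id: exec.Id})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(res.Records), "rows")
}
```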
Data Type Handling
Redshift types are automatically normalized:
- INTEGER, BIGINT, SMALLINT → INT64
- VARCHAR, TEXT, CHAR → STRING
- DECIMAL, NUMERIC → FLOAT64 (parsed from column metadata, not string guessing)
- BOOLEAN → BOOL
- TIMESTAMP, TIMESTAMPTZ → TIMESTAMP
System Table Filtering
The agent automatically excludes Redshift system tables from discovery:
- pg_* tables (PostgreSQL catalog)
- stl_* tables (system log)
- svv_* tables (system views)
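As a rough illustration of both behaviors, a hedged Go sketch (the helper names are hypothetical, not DecisionBox's actual internals):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeRedshiftType mirrors the mapping list above; the shipped
// implementation may differ in edge cases.
func normalizeRedshiftType(t string) string {
	switch strings.ToUpper(t) {
	case "INTEGER", "BIGINT", "SMALLINT":
		return "INT64"
	case "VARCHAR", "TEXT", "CHAR":
		return "STRING"
	case "DECIMAL", "NUMERIC":
		return "FLOAT64" // precision and scale come from column metadata
	case "BOOLEAN":
		return "BOOL"
	case "TIMESTAMP", "TIMESTAMPTZ":
		return "TIMESTAMP"
	default:
		return "STRING" // conservative fallback
	}
}

// isRedshiftSystemTable applies the discovery exclusions above.
func isRedshiftSystemTable(name string) bool {
	for _, prefix := range []string{"pg_", "stl_", "svv_"} {
		if strings.HasPrefix(name, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(normalizeRedshiftType("numeric"))    // FLOAT64
	fmt.Println(isRedshiftSystemTable("svv_tables")) // true
}
```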
Snowflake
Prerequisites
- A Snowflake account (trial or production)
- Username with access to the target database and schema
- A virtual warehouse (e.g., COMPUTE_WH)
Dashboard Setup
- Select Snowflake as warehouse provider
- Fill in:
  - Account Identifier: Your Snowflake account (e.g., ORGNAME-ACCOUNTNAME)
  - Username: Snowflake user
  - Warehouse: Virtual warehouse name (e.g., COMPUTE_WH)
  - Database: Database name (e.g., ANALYTICS_DB)
- Optionally set Schema (default: PUBLIC) and Role
- Enter Datasets: Schema names (e.g., PUBLIC)
Authentication
When creating a project, select one of the available auth methods:
Username / Password — Enter your Snowflake password.
Key Pair (JWT) — Recommended for production.
- Generate an RSA key pair:

```sh
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```

- Assign the public key to your Snowflake user:

```sql
ALTER USER my_user SET RSA_PUBLIC_KEY='MIIBIjANBg...';
```

- In the project creation form, select Key Pair (JWT) and paste the PEM private key content.
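For context, a hedged Go sketch of what key-pair auth looks like at the driver level with the official gosnowflake driver, assuming the PKCS#8 file generated above (DecisionBox does this for you; account, user, and names are placeholders):

```go
package main

import (
	"crypto/rsa"
	"crypto/x509"
	"database/sql"
	"encoding/pem"
	"log"
	"os"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	// Parse the PKCS#8 private key generated above.
	pemBytes, err := os.ReadFile("rsa_key.p8")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	key, err := x509.ParsePKCS8PrivateKey(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Build a DSN that authenticates with a JWT signed by the key.
	dsn, err := sf.DSN(&sf.Config{
		Account:       "ORGNAME-ACCOUNTNAME",
		User:          "my_user",
		Authenticator: sf.AuthTypeJwt,
		PrivateKey:    key.(*rsa.PrivateKey),
		Warehouse:     "COMPUTE_WH",
		Database:      "ANALYTICS_DB",
		Schema:        "PUBLIC",
	})
	if err != nil {
		log.Fatal(err)
	}

	db, err := sql.Open("snowflake", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```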
Data Type Handling
Snowflake types are automatically normalized:
- NUMBER, INT, BIGINT, SMALLINT, TINYINT, BYTEINT → INT64 (in schema metadata)
- FLOAT, DOUBLE, REAL, DECIMAL(p,s), NUMERIC(p,s) → FLOAT64
- In query results, NUMBER values with decimals are returned as FLOAT64 (the driver reports actual precision)
- VARCHAR, STRING, CHAR, TEXT → STRING
- BOOLEAN → BOOL
- DATE → DATE
- TIMESTAMP_NTZ, TIMESTAMP_LTZ, TIMESTAMP_TZ → TIMESTAMP
- VARIANT, OBJECT, ARRAY → RECORD (JSON string in query results)
- BINARY, VARBINARY → BYTES
Schema Metadata
The provider uses INFORMATION_SCHEMA for table listing and column metadata.
Row counts come from INFORMATION_SCHEMA.TABLES.ROW_COUNT — no full-table scans needed.
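For example, the equivalent metadata query at the driver level (the DSN and credentials are placeholders; ROW_COUNT can be NULL for views, hence the nullable scan):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/snowflakedb/gosnowflake" // registers the "snowflake" driver
)

func main() {
	// DSN shape: user:password@account/database/schema?warehouse=...
	db, err := sql.Open("snowflake",
		"my_user:secret@ORGNAME-ACCOUNTNAME/ANALYTICS_DB/PUBLIC?warehouse=COMPUTE_WH")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Row counts straight from metadata; no table scans.
	rows, err := db.Query(
		"SELECT TABLE_NAME, ROW_COUNT FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'PUBLIC'")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var name string
		var count sql.NullInt64
		if err := rows.Scan(&name, &count); err != nil {
			log.Fatal(err)
		}
		fmt.Println(name, count.Int64)
	}
}
```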
Cost
Snowflake charges per-second based on warehouse size (credits per hour). There is no dry-run API, so the cost estimation feature is not available for Snowflake.
PostgreSQL
Prerequisites
- A PostgreSQL 12+ server accessible from the DecisionBox deployment
- A database user with read access to the target schema
- SSL configured (recommended for remote connections)
Dashboard Setup
- Select PostgreSQL as warehouse provider
- Fill in:
  - Host: Database hostname (e.g., db.example.com)
  - Port: Database port (default: 5432)
  - Database: Database name
  - Username: Database user
- Optionally set Schema (default: public) and SSL Mode (default: require)
- Enter Datasets: Schema names (e.g., public)
Authentication
When creating a project, select one of the available auth methods:
Username / Password — Enter host, port, database, username, and password.
The connection uses the lib/pq driver with the sslmode you configure.
Connection String — For advanced configurations (Heroku, RDS, Cloud SQL, Supabase). Enter a full PostgreSQL connection string:
```
postgres://user:password@host:5432/dbname?sslmode=require
```
This method supports all lib/pq DSN parameters including sslmode, connect_timeout, search_path, and application_name.
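A minimal lib/pq sketch showing the same DSN in use (host and credentials are placeholders):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
	// lib/pq accepts either a URL DSN like this or key=value pairs.
	db, err := sql.Open("postgres",
		"postgres://user:password@db.example.com:5432/dbname?sslmode=require&application_name=decisionbox&connect_timeout=10")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// sql.Open is lazy; Ping forces an actual connection and SSL handshake.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```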
SSL Mode
| Mode | Description |
|---|---|
| disable | No SSL (only for localhost or trusted networks) |
| allow | Try non-SSL first, fall back to SSL |
| prefer | Try SSL first, fall back to non-SSL (common RDS default) |
| require | SSL required, no certificate verification (default) |
| verify-ca | SSL with CA certificate verification |
| verify-full | SSL with CA + hostname verification (most secure) |
Data Type Handling
PostgreSQL types are automatically normalized:
- INTEGER, BIGINT, SMALLINT, SERIAL, BIGSERIAL → INT64
- REAL, DOUBLE PRECISION → FLOAT64
- NUMERIC, DECIMAL → FLOAT64 (parsed from driver's []byte representation)
- VARCHAR, TEXT, CHAR, UUID, INET, CIDR, INTERVAL, MONEY → STRING
- BOOLEAN → BOOL
- DATE → DATE
- TIMESTAMP, TIMESTAMPTZ → TIMESTAMP
- JSON, JSONB → RECORD (JSON string in query results)
- BYTEA → BYTES
- ARRAY types → STRING (PostgreSQL text representation, e.g., {1,2,3})
Schema Metadata
The provider uses information_schema.tables and information_schema.columns for table listing and column metadata.
Row counts come from pg_class.reltuples — an estimate maintained by PostgreSQL's autovacuum, no full-table scans needed.
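The equivalent catalog query, sketched in Go (connection details are placeholders):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres",
		"postgres://user:password@db.example.com:5432/dbname?sslmode=require")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// reltuples is a planner estimate kept fresh by autovacuum/ANALYZE,
	// so this avoids COUNT(*) scans over large tables.
	rows, err := db.Query(`
		SELECT c.relname, c.reltuples::bigint AS approx_rows
		  FROM pg_class c
		  JOIN pg_namespace n ON n.oid = c.relnamespace
		 WHERE n.nspname = 'public' AND c.relkind = 'r'`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var name string
		var approx int64
		if err := rows.Scan(&name, &approx); err != nil {
			log.Fatal(err)
		}
		fmt.Println(name, approx)
	}
}
```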
Cost
PostgreSQL is self-hosted or managed (RDS, Cloud SQL, AlloyDB, Supabase). There is no per-query cost model, so the cost estimation feature is not available.
Databricks
Prerequisites
- A Databricks workspace (AWS, Azure, or GCP)
- A SQL warehouse (serverless or classic)
- Unity Catalog enabled with access to the target catalog and schema
Dashboard Setup
- Select Databricks as warehouse provider
- Fill in:
  - Server Hostname: Your workspace hostname (e.g., xxx.cloud.databricks.com)
  - HTTP Path: SQL warehouse endpoint (e.g., /sql/1.0/warehouses/xxx)
  - Catalog: Unity Catalog catalog name (e.g., main)
- Optionally set Schema (default: default)
- Enter Datasets: Schema names (e.g., default)
Authentication
When creating a project, select one of the available auth methods:
Personal Access Token (PAT) — Simplest setup.
- In Databricks, go to Settings > Developer > Access tokens
- Generate a new token
- In the project creation form, select Personal Access Token and paste the token
OAuth M2M (Service Principal) — Recommended for production.
- Create a service principal in your Databricks workspace
- Create an OAuth secret for the service principal
- Grant the service principal access to your SQL warehouse and catalog
- In the project creation form, select OAuth M2M and enter client_id:client_secret
Unity Catalog Namespace
Databricks uses a 3-level namespace: catalog.schema.table.
The agent qualifies table references as catalog.information_schema.tables for metadata queries.
SQL queries use the schema set in the project's datasets configuration.
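A sketch of that qualification at the driver level, using the open-source databricks-sql-go driver (hostname, HTTP path, and token are placeholders; DecisionBox's internals may differ):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/databricks/databricks-sql-go" // registers the "databricks" driver
)

func main() {
	// DSN shape: token:<PAT>@<hostname>:443<http-path>
	db, err := sql.Open("databricks",
		"token:dapiXXXXXXXX@xxx.cloud.databricks.com:443/sql/1.0/warehouses/xxx")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Fully qualified 3-level reference: catalog.schema.table.
	row := db.QueryRow("SELECT COUNT(*) FROM main.default.events")
	var n int64
	if err := row.Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println(n)
}
```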
Data Type Handling
Databricks types are automatically normalized:
- TINYINT, SMALLINT, INT, BIGINT → INT64
- FLOAT, DOUBLE → FLOAT64
- DECIMAL(p,s) → FLOAT64 (parsed from driver's string representation)
- STRING, CHAR, VARCHAR → STRING
- BOOLEAN → BOOL
- DATE → DATE
- TIMESTAMP, TIMESTAMP_NTZ → TIMESTAMP
- BINARY → BYTES
- STRUCT, ARRAY, MAP, VARIANT → RECORD (JSON string in query results)
- INTERVAL → STRING
SQL Dialect
Databricks SQL extends ANSI SQL with:
- QUALIFY — Filter window function results directly (like Snowflake)
- PIVOT / UNPIVOT — Rotate rows to columns and vice versa
- explode / explode_outer — Expand arrays and maps into rows
- Delta time travel — TIMESTAMP AS OF and VERSION AS OF
- Java SimpleDateFormat date patterns — Use yyyy-MM-dd (not YYYY-MM-DD)
Cost
Databricks SQL warehouses charge per DBU (Databricks Unit) based on cluster size and runtime. There is no per-query cost estimation API, so the cost estimation feature is not available.
Cross-Cloud Authentication
DecisionBox supports accessing warehouses from a different cloud:
| Scenario | Auth Method | How |
|---|---|---|
| BigQuery from AWS/Azure | Service Account Key | Paste SA key JSON or WIF credential config |
| BigQuery via WIF (keyless) | Service Account Key | Paste WIF credential config from gcloud iam workload-identity-pools create-cred-config |
| Redshift from GCP/Azure | Access Keys | Paste ACCESS_KEY_ID:SECRET_ACCESS_KEY |
| Redshift cross-account | Assume Role | Enter Role ARN + External ID |
| Snowflake from any cloud | Password or Key Pair | Paste password or PEM private key |
| PostgreSQL from any cloud | Password or Connection String | Enter credentials or full DSN |
| Databricks from any cloud | PAT or OAuth M2M | Paste token or client_id:client_secret |
| Any from local dev | ADC / IAM Role | Configure cloud CLI (gcloud auth, aws configure) |
The key concept: warehouse credentials are stored encrypted via the secret provider. When creating a project, select the appropriate auth method and enter credentials inline. The agent reads credentials from the secret provider before initializing the warehouse provider.
Next Steps
- Configuration Reference — All environment variables
- Configuring Secrets — Secret provider setup
- Adding Warehouse Providers — Support a new warehouse