Connecting your database

IP Restrictions

dbt Cloud will always connect to your warehouse from the following IP addresses. Be sure to allow traffic from these IPs in your firewall, and include them in any database grants.

  • 52.45.144.63
  • 54.81.134.249
  • 52.22.161.231

Connecting to Redshift and Postgres

The following fields are required when creating a Postgres or Redshift connection:

Field | Description | Examples
Host Name | The hostname of the Postgres or Redshift database to connect to. This can be either a hostname or an IP address. | xxx.us-east-1.amazonaws.com
Port | Usually 5432 (Postgres) or 5439 (Redshift) | 5439
Database | The logical database to connect to and run queries against. | analytics
Configuring a Redshift connection
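
For reference, these fields map onto a dbt Core profiles.yml entry roughly as follows. This is only a sketch: the profile name, user, password, and schema values are placeholders, and a Postgres database would use type: postgres instead of type: redshift.

  my_profile:                             # placeholder profile name
    target: dev
    outputs:
      dev:
        type: redshift                    # use "postgres" for a Postgres database
        host: xxx.us-east-1.amazonaws.com # hostname or IP address
        port: 5439                        # usually 5432 for Postgres, 5439 for Redshift
        dbname: analytics                 # the logical database
        schema: analytics                 # placeholder target schema
        user: dbt_user                    # placeholder
        password: "<password>"            # placeholder
        threads: 4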

Connecting via an SSH Tunnel

To connect to a Postgres or Redshift instance via an SSH tunnel, check the "Use SSH Tunnel" option when creating your connection. When configuring the tunnel, you'll need to supply the hostname, username, and port for the bastion server.

Once the connection is saved, a public key will be generated and displayed for the Connection. You can copy this public key to the bastion server to authorize dbt Cloud to connect to your database via the bastion server.

A generated public key for a Redshift connection

Connecting to Snowflake

The following fields are required when creating a Snowflake connection:

Field | Description | Examples
Account | The Snowflake account to connect to. Take a look here to determine what the account field should look like based on your region. | db5261993, db5261993.east-us-2.azure
Role | An optional field indicating what role should be assumed after connecting to Snowflake | transformer
Database | The logical database to connect to and run queries against. | analytics
Warehouse | The virtual warehouse to use for running queries. | transforming

Snowflake connection details

Username / Password

Available in: Development environments, Deployment environments

The Username / Password auth method is the simplest way to authenticate Development or Deployment credentials in a dbt project. Enter your Snowflake username (specifically, the login_name) and the corresponding user's Snowflake password, and dbt Cloud will run queries against Snowflake on behalf of that user.

Snowflake username/password auth
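
For reference, the connection and credential fields above map onto the output block of a dbt Core profiles.yml entry roughly like this (a sketch; the user, password, and schema values are placeholders):

  type: snowflake
  account: db5261993            # or e.g. db5261993.east-us-2.azure, depending on region
  user: dbt_user                # placeholder login_name
  password: "<password>"        # placeholder
  role: transformer             # optional
  database: analytics
  warehouse: transforming
  schema: analytics             # placeholder target schema
  threads: 4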

Key Pair

Available in: Development environments, Deployment environments

The Keypair auth method uses Snowflake's Key Pair Authentication to authenticate Development or Deployment credentials for a dbt Cloud project.

After generating an encrypted key pair, be sure to set the rsa_public_key for the Snowflake user to authenticate in dbt Cloud:

alter user jsmith set rsa_public_key='MIIBIjANBgkqh...';

Finally, set the "Private Key" and "Private Key Passphrase" fields in the "Edit Credentials" page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair.

Note: You can optionally use a private key that has no passphrase by leaving the "Private Key Passphrase" field blank.

Snowflake keypair auth
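
For comparison, key pair authentication in the output block of a dbt Core profiles.yml entry looks roughly like this (a sketch; the key path, passphrase, and schema values are placeholders):

  type: snowflake
  account: db5261993
  user: jsmith
  private_key_path: /path/to/rsa_key.p8   # placeholder path to the private key
  private_key_passphrase: "<passphrase>"  # omit if the key has no passphrase
  role: transformer                       # optional
  database: analytics
  warehouse: transforming
  schema: analytics                       # placeholder target schema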

Snowflake OAuth

Available in: Development environments, Enterprise plans only

The OAuth auth method lets dbt Cloud run development queries on behalf of a Snowflake user without storing a Snowflake password in dbt Cloud. For more information on configuring a Snowflake OAuth connection in dbt Cloud, see the docs on setting up Snowflake OAuth.

Configuring Snowflake OAuth for a connection

Connecting to BigQuery

JSON keyfile

Uploading a service account JSON keyfile

While the fields in a BigQuery connection can be specified manually, we recommend uploading a service account JSON keyfile to quickly and accurately configure a connection to BigQuery.

Uploading a JSON keyfile should populate the following fields:

  • Project id
  • Private key id
  • Private key
  • Client email
  • Client id
  • Auth uri
  • Token uri
  • Auth provider x509 cert url
  • Client x509 cert url

In addition to these fields, there are two other optional fields that can be configured in a BigQuery connection:

Field | Description | Examples
Timeout | Deprecated; exists for backwards compatibility with older versions of dbt and will be removed in the future. | 300
Location | The location where dbt should create datasets. | US, EU
A valid BigQuery connection
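
For reference, the same connection expressed as the output block of a dbt Core profiles.yml entry looks roughly like this (a sketch; the project, dataset, and keyfile path are placeholders):

  type: bigquery
  method: service-account
  project: my-gcp-project          # placeholder GCP project id
  dataset: analytics               # placeholder dataset for dbt to build models in
  keyfile: /path/to/keyfile.json   # placeholder path to the service account JSON keyfile
  location: US                     # optional
  threads: 4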

BigQuery OAuth

Available in: Development environments, Enterprise plans only

The OAuth auth method lets dbt Cloud run development queries on behalf of a BigQuery user without storing a BigQuery service account keyfile in dbt Cloud. For more information on the initial configuration of a BigQuery OAuth connection in dbt Cloud, see the docs on setting up BigQuery OAuth.

As an end user, if your organization has set up BigQuery OAuth, you can link a project with your personal BigQuery account in your personal Profile in dbt Cloud, like so:

Link Button in dbt Cloud Credentials Screen

Connecting to Databricks

ODBC

dbt Cloud supports connecting to Databricks using either a Cluster or a SQL Endpoint. Depending on how you connect to Databricks, supply exactly one of the Cluster or Endpoint values; setting both is not allowed.

The following fields are available when creating a Databricks connection:

Field | Description | Examples
Host Name | The hostname of the Databricks account to connect to | avc-def1234ghi-9999.cloud.databricks.com
Port | The port to connect to Databricks for this connection | 443
Organization | Optional (default: 0) | 0123456789
Cluster | The ID of the cluster to connect to (required if using a cluster) | 1234-567890-abc12345
Endpoint | The ID of the endpoint to connect to (required if using Databricks SQL) | 0123456789
User | Optional | dbt_cloud_user
Configuring a Databricks connection
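
For reference, the dbt Core equivalent uses the dbt-spark adapter's ODBC method. A rough sketch of the profiles.yml output block, in which the driver path, token, and schema are placeholders:

  type: spark
  method: odbc
  driver: /opt/simba/spark/lib/64/libsparkodbc_sb64.so   # placeholder path to the ODBC driver
  host: avc-def1234ghi-9999.cloud.databricks.com
  port: 443
  token: "<personal access token>"                       # placeholder
  organization: "0123456789"                             # optional
  endpoint: "0123456789"       # or: cluster: 1234-567890-abc12345 (supply one, not both)
  schema: analytics            # placeholder target schema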

Connecting to Apache Spark

HTTP and Thrift

dbt Cloud supports connecting to an Apache Spark cluster using either the HTTP method or the Thrift method. Note: While the HTTP method can be used to connect to an all-purpose Databricks cluster, the ODBC method is recommended for all Databricks connections. For further details on configuring these connection parameters, see the dbt-spark documentation.

The following fields are available when creating an Apache Spark connection using the HTTP and Thrift connection methods:

Field | Description | Examples
Host Name | The hostname of the Spark cluster to connect to | yourorg.sparkhost.com
Port | The port to connect to Spark on | 443
Organization | Optional (default: 0) | 0123456789
Cluster | The ID of the cluster to connect to | 1234-567890-abc12345
Connection Timeout | Number of seconds after which to time out a connection | 10
Connection Retries | Number of times to attempt connecting to the cluster before failing | 10
User | Optional | dbt_cloud_user
Auth | Optional, supply if using Kerberos | KERBEROS
Kerberos Service Name | Optional, supply if using Kerberos | hive
Configuring a Spark connection
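
For reference, a Thrift connection expressed as the output block of a dbt Core profiles.yml entry looks roughly like this (a sketch; the schema value is a placeholder, and the Kerberos fields only apply if your cluster uses Kerberos):

  type: spark
  method: thrift                 # or "http" for the HTTP method
  host: yourorg.sparkhost.com
  port: 443
  schema: analytics              # placeholder target schema
  connect_timeout: 10            # seconds to wait before a connection attempt times out
  connect_retries: 10            # connection attempts before failing
  user: dbt_cloud_user           # optional
  auth: KERBEROS                 # optional, Kerberos only
  kerberos_service_name: hive    # optional, Kerberos only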