| page_title | airbyte_destination - terraform-provider-airbyte |
|---|---|
| subcategory | |
| description | Manages an Airbyte destination connector. This is the generic resource for all destination types. |
Manages an Airbyte destination connector.
This is the generic destination resource that works with any Airbyte destination connector type. Pass the connector's definition_id and a JSON configuration blob. Use the airbyte_connector_configuration data source to resolve connector names to definition IDs and to get clean Terraform diffs on non-sensitive values.
~> Migrating from typed resources? If you are upgrading from a pre-1.0 provider version or moving from a typed resource like airbyte_destination_bigquery, see the Migrating to 1.0 guide for step-by-step instructions using Terraform's moved block.
Use the airbyte_connector_configuration data source to resolve the connector's definition_id automatically, validate configuration at plan time, and separate sensitive from non-sensitive values for clean diffs:
```terraform
data "airbyte_connector_configuration" "bigquery" {
  connector_name    = "destination-bigquery"
  connector_version = "2.9.4"

  configuration = {
    project_id       = "my-gcp-project"
    dataset_id       = "my_dataset"
    dataset_location = "US"

    loading_method = {
      method          = "GCS Staging"
      gcs_bucket_name = "my-staging-bucket"
      gcs_bucket_path = "airbyte-staging"
    }
  }

  configuration_secrets = {
    credentials_json = var.bigquery_credentials
  }
}

resource "airbyte_destination" "bigquery" {
  name          = "BigQuery Production"
  workspace_id  = var.workspace_id
  definition_id = data.airbyte_connector_configuration.bigquery.definition_id
  configuration = data.airbyte_connector_configuration.bigquery.configuration_json
}
```

For simpler cases where you don't need the data source, pass JSON configuration directly. The entire configuration attribute is sensitive, so all values are hidden in plan output:
```terraform
resource "airbyte_destination" "s3" {
  name          = "S3 Data Lake"
  workspace_id  = var.workspace_id
  definition_id = "4816b78f-1489-44c1-9060-4b19d5fa9571"

  configuration = jsonencode({
    s3_bucket_name    = "my-data-lake"
    s3_bucket_path    = "airbyte"
    s3_bucket_region  = "us-east-1"
    access_key_id     = var.aws_access_key
    secret_access_key = var.aws_secret_key
    format = {
      format_type = "Parquet"
    }
  })
}
```

Typed destination resources (airbyte_destination_bigquery, airbyte_destination_snowflake, etc.) are replaced by this generic airbyte_destination resource in provider 1.0+. Use a moved block (Terraform >= 1.8) for a zero-downtime migration:
```terraform
moved {
  from = airbyte_destination_bigquery.my_dest
  to   = airbyte_destination.my_dest
}

resource "airbyte_destination" "my_dest" {
  name          = "BigQuery"
  workspace_id  = var.workspace_id
  definition_id = "22f6c74f-5699-40ff-833c-4a879ea40133"

  configuration = jsonencode({
    project_id       = "my-gcp-project"
    dataset_id       = "my_dataset"
    dataset_location = "US"
    credentials_json = var.bigquery_credentials
  })
}
```

For full details, including alternative methods for older Terraform versions, see the Migrating to 1.0 guide.
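For Terraform versions without `moved`-block support across resource types, one common alternative (sketched here as an assumption; the migration guide is authoritative) is to drop the old typed resource from state and re-import the existing destination at its new address, so the destination itself is never destroyed:

```shell
# Removing from state does not delete the destination in Airbyte.
terraform state rm airbyte_destination_bigquery.my_dest

# Re-import the same destination under the generic resource address.
# <destination-id> is the Airbyte destination UUID (e.g. from the Airbyte UI or API).
terraform import airbyte_destination.my_dest <destination-id>
```

After the import, run terraform plan and confirm it shows no changes before removing the old typed resource block from your configuration.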
### Required

- `configuration` (String, Sensitive) The values required to configure the destination. The schema for this must match the schema returned by destination_definition_specifications/get for the destinationDefinition. Parsed as JSON.
- `name` (String) Name of the destination, e.g. dev-mysql-instance.
- `workspace_id` (String) Requires replacement if changed.

### Optional

- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
- `resource_allocation` (Attributes) Actor or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level. (see below for nested schema)

### Read-Only

- `created_at` (Number)
- `destination_id` (String)
- `destination_type` (String)

### Nested Schema for `resource_allocation`

Optional:

- `default` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
- `job_specific` (Attributes List) (see below for nested schema)

### Nested Schema for `resource_allocation.default`

Optional:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)

### Nested Schema for `resource_allocation.job_specific`

Optional:

- `job_type` (String) Enum that describes the different types of jobs that the platform runs. Not Null; must be one of ["get_spec", "check_connection", "discover_schema", "sync", "reset_connection", "connection_updater", "replicate"].
- `resource_requirements` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). Not Null. (see below for nested schema)

### Nested Schema for `resource_allocation.job_specific.resource_requirements`

Optional:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
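As an illustration, the nested attributes above might be combined as follows. This is a hedged sketch: the resource values and the sync-only override are example choices, not recommendations.

```terraform
resource "airbyte_destination" "bigquery" {
  name          = "BigQuery"
  workspace_id  = var.workspace_id
  definition_id = "22f6c74f-5699-40ff-833c-4a879ea40133"

  configuration = jsonencode({
    project_id       = "my-gcp-project"
    dataset_id       = "my_dataset"
    dataset_location = "US"
    credentials_json = var.bigquery_credentials
  })

  resource_allocation = {
    # Default requirements applied to all jobs for this destination...
    default = {
      cpu_request    = "0.5"
      cpu_limit      = "1"
      memory_request = "1Gi"
      memory_limit   = "2Gi"
    }
    # ...overridden here for sync jobs only.
    job_specific = [{
      job_type = "sync"
      resource_requirements = {
        memory_request = "2Gi"
        memory_limit   = "4Gi"
      }
    }]
  }
}
```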
Import is supported using the following syntax:
```shell
terraform import airbyte_destination.my_airbyte_destination "..."
```

In Terraform v1.5.0 and later, the import block can be used:

```terraform
import {
  to = airbyte_destination.my_airbyte_destination
  id = "..."
}
```