Copy a BigQuery table or partition to another one.
type: "io.kestra.plugin.gcp.bigquery.Copy"

Copy a single partition to a destination table:

```yaml
id: gcp_bq_copy
namespace: company.team

tasks:
  - id: copy
    type: io.kestra.plugin.gcp.bigquery.Copy
    operationType: COPY
    sourceTables:
      - "my_project.my_dataset.my_table$20130908"
    destinationTable: "my_project.my_dataset.my_table"
```
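The example above copies one partition. Copying a whole table works the same way; the sketch below assumes illustrative project, dataset, and table names:

```yaml
id: gcp_bq_copy_table
namespace: company.team

tasks:
  - id: copy_full_table
    type: io.kestra.plugin.gcp.bigquery.Copy
    operationType: COPY
    sourceTables:
      - "my_project.my_dataset.my_table"
    destinationTable: "my_project.my_backup_dataset.my_table"
```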
Properties:

- The destination table (dynamic: yes). If not provided, a new table is created.
- The source tables (dynamic: yes). Can be tables or partitions.
- Whether the job is allowed to create tables (dynamic: yes; valid values: CREATE_IF_NEEDED, CREATE_NEVER).
- Whether the job should be a dry run (dynamic: no; default: false). A valid query will mostly return an empty response with some processing statistics, while an invalid query will return the same error as it would in an actual run.
- The GCP service account to impersonate (dynamic: yes).
- Job timeout (dynamic: yes; type: duration). If this time limit is exceeded, BigQuery may attempt to terminate the job.
- The labels associated with this job (dynamic: yes). You can use these to organize and group your jobs. Label keys and values can be no longer than 63 characters and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter, and each label in the list must have a distinct key.
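As a sketch of the constraints above, labels could be attached to the copy task like this (the `labels` property name and the key/value pairs are illustrative assumptions):

```yaml
  - id: copy_with_labels
    type: io.kestra.plugin.gcp.bigquery.Copy
    sourceTables:
      - "my_project.my_dataset.my_table"
    destinationTable: "my_project.my_dataset.my_table_copy"
    labels:
      team: data-platform
      env: prod
```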
- The geographic location where the dataset should reside (dynamic: yes). This property is experimental and might be subject to change or removal. See Dataset Location.
- Sets the supported operation type in a table copy job (dynamic: yes; valid values: COPY, SNAPSHOT, RESTORE, CLONE).
  - COPY: the source and destination table have the same table type.
  - SNAPSHOT: the source table type is TABLE and the destination table type is SNAPSHOT.
  - RESTORE: the source table type is SNAPSHOT and the destination table type is TABLE.
  - CLONE: the source and destination table have the same table type, but only unique data is billed.
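For example, a snapshot of a regular table could be configured as follows (a sketch; the table names are illustrative):

```yaml
  - id: snapshot
    type: io.kestra.plugin.gcp.bigquery.Copy
    operationType: SNAPSHOT
    sourceTables:
      - "my_project.my_dataset.my_table"
    destinationTable: "my_project.my_dataset.my_table_snapshot"
```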
- The GCP project ID (dynamic: yes).
- Automatic retry for retryable BigQuery exceptions (dynamic: no). Some exceptions (especially rate limits) are not retried by default by the BigQuery client, so by default we use a transparent retry (not the Kestra one) to handle this case. The default is an exponential retry with a 5-second interval, a 15-minute maximum duration, and ten attempts.
- The messages which would trigger an automatic retry (dynamic: yes; default: ["due to concurrent update", "Retrying the job may solve the problem", "Retrying may solve the problem"]). Each message is tested as a case-insensitive substring of the full error message.
- The reasons which would trigger an automatic retry (dynamic: yes; default: ["rateLimitExceeded", "jobBackendError", "backendError", "internalError", "jobInternalError"]).
- The GCP scopes to be used (dynamic: yes; default: ["https://www.googleapis.com/auth/cloud-platform"]).
- The GCP service account (dynamic: yes).
- The action that should occur if the destination table already exists (dynamic: yes; valid values: WRITE_TRUNCATE, WRITE_TRUNCATE_DATA, WRITE_APPEND, WRITE_EMPTY).
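The create and write dispositions combine to control what happens when the destination table is missing or already populated; a sketch, assuming the createDisposition and writeDisposition property names map onto the options above:

```yaml
  - id: copy_overwrite
    type: io.kestra.plugin.gcp.bigquery.Copy
    sourceTables:
      - "my_project.my_dataset.my_table"
    destinationTable: "my_project.my_dataset.my_table_copy"
    createDisposition: CREATE_IF_NEEDED
    writeDisposition: WRITE_TRUNCATE
```

Under these assumptions, the destination table is created if it does not exist and any existing rows are overwritten.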
Outputs:

- The job id.