Which command can you use to create a new table using data in an existing table?

This document describes how to create and use standard (built-in) tables in BigQuery. For information about creating other table types, see:

  • Creating partitioned tables
  • Creating and using clustered tables

After creating a table, you can:

  • Control access to your table data
  • Get information about your tables
  • List the tables in a dataset
  • Get table metadata

For more information about managing tables, including updating table properties, copying a table, and deleting a table, see Managing tables.

Before you begin

Before creating a table in BigQuery, first:

  • Set up a project by following a BigQuery getting started guide.
  • Create a BigQuery dataset.
  • Optionally, read Introduction to tables to understand table limitations, quotas, and pricing.

Table naming

When you create a table in BigQuery, the table name must be unique per dataset. The table name can:

  • Contain up to 1,024 characters.
  • Contain Unicode characters in category L (letter), M (mark), N (number), Pc (connector, including underscore), Pd (dash), Zs (space). For more information, see General Category.

The following are all examples of valid table names: table 01, ग्राहक, 00_お客様, étudiant-01.

Caveats:

  • Some table names and table name prefixes are reserved. If you receive an error saying that your table name or prefix is reserved, then select a different name and try again.
  • If you include multiple dot operators (.) in a sequence, the duplicate operators are implicitly stripped.

    For example, this:

        project_name....dataset_name..table_name

    Becomes this:

        project_name.dataset_name.table_name

Create tables

You can create a table in BigQuery in the following ways:

  • Manually by using the Google Cloud console or the bq command-line tool's bq mk command.
  • Programmatically by calling the tables.insert API method.
  • By using the client libraries.
  • From query results.
  • By defining a table that references an external data source.
  • When you load data.
  • By using a CREATE TABLE data definition language (DDL) statement.

Required permissions

To create a table, you need the following IAM permissions:

  • bigquery.tables.create
  • bigquery.tables.updateData
  • bigquery.jobs.create

Additionally, you might require the bigquery.tables.getData permission to access the data that you write to the table.

Each of the following predefined IAM roles includes the permissions that you need in order to create a table:

  • roles/bigquery.dataEditor
  • roles/bigquery.dataOwner
  • roles/bigquery.admin (includes the bigquery.jobs.create permission)
  • roles/bigquery.user (includes the bigquery.jobs.create permission)
  • roles/bigquery.jobUser (includes the bigquery.jobs.create permission)

Additionally, if you have the bigquery.datasets.create permission, you can create and update tables in the datasets that you create.

For more information on IAM roles and permissions in BigQuery, see Predefined roles and permissions.
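As an illustration only (this command is not part of the steps in this guide), one common way to grant one of these predefined roles at the project level is with the gcloud CLI; the project ID and user email below are placeholders:

gcloud projects add-iam-policy-binding my-project-id \
    --member="user:analyst@example.com" \
    --role="roles/bigquery.dataEditor"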

Create an empty table with a schema definition

You can create an empty table with a schema definition in the following ways:

  • Enter the schema using the Google Cloud console.
  • Provide the schema inline using the bq command-line tool.
  • Submit a JSON schema file using the bq command-line tool.
  • Provide the schema in a table resource when calling the API's tables.insert method, as shown in the Go client library sample after this list.
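
The client libraries wrap the tables.insert call for you. For example, the following Go client library sample creates an empty table with an explicit schema and sets a one-year expiration; the project, dataset, and table IDs are placeholders:

import (
    "context"
    "fmt"
    "time"

    "cloud.google.com/go/bigquery"
)

// createTableExplicitSchema demonstrates creating a new BigQuery table and specifying a schema.
func createTableExplicitSchema(projectID, datasetID, tableID string) error {
    // projectID := "my-project-id"
    // datasetID := "mydatasetid"
    // tableID := "mytableid"
    ctx := context.Background()

    client, err := bigquery.NewClient(ctx, projectID)
    if err != nil {
        return fmt.Errorf("bigquery.NewClient: %v", err)
    }
    defer client.Close()

    sampleSchema := bigquery.Schema{
        {Name: "full_name", Type: bigquery.StringFieldType},
        {Name: "age", Type: bigquery.IntegerFieldType},
    }

    metaData := &bigquery.TableMetadata{
        Schema:         sampleSchema,
        ExpirationTime: time.Now().AddDate(1, 0, 0), // Table will be automatically deleted in 1 year.
    }
    tableRef := client.Dataset(datasetID).Table(tableID)
    if err := tableRef.Create(ctx, metaData); err != nil {
        return err
    }
    return nil
}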

For more information about specifying a table schema, see Specifying a schema.

After the table is created, you can load data into it or populate it by writing query results to it.

To create an empty table with a schema definition:

Console

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the Explorer pane, expand your project, and then select a dataset.
  3. In the Dataset info section, click add_box Create table.
  4. In the Create table panel, specify the following details:
    1. In the Source section, select Empty table in the Create table from list.
    2. In the Destination section, specify the following details:
      1. For Dataset, select the dataset in which you want to create the table.
      2. In the Table field, enter the name of the table that you want to create.
      3. Verify that the Table type field is set to Native table.
    3. In the Schema section, enter the schema definition. You can enter schema information manually by using one of the following methods:
      • Option 1: Click Edit as text and paste the schema in the form of a JSON array. When you use a JSON array, you generate the schema using the same process as creating a JSON schema file. You can view the schema of an existing table in JSON format by entering the following command:
            bq show --format=prettyjson dataset.table
            
      • Option 2: Click add_box Add field and enter the table schema. Specify each field's Name, Type, and Mode.
    4. Optional: Specify Partition and cluster settings. For more information, see Creating partitioned tables and Creating and using clustered tables.
    5. Optional: In the Advanced options section, if you want to use a customer-managed encryption key, then select the Use a customer-managed encryption key (CMEK) option. By default, BigQuery encrypts customer content stored at rest by using a Google-managed key.
    6. Click Create table.
Note: When you create an empty table using the Google Cloud console, you cannot add a label, description, or expiration time. You can add these optional properties when you create a table using the bq command-line tool or API. After you create a table in the Google Cloud console, you can add an expiration, description, and labels.

SQL

The following example creates a table named newtable that expires on January 1, 2023:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    CREATE TABLE mydataset.newtable (
      x INT64 OPTIONS (description = 'An optional INTEGER field'),
      y STRUCT <
        a ARRAY <STRING> OPTIONS (description = 'A repeated STRING field'),
        b BOOL
      >
    ) OPTIONS (
        expiration_timestamp = TIMESTAMP '2023-01-01 00:00:00 UTC',
        description = 'a table that expires in 2023',
        labels = [('org_unit', 'development')]);
    
  3. Click play_circle Run.

For more information about how to run queries, see Running interactive queries.
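
To answer the question posed at the top of this page: to create a new table from data in an existing table, you can use a CREATE TABLE ... AS SELECT (CTAS) statement, or write query results to a destination table. A minimal sketch, assuming mydataset.mytable already exists with qtr, sales, and year columns (the new table name is illustrative):

CREATE TABLE mydataset.sales_2023 AS
SELECT qtr, sales, year
FROM mydataset.mytable
WHERE year = '2023';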

bq

Use the bq mk command with the --table or -t flag. You can supply table schema information inline or via a JSON schema file. Optional parameters include:

  • --expiration
  • --description
  • --time_partitioning_field
  • --time_partitioning_type
  • --range_partitioning
  • --clustering_fields
  • --destination_kms_key
  • --label

--time_partitioning_field, --time_partitioning_type, --range_partitioning, --clustering_fields, and --destination_kms_key are not demonstrated here. Refer to the following links for more information on these optional parameters:

  • For more information about --time_partitioning_field, --time_partitioning_type, and --range_partitioning, see partitioned tables.
  • For more information about --clustering_fields, see clustered tables.
  • For more information about --destination_kms_key, see customer-managed encryption keys.

If you are creating a table in a project other than your default project, add the project ID to the dataset in the following format: project_id:dataset.

To create an empty table in an existing dataset with a schema definition, enter the following:

bq mk \
--table \
--expiration integer \
--description description \
--label key_1:value_1 \
--label key_2:value_2 \
project_id:dataset.table \
schema

Replace the following:

  • integer is the default lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current UTC time plus the integer value. If you set the expiration time when you create a table, the dataset's default table expiration setting is ignored.
  • description is a description of the table in quotes.
  • key_1:value_1 and key_2:value_2 are key-value pairs that specify labels.
  • project_id is your project ID.
  • dataset is a dataset in your project.
  • table is the name of the table you're creating.
  • schema is an inline schema definition in the format field:data_type,field:data_type or the path to the JSON schema file on your local machine.

When you specify the schema on the command line, you cannot include a RECORD (STRUCT) type, you cannot include a column description, and you cannot specify the column's mode. All modes default to NULLABLE. To include descriptions, modes, and RECORD types, supply a JSON schema file instead.
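
For reference, a JSON schema file is a JSON array of column definitions. The following minimal sketch uses the same field names as the inline example below; the modes and descriptions are illustrative:

[
  {
    "name": "qtr",
    "type": "STRING",
    "mode": "REQUIRED",
    "description": "quarter"
  },
  {
    "name": "sales",
    "type": "FLOAT",
    "mode": "NULLABLE",
    "description": "sales amount"
  },
  {
    "name": "year",
    "type": "STRING",
    "mode": "NULLABLE",
    "description": "sales year"
  }
]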

Examples:

Enter the following command to create a table using an inline schema definition. This command creates a table named mytable in mydataset in your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to "This is my table", and the label is set to organization:development. The command uses the -t shortcut instead of --table. The schema is specified inline as qtr:STRING,sales:FLOAT,year:STRING.

bq mk \
  -t \
  --expiration 3600 \
  --description "This is my table" \
  --label organization:development \
  mydataset.mytable \
  qtr:STRING,sales:FLOAT,year:STRING

Enter the following command to create a table using a JSON schema file. This command creates a table named mytable in mydataset in your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to This is my table, and the label is set to organization:development. The path to the schema file is /tmp/myschema.json.

bq mk \
  --table \
  --expiration 3600 \
  --description "This is my table" \
  --label organization:development \
  mydataset.mytable \
  /tmp/myschema.json

Enter the following command to create a table using a JSON schema file. This command creates a table named mytable in mydataset in myotherproject, not your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to This is my table, and the label is set to organization:development. The path to the schema file is /tmp/myschema.json.

bq mk \
  --table \
  --expiration 3600 \
  --description "This is my table" \
  --label organization:development \
  myotherproject:mydataset.mytable \
  /tmp/myschema.json

After the table is created, you can update the table's expiration, description, and labels. You can also modify the schema definition.

API

Call the tables.insert method with a defined table resource.

C#

Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation.

View on GitHub Feedback


using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCreateTable
{
    public BigQueryTable CreateTable(
        string projectId = "your-project-id",
        string datasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        var dataset = client.GetDataset(datasetId);
        // Create schema for new table.
        var schema = new TableSchemaBuilder
        {
            { "full_name", BigQueryDbType.String },
            { "age", BigQueryDbType.Int64 }
        }.Build();
        // Create the table
        return dataset.CreateTable(tableId: "your_table_id", schema: schema);
    }
}

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.

View on GitHub Feedback

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// createTableExplicitSchema demonstrates creating a new BigQuery table and specifying a schema.
func createTableExplicitSchema(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydatasetid"
	// tableID := "mytableid"
	ctx := context.Background()

	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	sampleSchema := bigquery.Schema{
		{Name: "full_name", Type: bigquery.StringFieldType},
		{Name: "age", Type: bigquery.IntegerFieldType},
	}

	metaData := &bigquery.TableMetadata{
		Schema:         sampleSchema,
		ExpirationTime: time.Now().AddDate(1, 0, 0), // Table will be automatically deleted in 1 year.
	}
	tableRef := client.Dataset(datasetID).Table(tableID)
	if err := tableRef.Create(ctx, metaData); err != nil {
		return err
	}
	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

View on GitHub Feedback

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

public class CreateTable {

  public static void runCreateTable() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    Schema schema =
        Schema.of(
            Field.of("stringField", StandardSQLTypeName.STRING),
            Field.of("booleanField", StandardSQLTypeName.BOOL));
    createTable(datasetName, tableName, schema);
  }

  public static void createTable(String datasetName, String tableName, Schema schema) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId tableId = TableId.of(datasetName, tableName);
      TableDefinition tableDefinition = StandardTableDefinition.of(schema);
      TableInfo tableInfo = TableInfo.newBuilder(tableId, tableDefinition).build();

      bigquery.create(tableInfo);
      System.out.println("Table created successfully");
    } catch (BigQueryException e) {
      System.out.println("Table was not created. \n" + e.toString());
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

View on GitHub Feedback

// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function createTable() {
  // Creates a new table named "my_table" in "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table";
  // const schema = 'Name:string, Age:integer, Weight:float, IsMagic:boolean';

  // For all options, see https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
  const options = {
    schema: schema,
    location: 'US',
  };

  // Create a new table in the dataset
  const [table] = await bigquery
    .dataset(datasetId)
    .createTable(tableId, options);

  console.log(`Table ${table.id} created.`);
}

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation.

View on GitHub Feedback


Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

View on GitHub Feedback

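As a minimal sketch of the same pattern, assuming the google-cloud-bigquery client library and placeholder project, dataset, and table names, creating a table with an explicit schema looks like the following:

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to create.
table_id = "your-project.your_dataset.your_table_name"

# Define the schema for the new table.
schema = [
    bigquery.SchemaField("full_name", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("age", "INTEGER", mode="REQUIRED"),
]

# Create the empty table using the schema above.
table = bigquery.Table(table_id, schema=schema)
table = client.create_table(table)  # Make an API request.
print(f"Created table {table.project}.{table.dataset_id}.{table.table_id}")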

Ruby

Before trying this sample, follow the Ruby setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Ruby API reference documentation.

View on GitHub Feedback


Create an empty table without a schema definition

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

View on GitHub Feedback


Create a table from a query result

To create a table from a query result, write the results to a destination table.

Console

  1. Open the BigQuery page in the Google Cloud console.

    Go to the BigQuery page

  2. In the Explorer panel, expand your project and select a dataset.

  3. Enter a valid SQL query.

  4. Click More and then select Query settings.

  5. Select the Set a destination table for query results option.

  6. In the Destination section, select the Dataset in which you want to create the table, and then choose a Table Id.

  7. In the Destination table write preference section, choose one of the following:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.
  8. Optional: For Data location, choose your location.

  9. To update the query settings, click Save.

  10. Click Run. This creates a query job that writes the query results to the table you specified.

Alternatively, if you forget to specify a destination table before running your query, you can copy the cached results table to a permanent table by clicking the Save Results button above the editor.

SQL

The following example uses the CREATE TABLE statement to create the trips table from data in the public bikeshare_trips table:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

    CREATE TABLE mydataset.trips AS (
      SELECT
        bikeid,
        start_time,
        duration_minutes
      FROM
        `bigquery-public-data.austin_bikeshare.bikeshare_trips`
    );
  3. Click Run.

For more information about how to run queries, see Running interactive queries.

For more information, see Creating a new table from an existing table.

bq

Enter the bq query command and specify the --destination_table flag to create a permanent table based on the query results. Specify the use_legacy_sql=false flag to use standard SQL syntax. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: project_id:dataset.

Optional: Supply the --location flag and set the value to your location.

To control the write disposition for an existing destination table, specify one of the following optional flags:

  • --append_table: If the destination table exists, the query results are appended to it.
  • --replace: If the destination table exists, it is overwritten with the query results.

bq --location=location query \
--destination_table project_id:dataset.table \
--use_legacy_sql=false \
'query'

Replace the following:

  • location is the name of the location used to process the query. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location by using the .bigqueryrc file.
  • project_id is your project ID.
  • dataset is the name of the dataset that contains the table to which you are writing the query results.
  • table is the name of the table to which you're writing the query results.
  • query is a query in standard SQL syntax.

If no write disposition flag is specified, the default behavior is to write the results to the table only if it is empty. If the table exists and it is not empty, the following error is returned: `BigQuery error in query operation: Error processing job ...: Already Exists: Table project_id:dataset.table`.

Examples:

Note: These examples query a US-based public dataset. Because the public dataset is stored in the US multi-region location, the dataset that contains your destination table must also be in the US. You cannot query a dataset in one location and write the results to a destination table in another location.

Enter the following command to write query results to a destination table named mytable in mydataset. The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an Already exists error is returned. The query retrieves data from the USA Name Data public dataset.

bq query \
--destination_table mydataset.mytable \
--use_legacy_sql=false \
'SELECT
  name,
  number
FROM
  `bigquery-public-data.usa_names.usa_1910_current`
WHERE
  gender = "M"
ORDER BY
  number DESC'

Enter the following command to use query results to overwrite a destination table named mytable in mydataset. The dataset is in your default project. The command uses the --replace flag to overwrite the destination table.

bq query \
--destination_table mydataset.mytable \
--replace \
--use_legacy_sql=false \
'SELECT
  name,
  number
FROM
  `bigquery-public-data.usa_names.usa_1910_current`
WHERE
  gender = "M"
ORDER BY
  number DESC'

Enter the following command to append query results to a destination table named mytable in mydataset. The dataset is in myotherproject, not your default project. The command uses the --append_table flag to append the query results to the destination table.

bq query \
--append_table \
--destination_table myotherproject:mydataset.mytable \
--use_legacy_sql=false \
'SELECT
  name,
  number
FROM
  `bigquery-public-data.usa_names.usa_1910_current`
WHERE
  gender = "M"
ORDER BY
  number DESC'

The output for each of these examples shows the query results that were written to the destination table.

API

To save query results to a permanent table, call the jobs.insert method, configure a query job, and include a value for the destinationTable property. To control the write disposition for an existing destination table, configure the writeDisposition property.

To control the processing location for the query job, specify the location property in the jobReference section of the job resource.

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.

bigquery/snippets/querying/bigquery_query_destination_table.go

View on GitHub Feedback


Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

To save query results to a permanent table, set the destination table to the desired TableId in a QueryJobConfiguration.

View on GitHub


Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

View on GitHub Feedback


Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

To save query results to a permanent table, create a QueryJobConfig and set the destination to the desired TableReference. Pass the job configuration to the query method.

View on GitHub Feedback

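As a minimal sketch of this pattern, assuming the google-cloud-bigquery client library, a placeholder destination table ID, and an example query against a public dataset:

from google.cloud import bigquery

client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the destination table.
table_id = "your-project.your_dataset.your_table_name"

# Configure the query job to write its results to the destination table.
job_config = bigquery.QueryJobConfig(destination=table_id)

sql = """
    SELECT corpus
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus
"""

# Start the query, passing in the job configuration, and wait for it to finish.
query_job = client.query(sql, job_config=job_config)
query_job.result()

print(f"Query results loaded to the table {table_id}")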

Create a table that references an external data source

An external data source is a data source that you can query directly from BigQuery, even though the data is not stored in BigQuery storage. For example, you might have data in a different Google Cloud database, in files in Cloud Storage, or in a different cloud product altogether that you would like to analyze in BigQuery, but that you aren't prepared to migrate.

For more information, see Introduction to external data sources.

Create a table when you load data

When you load data into BigQuery, you can load data into a new table or partition, you can append data to an existing table or partition, or you can overwrite a table or partition. You do not need to create an empty table before loading data into it. You can create the new table and load your data at the same time.

When you load data into BigQuery, you can supply the table or partition schema, or for supported data formats, you can use schema auto-detection.
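
For example, here is a rough sketch (assuming the google-cloud-bigquery client library, a placeholder Cloud Storage URI, and a placeholder table ID) of creating a table and loading a CSV file into it in one step, with schema auto-detection:

from google.cloud import bigquery

client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to create.
table_id = "your-project.your_dataset.your_table_name"

# Let BigQuery infer the schema from the CSV file.
job_config = bigquery.LoadJobConfig(
    autodetect=True,
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
)

uri = "gs://your-bucket/your-file.csv"  # Placeholder Cloud Storage path.

# The load job creates the destination table if it does not already exist.
load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # Wait for the load job to complete.

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows to {table_id}")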

For more information about loading data, see Introduction to loading data into BigQuery.

Control access to tables

To configure access to tables and views, you can grant an IAM role to an entity at the following levels, listed in order of range of resources allowed (largest to smallest):

  • a high level in the Google Cloud resource hierarchy such as the project, folder, or organization level
  • the dataset level
  • the table or view level

You can also restrict data access within tables, by using the following methods:

  • column-level security
  • column data masking
  • row-level security

Access with any resource protected by IAM is additive. For example, if an entity does not have access at the high level such as a project, you could grant the entity access at the dataset level, and then the entity will have access to the tables and views in the dataset. Similarly, if the entity does not have access at the high level or the dataset level, you could grant the entity access at the table or view level.

Granting IAM roles at a higher level in the Google Cloud resource hierarchy such as the project, folder, or organization level gives the entity access to a broad set of resources. For example, granting a role to an entity at the project level gives that entity permissions that apply to all datasets throughout the project.

Granting a role at the dataset level specifies the operations an entity is allowed to perform on tables and views in that specific dataset, even if the entity does not have access at a higher level. For information on configuring dataset-level access controls, see Controlling access to datasets.

Granting a role at the table or view level specifies the operations an entity is allowed to perform on specific tables and views, even if the entity does not have access at a higher level. For information on configuring table-level access controls, see Controlling access to tables and views.
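
As a rough illustration only (not the full procedure from Controlling access to tables and views), the Python client library can read and update a single table's IAM policy; the table ID and member below are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to update.
table_id = "your-project.your_dataset.your_table_name"

# Read the table's current IAM policy.
policy = client.get_iam_policy(table_id)

# Grant a user read access to this specific table only.
policy.bindings.append(
    {
        "role": "roles/bigquery.dataViewer",
        "members": ["user:someone@example.com"],
    }
)

updated_policy = client.set_iam_policy(table_id, policy)
for binding in updated_policy.bindings:
    print(f"{binding['role']}: {binding['members']}")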

You can also create IAM custom roles. If you create a custom role, the permissions you grant depend on the specific operations you want the entity to be able to perform.

You can't set a "deny" permission on any resource protected by IAM.

For more information about roles and permissions, see Understanding roles in the IAM documentation and the BigQuery IAM roles and permissions.

Get information about tables

You can get information or metadata about tables in the following ways:

  • Using the Google Cloud console.
  • Using the bq command-line tool's bq show command.
  • Calling the tables.get API method.
  • Using the client libraries.
  • Querying the INFORMATION_SCHEMA views (beta).

Required permissions

At a minimum, to get information about tables, you must be granted bigquery.tables.get permissions. The following predefined IAM roles include bigquery.tables.get permissions:

  • roles/bigquery.metadataViewer
  • roles/bigquery.dataViewer
  • roles/bigquery.dataOwner
  • roles/bigquery.dataEditor
  • roles/bigquery.admin

In addition, if a user has bigquery.datasets.create permissions, when that user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to retrieve table metadata.

For more information on IAM roles and permissions in BigQuery, see Access control.

Get table information

To get information about tables:

Console

  1. In the navigation panel, in the Resources section, expand your project, and then select a dataset.

  2. Click the dataset name to expand it. The tables and views in the dataset appear.

  3. Click the table name.

  4. In the Details panel, click Details to display the table's description and table information.

  5. Optionally, switch to the Schema tab to view the table's schema definition.

bq

Issue the bq show command to display all table information. Use the --schema flag to display only table schema information. The --format flag can be used to control the output.

If you are getting information about a table in a project other than your default project, add the project ID to the dataset in the following format: project_id:dataset.

bq show \
--schema \
--format=prettyjson \
project_id:dataset.table

Where:

  • project_id is your project ID.
  • dataset is the name of the dataset.
  • table is the name of the table.

Examples:

Enter the following command to display all information about mytable in mydataset. mydataset is in your default project.

bq show --format=prettyjson mydataset.mytable

Enter the following command to display all information about mytable in mydataset. mydataset is in myotherproject, not your default project.

bq show --format=prettyjson myotherproject:mydataset.mytable

Enter the following command to display only schema information about mytable in mydataset. mydataset is in myotherproject, not your default project.

bq show --schema --format=prettyjson myotherproject:mydataset.mytable

API

Call the tables.get method and provide any relevant parameters.

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.


Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.


Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.


PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation.


Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

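The original sample is not reproduced here. As a minimal sketch (the table ID is illustrative, not from the original sample), the Python client's get_table method retrieves the same metadata that bq show and tables.get return:

from google.cloud import bigquery

# Minimal sketch (not the official sample): fetch table metadata with the
# Python client. The table ID below is illustrative.
client = bigquery.Client()
table = client.get_table("your-project.your_dataset.your_table")

print(f"Table: {table.full_table_id}")
print(f"Description: {table.description}")
print(f"Rows: {table.num_rows}")
for field in table.schema:
    print(f"  {field.name} ({field.field_type})")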

Get table information using INFORMATION_SCHEMA

INFORMATION_SCHEMA is a series of views that provide access to metadata about datasets, routines, tables, views, jobs, reservations, and streaming data.

You can query the following views to get table information (a short Python example follows this list):

  • Use the INFORMATION_SCHEMA.TABLES and INFORMATION_SCHEMA.TABLE_OPTIONS views to retrieve metadata about tables and views in a project.
  • Use the INFORMATION_SCHEMA.COLUMNS and INFORMATION_SCHEMA.COLUMN_FIELD_PATHS views to retrieve metadata about the columns (fields) in a table.
  • Use the INFORMATION_SCHEMA.TABLE_STORAGE and related TABLE_STORAGE timeline views to retrieve metadata about current and historical storage usage by a table.
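These views are queried like ordinary tables. As a minimal sketch of running such a query from the Python client library (the dataset name mydataset is illustrative):

from google.cloud import bigquery

# Minimal sketch: list tables and views in a dataset by querying the
# INFORMATION_SCHEMA.TABLES view. "mydataset" is an illustrative name.
client = bigquery.Client()

query = """
    SELECT table_name, table_type
    FROM mydataset.INFORMATION_SCHEMA.TABLES
    ORDER BY table_name
"""

for row in client.query(query).result():
    print(f"{row.table_name}: {row.table_type}")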

The TABLES and TABLE_OPTIONS views also contain high-level information about views. For detailed information, query the INFORMATION_SCHEMA.VIEWS view instead.

TABLES view

When you query the INFORMATION_SCHEMA.TABLES view, the query results contain one row for each table or view in a dataset. For detailed information about views, query the INFORMATION_SCHEMA.VIEWS view instead.

The INFORMATION_SCHEMA.TABLES view has the following schema:

Column name | Data type | Value
table_catalog | STRING | The project ID of the project that contains the dataset.
table_schema | STRING | The name of the dataset that contains the table or view. Also referred to as the datasetId.
table_name | STRING | The name of the table or view. Also referred to as the tableId.
table_type | STRING | The table type; one of the following: BASE TABLE (a standard table), CLONE (a table clone, Preview), SNAPSHOT (a table snapshot), VIEW (a view), MATERIALIZED VIEW (a materialized view), or EXTERNAL (a table that references an external data source).
is_insertable_into | STRING | YES or NO depending on whether the table supports DML INSERT statements.
is_typed | STRING | The value is always NO.
creation_time | TIMESTAMP | The table's creation time.
ddl | STRING | The DDL statement that can be used to recreate the table, such as CREATE TABLE or CREATE VIEW.
clone_time | TIMESTAMP | For table clones (Preview), the time when the base table was cloned to create this table. If time travel was used, then this field contains the time travel timestamp. Otherwise, the clone_time field is the same as the creation_time field. Applicable only to tables with table_type set to CLONE.
base_table_catalog | STRING | For table clones (Preview), the base table's project. Applicable only to tables with table_type set to CLONE.
base_table_schema | STRING | For table clones (Preview), the base table's dataset. Applicable only to tables with table_type set to CLONE.
base_table_name | STRING | For table clones (Preview), the base table's name. Applicable only to tables with table_type set to CLONE.
default_collation_name | STRING | The name of the default collation specification if it exists; otherwise, NULL.

Examples

Example 1:

The following example retrieves table metadata for all of the tables in the dataset named mydataset. The query selects all of the columns from the INFORMATION_SCHEMA.TABLES view except for is_typed, which is reserved for future use. The metadata that's returned is for all types of tables in mydataset in your default project.

mydataset contains the following tables:

  • mytable1: a standard BigQuery table
  • myview1: a BigQuery view

To run the query against a project other than your default project, add the project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view; for example, `myproject`.mydataset.INFORMATION_SCHEMA.TABLES.

Note:
INFORMATION_SCHEMA view names are case-sensitive.

SELECT
  * EXCEPT(is_typed)
FROM
  mydataset.INFORMATION_SCHEMA.TABLES;

The result contains one row each for mytable1 and myview1. For readability, some columns are excluded from the result.
Example 2:

The following example retrieves all tables of type BASE TABLE from the INFORMATION_SCHEMA.TABLES view. The is_typed column is excluded. The metadata returned is for tables in mydataset in your default project.

To run the query against a project other than your default project, add the project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view; for example, `myproject`.mydataset.INFORMATION_SCHEMA.TABLES.

SELECT
  * EXCEPT(is_typed)
FROM
  mydataset.INFORMATION_SCHEMA.TABLES
WHERE
  table_type = 'BASE TABLE';

The result contains a single row for mytable1, the only standard table in mydataset. For readability, some columns are excluded from the result.
Example 3:

The following example retrieves the table_name and ddl columns from the INFORMATION_SCHEMA.TABLES view for the population_by_zip_2010 table in the census_bureau_usa dataset. This dataset is part of the BigQuery public dataset program.

Because the table you're querying is in another project, you add the project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view. In this example, the value is `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.TABLES.

SELECT
  table_name, ddl
FROM
  `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.TABLES
WHERE
  table_name = 'population_by_zip_2010';

The result contains a single row with the table name and the CREATE TABLE statement for population_by_zip_2010.

TABLE_OPTIONS view

When you query the INFORMATION_SCHEMA.TABLE_OPTIONS view, the query results contain one row for each option, for each table or view in a dataset. For detailed information about views, query the INFORMATION_SCHEMA.VIEWS view instead.

The INFORMATION_SCHEMA.TABLE_OPTIONS view has the following schema:

Column name | Data type | Value
table_catalog | STRING | The project ID of the project that contains the dataset.
table_schema | STRING | The name of the dataset that contains the table or view, also referred to as the datasetId.
table_name | STRING | The name of the table or view, also referred to as the tableId.
option_name | STRING | One of the name values in the options table.
option_type | STRING | One of the data type values in the options table.
option_value | STRING | One of the value options in the options table.
Options table
option_name | option_type | option_value
partition_expiration_days | FLOAT64 | The default lifetime, in days, of all partitions in a partitioned table
expiration_timestamp | TIMESTAMP | The time when this table expires
kms_key_name | STRING | The name of the Cloud KMS key used to encrypt the table
friendly_name | STRING | The table's descriptive name
description | STRING | A description of the table
labels | ARRAY<STRUCT<STRING, STRING>> | An array of STRUCT values that represent the labels on the table
require_partition_filter | BOOL | Whether queries over the table require a partition filter
enable_refresh | BOOL | Whether automatic refresh is enabled for a materialized view
refresh_interval_minutes | FLOAT64 | How frequently a materialized view is refreshed
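The option names in the preceding table are the same ones used in the OPTIONS list of CREATE TABLE and ALTER TABLE ... SET OPTIONS statements. As a rough illustration only (the table name and option values are hypothetical, not taken from this document), two of these options could be updated from the Python client like this:

from google.cloud import bigquery

# Rough sketch: update two of the options listed above with
# ALTER TABLE ... SET OPTIONS. Table name and values are illustrative.
client = bigquery.Client()

ddl = """
    ALTER TABLE mydataset.mytable
    SET OPTIONS (
      description = 'Table updated by the options sketch',
      labels = [('org_unit', 'development')]
    )
"""

client.query(ddl).result()  # wait for the DDL statement to finish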

For external tables, the following options are possible:

  • allow_jagged_rows (BOOL): If true, allow rows that are missing trailing optional columns. Applies to CSV data.
  • allow_quoted_newlines (BOOL): If true, allow quoted data sections that contain newline characters in the file. Applies to CSV data.
  • compression (STRING): The compression type of the data source. Supported values include: GZIP. If not specified, the data source is uncompressed. Applies to CSV and JSON data.
  • description (STRING): A description of this table.
  • enable_logical_types (BOOL): If true, convert Avro logical types into their corresponding SQL types. For more information, see Logical types. Applies to Avro data.
  • enum_as_string (BOOL): If true, infer Parquet ENUM logical type as STRING instead of BYTES by default. Applies to Parquet data.
  • enable_list_inference (BOOL): If true, use schema inference specifically for Parquet LIST logical type. Applies to Parquet data.
  • encoding (STRING): The character encoding of the data. Supported values include: UTF8 (or UTF-8), ISO_8859_1 (or ISO-8859-1). Applies to CSV data.
  • expiration_timestamp (TIMESTAMP): The time when this table expires. If not specified, the table does not expire. Example: a timestamp literal such as "2025-01-01 00:00:00 UTC".
  • field_delimiter (STRING): The separator for fields in a CSV file. Applies to CSV data.
  • format (STRING): The format of the external data. Supported values for CREATE EXTERNAL TABLE include: AVRO, CSV, DATASTORE_BACKUP, GOOGLE_SHEETS, NEWLINE_DELIMITED_JSON (or JSON), ORC, PARQUET, CLOUD_BIGTABLE. Supported values for LOAD DATA include: AVRO, CSV, NEWLINE_DELIMITED_JSON (or JSON), ORC, PARQUET. The value JSON is equivalent to NEWLINE_DELIMITED_JSON.
  • decimal_target_types (ARRAY<STRING>): Determines how to convert a Decimal type. Equivalent to ExternalDataConfiguration.decimal_target_types. Example: ["NUMERIC", "BIGNUMERIC"].
  • json_extension (STRING): For JSON data, indicates a particular JSON interchange format. If not specified, BigQuery reads the data as generic JSON records. Supported values include: GEOJSON (newline-delimited GeoJSON data). For more information, see Creating an external table from a newline-delimited GeoJSON file.
  • hive_partition_uri_prefix (STRING): A common prefix for all source URIs before the partition key encoding begins. Applies only to hive-partitioned external tables. Applies to Avro, CSV, JSON, Parquet, and ORC data. Example: "gs://bucket/path".
  • ignore_unknown_values (BOOL): If true, ignore extra values that are not represented in the table schema, without returning an error. Applies to CSV and JSON data.
  • max_bad_records (INT64): The maximum number of bad records to ignore when reading the data. Applies to CSV, JSON, and Sheets data.
  • max_staleness (INTERVAL): Applicable for BigLake tables and object tables. In preview. Specifies whether cached metadata is used by operations against the table, and how fresh the cached metadata must be in order for the operation to use it. To disable metadata caching, specify 0. This is the default. To enable metadata caching, specify an interval literal value between 30 minutes and 7 days. For example, specify INTERVAL 4 HOUR for a 4 hour staleness interval. With this value, operations against the table use cached metadata if it has been refreshed within the past 4 hours. If the cached metadata is older than that, the operation falls back to retrieving metadata from Cloud Storage instead.
  • metadata_cache_mode (STRING): Applicable for BigLake tables and object tables. In preview. Specifies whether the metadata cache for the table is refreshed automatically or manually. Set to AUTOMATIC for the metadata cache to be refreshed at a system-defined interval, usually somewhere between 30 and 60 minutes. Set to MANUAL if you want to refresh the metadata cache on a schedule you determine; in this case, you can call the BQ.REFRESH_EXTERNAL_METADATA_CACHE system procedure to refresh the cache. You must set metadata_cache_mode if max_staleness is set to a value greater than 0. (A short example that uses these two options follows this table.)
  • null_marker (STRING): The string that represents NULL values in a CSV file. Applies to CSV data.
  • object_metadata (STRING): Only required when creating an object table. In preview. Set the value of this option to SIMPLE when creating an object table.
  • preserve_ascii_control_characters (BOOL): If true, then the embedded ASCII control characters, which are the first 32 characters in the ASCII table, ranging from '\x00' to '\x1F', are preserved. Applies to CSV data.
  • projection_fields (STRING): A list of entity properties to load. Applies to Datastore data.
  • quote (STRING): The string used to quote data sections in a CSV file. If your data contains quoted newline characters, also set the allow_quoted_newlines property to true. Applies to CSV data.
  • reference_file_schema_uri (STRING): User-provided reference file with the table schema. Applies to Parquet, ORC, and Avro data.
  • require_hive_partition_filter (BOOL): If true, all queries over this table require a partition filter that can be used to eliminate partitions when reading data. Applies only to hive-partitioned external tables. Applies to Avro, CSV, JSON, Parquet, and ORC data.
  • sheet_range (STRING): Range of a Sheets spreadsheet to query from. Applies to Sheets data. Example: "sheet1!A1:B20".
  • skip_leading_rows (INT64): The number of rows at the top of a file to skip when reading the data. Applies to CSV and Sheets data.
  • uris: For external tables, including object tables, that aren't Cloud Bigtable tables, the type is ARRAY<STRING>: an array of fully qualified URIs for the external data locations. For Cloud Storage, each URI can contain one '*' wildcard character and it must come after the bucket name. Example: ["gs://bucket/path/*"]. For Bigtable tables, the type is STRING: the URI identifying the Bigtable table to use as a data source. You can only specify one Bigtable URI. For more information on constructing a Bigtable URI, see Retrieving the Bigtable URI.
  • bigtable_options (STRING): Only required when creating a Bigtable external table. Specifies the schema of the Bigtable external table in JSON format. For a list of Bigtable table definition options, see BigtableOptions in the REST API reference.
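As a rough, non-authoritative sketch of how a few of these options fit together, the following Python snippet runs a CREATE EXTERNAL TABLE statement for a BigLake table with metadata caching enabled. The dataset, Cloud resource connection, and bucket names are illustrative, and the statement assumes a connection with access to the bucket already exists.

from google.cloud import bigquery

# Rough sketch: create a BigLake external table that uses a few of the
# options above (format, uris, max_staleness, metadata_cache_mode).
# The dataset, connection, and bucket names are illustrative, and the
# connection must already exist and have access to the bucket.
client = bigquery.Client()

ddl = """
    CREATE EXTERNAL TABLE mydataset.my_external_table
    WITH CONNECTION `myproject.us.my-connection`
    OPTIONS (
      format = 'PARQUET',
      uris = ['gs://mybucket/path/*'],
      max_staleness = INTERVAL 4 HOUR,
      metadata_cache_mode = 'AUTOMATIC'
    )
"""

client.query(ddl).result()  # wait for the DDL statement to finish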

Examples

Example 1:

The following example retrieves the default table expiration times for all tables in mydataset in your default project (myproject) by querying the INFORMATION_SCHEMA.TABLE_OPTIONS view.

To run the query against a project other than your default project, add the project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view; for example, `myproject`.mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS.

Note:
INFORMATION_SCHEMA view names are case-sensitive.

SELECT
  *
FROM
  mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS
WHERE
  option_name = 'expiration_timestamp';

The result contains one row for each table in mydataset that has an expiration time set. Note: Tables without an expiration time are excluded from the query results.
Example 2:

The following example retrieves metadata about all tables in mydataset that contain test data. The query uses the values in the description option to find tables that contain "test" anywhere in the description. mydataset is in your default project, myproject.

To run the query against a project other than your default project, add the project ID to the dataset in the following format:

bq mk \
--table \
--expiration integer \
--description description \
--label key_1:value_1 \
--label key_2:value_2 \
project_id:dataset.table \
schema
76; for example,
bq mk \
  --table \
  --expiration 3600 \
  --description "This is my table" \
  --label organization:development \
  mydataset.mytable \
  /tmp/myschema.json
47.

bq mk \
  --table \
  --expiration 3600 \
  --description "This is my table" \
  --label organization:development \
  mydataset.mytable \
  /tmp/myschema.json
1

The result is similar to the following:

bq mk \
  --table \
  --expiration 3600 \
  --description "This is my table" \
  --label organization:development \
  mydataset.mytable \
  /tmp/myschema.json
2
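As an illustration only, and again assuming the mydataset dataset, such a description search might be written as:

SELECT
  table_name,
  option_value AS description
FROM
  mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS
WHERE
  option_name = 'description'
  AND option_value LIKE '%test%';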

COLUMNS view

When you query the INFORMATION_SCHEMA.COLUMNS view, the query results contain one row for each column (field) in a table.

The INFORMATION_SCHEMA.COLUMNS view has the following schema:

Column name | Data type | Value
TABLE_CATALOG | STRING | The project ID of the project that contains the dataset
TABLE_SCHEMA | STRING | The name of the dataset that contains the table, also referred to as the datasetId
TABLE_NAME | STRING | The name of the table or view, also referred to as the tableId
COLUMN_NAME | STRING | The name of the column
ORDINAL_POSITION | INT64 | The 1-indexed offset of the column within the table; if it's a pseudo column such as _PARTITIONTIME or _PARTITIONDATE, the value is NULL
IS_NULLABLE | STRING | YES or NO depending on whether the column's mode allows NULL values
DATA_TYPE | STRING | The column's standard SQL data type
IS_GENERATED | STRING | The value is always NEVER
GENERATION_EXPRESSION | STRING | The value is always NULL
IS_STORED | STRING | The value is always NULL
IS_HIDDEN | STRING | YES or NO depending on whether the column is a pseudo column such as _PARTITIONTIME or _PARTITIONDATE
IS_UPDATABLE | STRING | The value is always NULL
IS_SYSTEM_DEFINED | STRING | YES or NO depending on whether the column is a pseudo column such as _PARTITIONTIME or _PARTITIONDATE
IS_PARTITIONING_COLUMN | STRING | YES or NO depending on whether the column is a partitioning column
CLUSTERING_ORDINAL_POSITION | INT64 | The 1-indexed offset of the column within the table's clustering columns; the value is NULL if the table is not a clustered table
COLLATION_NAME | STRING | The name of the collation specification if it exists; otherwise, NULL. If a STRING or an ARRAY<STRING> is passed in, the collation specification is returned if it exists; otherwise, NULL is returned
COLUMN_DEFAULT | STRING | The default value of the column if it exists; otherwise, the value is NULL
ROUNDING_MODE | STRING | The mode of rounding that's used for values written to the field if its type is a parameterized NUMERIC or BIGNUMERIC; otherwise, the value is NULL

Examples

The following example retrieves metadata from the INFORMATION_SCHEMA.COLUMNS view for the population_by_zip_2010 table in the census_bureau_usa dataset. This dataset is part of the BigQuery public dataset program.

Because the table you're querying is in another project, the bigquery-public-data project, you add the project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view; for example, `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.COLUMNS.

The following columns are excluded from the query results because they are currently reserved for future use:

  • IS_GENERATED
  • GENERATION_EXPRESSION
  • IS_STORED
  • IS_UPDATABLE

Note: INFORMATION_SCHEMA view names are case-sensitive.
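For illustration, using the public dataset and table named above and excluding the reserved columns, the query might take a form like this:

SELECT
  * EXCEPT (is_generated, generation_expression, is_stored, is_updatable)
FROM
  `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.COLUMNS
WHERE
  table_name = 'population_by_zip_2010';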

COLUMN_FIELD_PATHS view

When you query the INFORMATION_SCHEMA.COLUMN_FIELD_PATHS view, the query results contain one row for each column nested within a RECORD (or STRUCT) column.

The INFORMATION_SCHEMA.COLUMN_FIELD_PATHS view has the following schema:

Column name | Data type | Value
TABLE_CATALOG | STRING | The project ID of the project that contains the dataset
TABLE_SCHEMA | STRING | The name of the dataset that contains the table, also referred to as the datasetId
TABLE_NAME | STRING | The name of the table or view, also referred to as the tableId
COLUMN_NAME | STRING | The name of the column
FIELD_PATH | STRING | The path to a column nested within a RECORD or STRUCT column
DATA_TYPE | STRING | The column's standard SQL data type
DESCRIPTION | STRING | The column's description
COLLATION_NAME | STRING | The name of the collation specification if it exists; otherwise, NULL. If a STRING, an ARRAY<STRING>, or a STRING field in a STRUCT is passed in, the collation specification is returned if it exists; otherwise, NULL is returned
ROUNDING_MODE | STRING | The mode of rounding that's used when applying precision and scale to parameterized NUMERIC or BIGNUMERIC values; otherwise, the value is NULL

Examples

The following example retrieves metadata from the INFORMATION_SCHEMA.COLUMN_FIELD_PATHS view for the commits table in the github_repos dataset. This dataset is part of the BigQuery public dataset program.

Because the table you're querying is in another project, the bigquery-public-data project, you add the project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view; for example, `bigquery-public-data`.github_repos.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS.

The commits table contains the following nested, and nested and repeated, columns:

  • author: a nested RECORD column
  • committer: a nested RECORD column
  • trailer: a nested and repeated RECORD column
  • difference: a nested and repeated RECORD column

To view metadata about the author and difference columns, run a query that filters on those column names; one possible form is sketched below.

Note: INFORMATION_SCHEMA view names are case-sensitive.
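One illustrative form of that query against the public github_repos dataset is:

SELECT
  table_name,
  column_name,
  field_path,
  data_type,
  description
FROM
  `bigquery-public-data`.github_repos.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE
  table_name = 'commits'
  AND (column_name = 'author' OR column_name = 'difference');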

TABLE_STORAGE view

Preview

This product or feature is covered by the Pre-GA Offerings Terms of the Google Cloud Terms of Service. Pre-GA products and features might have limited support, and changes to pre-GA products and features might not be compatible with other pre-GA versions. For more information, see the launch stage descriptions.

The INFORMATION_SCHEMA.TABLE_STORAGE view has the following schema:

Column name | Data type | Value
PROJECT_ID | STRING | The project ID of the project that contains the dataset
PROJECT_NUMBER | INT64 | The project number of the project that contains the dataset
TABLE_SCHEMA | STRING | The name of the dataset that contains the table or materialized view, also referred to as the datasetId
TABLE_NAME | STRING | The name of the table or materialized view, also referred to as the tableId
CREATION_TIME | TIMESTAMP | The table's creation time
TOTAL_ROWS | INT64 | The total number of rows in the table or materialized view
TOTAL_PARTITIONS | INT64 | The number of partitions present in the table or materialized view. Unpartitioned tables return 0.
TOTAL_LOGICAL_BYTES | INT64 | Total number of logical (uncompressed) bytes in the table or materialized view
ACTIVE_LOGICAL_BYTES | INT64 | Number of logical (uncompressed) bytes that are less than 90 days old
LONG_TERM_LOGICAL_BYTES | INT64 | Number of logical (uncompressed) bytes that are more than 90 days old
TOTAL_PHYSICAL_BYTES | INT64 | Total number of physical (compressed) bytes used for storage, including active, long term, and time travel (deleted or changed data) bytes
ACTIVE_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes less than 90 days old, including time travel (deleted or changed data) bytes
LONG_TERM_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes more than 90 days old
TIME_TRAVEL_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes used by time travel storage (deleted or changed data)

Examples

Example 1:

The following example shows you the total logical bytes billed for the current project.
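A minimal sketch of that query, assuming your tables are in the US region (adjust the region qualifier as needed), might be:

SELECT
  SUM(total_logical_bytes) AS total_logical_bytes
FROM
  `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE;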
Example 2:

The following example shows you how to calculate the price difference per dataset. This example assumes that usage was constant over the last month from the moment the query was run. Your actual bill might vary somewhat from the calculations returned by this query, because storage usage by clones and snapshots is billed, but is currently not included in the physical bytes columns of the table storage views.

The prices used in the pricing variables for this query are for the us-central1 region. If you want to run this query for a different region, update the pricing variables appropriately. See Storage pricing for pricing information.

  1. Open the BigQuery page in the Google Cloud console.

    Go to the BigQuery page

  2. Enter the Google Standard SQL query in the Query editor box; a sketch of one such query follows these steps. INFORMATION_SCHEMA requires Google Standard SQL syntax. Google Standard SQL is the default syntax in the Google Cloud console.

    Note: INFORMATION_SCHEMA view names are case-sensitive.

  3. Click Run.
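The following is an illustrative sketch only, not the exact query from this guide. The region qualifier and the per-GiB prices are placeholder assumptions; substitute the current prices for your region from the Storage pricing page.

-- Placeholder monthly prices per GiB; replace with current prices for your region.
DECLARE active_logical_gib_price FLOAT64 DEFAULT 0.02;
DECLARE long_term_logical_gib_price FLOAT64 DEFAULT 0.01;
DECLARE active_physical_gib_price FLOAT64 DEFAULT 0.04;
DECLARE long_term_physical_gib_price FLOAT64 DEFAULT 0.02;

WITH storage AS (
  -- Aggregate logical and physical storage per dataset (US region assumed).
  SELECT
    table_schema AS dataset_name,
    SUM(active_logical_bytes) / POW(1024, 3) AS active_logical_gib,
    SUM(long_term_logical_bytes) / POW(1024, 3) AS long_term_logical_gib,
    SUM(active_physical_bytes) / POW(1024, 3) AS active_physical_gib,
    SUM(long_term_physical_bytes) / POW(1024, 3) AS long_term_physical_gib
  FROM
    `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
  GROUP BY
    dataset_name
)
SELECT
  dataset_name,
  -- Forecast monthly cost under the logical (uncompressed) billing model.
  active_logical_gib * active_logical_gib_price
    + long_term_logical_gib * long_term_logical_gib_price AS logical_cost_forecast,
  -- Forecast monthly cost under the physical (compressed) billing model.
  active_physical_gib * active_physical_gib_price
    + long_term_physical_gib * long_term_physical_gib_price AS physical_cost_forecast
FROM
  storage
ORDER BY
  physical_cost_forecast DESC;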


TABLE_STORAGE_TIMELINE_BY_* views

Preview

This product or feature is covered by the Pre-GA Offerings Terms of the Google Cloud Terms of Service. Pre-GA products and features might have limited support, and changes to pre-GA products and features might not be compatible with other pre-GA versions. For more information, see the launch stage descriptions.

The table storage timeline views return one row for every event that triggers a storage change for the table, like writing, updating, or deleting a row. This means there can be multiple rows for a table for a single day. When querying a view for a time range, use the most recent timestamp on the day of interest.

This view has the following schema:

Column name | Data type | Value
TIMESTAMP | TIMESTAMP | Timestamp of when storage was last recalculated. Recalculation is triggered by changes to the data in the table.
DELETED | BOOL | Indicates whether or not the table is deleted
PROJECT_ID | STRING | The project ID of the project that contains the dataset
PROJECT_NUMBER | INT64 | The project number of the project that contains the dataset
TABLE_SCHEMA | STRING | The name of the dataset that contains the table or materialized view, also referred to as the datasetId
TABLE_NAME | STRING | The name of the table or materialized view, also referred to as the tableId
CREATION_TIME | TIMESTAMP | The table's creation time
TOTAL_ROWS | INT64 | The total number of rows in the table or materialized view
TOTAL_PARTITIONS | INT64 | The number of partitions for the table or materialized view. Unpartitioned tables will return 0.
TOTAL_LOGICAL_BYTES | INT64 | Total number of logical (uncompressed) bytes in the table or materialized view
ACTIVE_LOGICAL_BYTES | INT64 | Number of logical (uncompressed) bytes that are less than 90 days old
LONG_TERM_LOGICAL_BYTES | INT64 | Number of logical (uncompressed) bytes that are more than 90 days old
TOTAL_PHYSICAL_BYTES | INT64 | Total number of physical (compressed) bytes used for storage, including active, long term, and time travel (for deleted tables) bytes
ACTIVE_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes less than 90 days old
LONG_TERM_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes more than 90 days old
TIME_TRAVEL_PHYSICAL_BYTES | INT64 | Number of physical (compressed) bytes used by time travel storage (deleted or changed data)

Examples

The following example shows you the sum of physical storage that's used by each project in your organization for a given point in time.
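As a sketch only, assuming the US region and an arbitrary date, and recalling that the timeline views can contain several rows per table per day, so only the most recent row on the day of interest is kept:

SELECT
  project_id,
  SUM(total_physical_bytes) / POW(1024, 3) AS total_physical_gib
FROM (
  SELECT
    project_id,
    total_physical_bytes,
    -- Keep only the most recent storage event per table on the chosen day.
    ROW_NUMBER() OVER (
      PARTITION BY project_id, table_schema, table_name
      ORDER BY `timestamp` DESC
    ) AS recency_rank
  FROM
    `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE_TIMELINE_BY_ORGANIZATION
  WHERE
    DATE(`timestamp`) = '2023-01-10'
)
WHERE
  recency_rank = 1
GROUP BY
  project_id;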

List tables in a dataset

You can list tables in datasets in the following ways:

  • Using the Google Cloud console.
  • Using the bq command-line tool's bq ls command.
  • Calling the tables.list API method.
  • Using the client libraries.
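
If you prefer SQL, a query against the INFORMATION_SCHEMA.TABLES view returns similar information; this sketch assumes a dataset named mydataset in your default project:

-- List the name and type (table or view) of everything in mydataset.
SELECT
  table_name,
  table_type
FROM
  mydataset.INFORMATION_SCHEMA.TABLES
ORDER BY
  table_name;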

Required permissions

At a minimum, to list tables in a dataset, you must be granted bigquery.tables.list permissions. The following predefined IAM roles include bigquery.tables.list permissions:

  • bigquery.user
  • bigquery.metadataViewer
  • bigquery.dataViewer
  • bigquery.dataEditor
  • bigquery.dataOwner
  • bigquery.admin

For more information on IAM roles and permissions in BigQuery, see Access control.

List tables

To list the tables in a dataset:

Console

  1. In the Google Cloud console, in the navigation pane, click your dataset to expand it. This displays the tables and views in the dataset.

  2. Scroll through the list to see the tables in the dataset. Tables and views are identified by different icons.

bq

Issue the bq ls command. The --format flag can be used to control the output. If you are listing tables in a project other than your default project, add the project ID to the dataset in the following format: project_id:dataset.

Additional flags include:

  • --max_results or -n: An integer indicating the maximum number of results. The default value is 50.

bq ls \
--format=pretty \
--max_results integer \
project_id:dataset

Where:

  • integer is an integer representing the number of tables to list.
  • project_id is your project ID.
  • dataset is the name of the dataset.

When you run the command, the Type field displays either TABLE or VIEW.

Examples:

Enter the following command to list tables in dataset mydataset in your default project.

bq ls --format=pretty mydataset

Enter the following command to return more than the default output of 50 tables from mydataset. mydataset is in your default project.

bq ls --format=pretty --max_results 60 mydataset

Enter the following command to list tables in dataset mydataset in myotherproject.

bq ls --format=pretty myotherproject:mydataset

API

To list tables using the API, call the tables.list method.

C#

Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation.

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation.

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

Ruby

Before trying this sample, follow the Ruby setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Ruby API reference documentation.

Table security

To control access to tables in BigQuery, see Introduction to table access controls.

Next steps

  • For more information about datasets, see Introduction to datasets.
  • For more information about handling table data, see Managing table data.
  • For more information about specifying table schemas, see Specifying a schema.
  • For more information about modifying table schemas, see Modifying table schemas.
  • For more information about managing tables, see Managing tables.
  • To see an overview of INFORMATION_SCHEMA, go to Introduction to BigQuery INFORMATION_SCHEMA.

Try it for yourself

If you're new to Google Cloud, create an account to evaluate how BigQuery performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.

How can you create a new table with existing data from another table?

A copy of an existing table can be created by combining a CREATE TABLE statement with a SELECT statement (CREATE TABLE ... AS SELECT). The new table takes its column definitions from the query result, and you can select all columns or only specific ones.
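
For example, the following sketch copies two columns from an existing table into a new one; the dataset, table, and column names (mydataset.mytable, qtr, sales) are illustrative:

-- Create a new table from the result of a SELECT over an existing table.
CREATE TABLE mydataset.mytable_copy AS
SELECT
  qtr,
  sales
FROM
  mydataset.mytable;

To copy every column instead of a subset, use SELECT * in place of the column list.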

Which command is used to create a new table?

Creating a basic table involves naming the table and defining its columns and each column's data type. The SQL CREATE TABLE statement is used to create a new table.
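
For example, a minimal statement that names the table and defines each column's data type; the dataset, table, and column names are illustrative:

-- Define a two-column table in an existing dataset.
CREATE TABLE mydataset.mytable (
  full_name STRING,
  age INT64
);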

Which command is used to add a new column to an existing table?

The ALTER TABLE statement is used to add, delete, or modify columns in an existing table.
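
For example, the following sketch adds a single nullable column; the table and column names are illustrative:

-- Add a new nullable column to an existing table.
ALTER TABLE mydataset.mytable
ADD COLUMN year STRING;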