Description
The `to_gbq` function should take a `configuration` argument representing a BigQuery JobConfiguration REST API resource. This would make it consistent with the `read_gbq` function.
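
For illustration, a minimal sketch of how the proposed argument might sit next to the existing `read_gbq` behavior. The project ID, table names, and configuration values are placeholders, and the `to_gbq` call shown is the proposal, not the current API.

```python
import pandas as pd
import pandas_gbq

df = pd.DataFrame({"name": ["alice", "bob"], "score": [1.0, 2.0]})

# Existing: read_gbq already accepts a ``configuration`` dict shaped like the
# JobConfiguration REST resource (query settings nest under the "query" key).
result = pandas_gbq.read_gbq(
    "SELECT name, score FROM my_dataset.my_table",
    project_id="my-project",
    configuration={"query": {"useQueryCache": False}},
)

# Proposed (this issue): to_gbq would accept the analogous resource, with
# load job settings nested under the "load" key.
pandas_gbq.to_gbq(
    df,
    "my_dataset.my_table",
    project_id="my-project",
    if_exists="append",
    configuration={"load": {"ignoreUnknownValues": True}},
)
```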
Context
Options for table creation / schema updates
- Partitioning and clustering: "Providing Table partitioning and cluster fields" (#395)
- Schema update options: "Request: add schemaUpdateOptions to to_gbq()" (#107)
- Partition expiration time: "Option to provide partition column and partition expiry time" (#313)
I believe these would require table creation to be done by the load job itself instead of a separate create-table step (especially partitioning, as that must be configured at creation time); a sketch of such a load configuration follows. TBD what this would look like if we add support for the BigQuery Storage Write API or the (legacy) Streaming API.
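
A hypothetical `configuration` payload covering the options listed above might look like the following. Field names come from the BigQuery JobConfigurationLoad REST resource; the column names and expiration value are placeholders.

```python
# Hypothetical payload for the proposed to_gbq ``configuration`` argument.
configuration = {
    "load": {
        # Partitioning and clustering must be set when the table is created,
        # which is why creation would need to happen via the load job itself.
        "timePartitioning": {
            "type": "DAY",
            "field": "created_at",
            "expirationMs": str(90 * 24 * 60 * 60 * 1000),  # 90-day partition expiration
        },
        "clustering": {"fields": ["customer_id"]},
        # Allow the load job to add new columns to an existing table.
        "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION"],
    }
}
```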
Options for file loading
- Custom NULL marker: "Empty strings inconsistently converted to NULL's when using df.to_gbq()" (#366) -- though I believe this would also require an update to the pandas CSV write configuration (see the sketch below).
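
As a rough sketch of how the NULL-marker case could fit together: pandas would write an explicit marker for missing values when serializing the DataFrame to CSV (`na_rep`), and the load job configuration would declare the same marker via `nullMarker`, so that empty strings are no longer conflated with NULL. The marker value and configuration shape here are assumptions, not the implemented behavior.

```python
import io

import pandas as pd

df = pd.DataFrame({"name": ["alice", "", None], "score": [1.0, 2.0, None]})

# Write genuine missing values as an explicit marker; empty strings stay as-is.
buffer = io.StringIO()
df.to_csv(buffer, index=False, na_rep="\\N")

# Hypothetical load configuration telling BigQuery which marker means NULL.
configuration = {
    "load": {
        "sourceFormat": "CSV",
        "nullMarker": "\\N",
    }
}
```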