Commit bdc4c9c

Merge branch 'hold' into temp
2 parents 8e6db0f + 20661c2 commit bdc4c9c

File tree

2 files changed: +32 additions, -8 deletions


ci/code_checks.sh

Lines changed: 0 additions & 1 deletion
@@ -149,7 +149,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
         -i "pandas.DatetimeTZDtype.tz SA01" \
         -i "pandas.DatetimeTZDtype.unit SA01" \
         -i "pandas.Grouper PR02" \
-        -i "pandas.HDFStore.append PR01,SA01" \
         -i "pandas.HDFStore.get SA01" \
         -i "pandas.HDFStore.groups SA01" \
         -i "pandas.HDFStore.info RT03,SA01" \

pandas/io/pytables.py

Lines changed: 32 additions & 7 deletions
@@ -1261,15 +1261,25 @@ def append(
             Table format. Write as a PyTables Table structure which may perform
             worse but allow more flexible operations like searching / selecting
             subsets of the data.
+        axes : default None
+            This parameter is currently not accepted.
         index : bool, default True
             Write DataFrame index as a column.
         append : bool, default True
             Append the input data to the existing.
-        data_columns : list of columns, or True, default None
-            List of columns to create as indexed data columns for on-disk
-            queries, or True to use all columns. By default only the axes
-            of the object are indexed. See `here
-            <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
+        complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
+            Specifies the compression library to be used.
+            These additional compressors for Blosc are supported
+            (default if no compressor specified: 'blosc:blosclz'):
+            {'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
+            'blosc:zlib', 'blosc:zstd'}.
+            Specifying a compression library which is not available issues
+            a ValueError.
+        complevel : int, 0-9, default None
+            Specifies a compression level for data.
+            A value of 0 or None disables compression.
+        columns : default None
+            This parameter is currently not accepted, try data_columns.
         min_itemsize : int, dict, or None
             Dict of columns that specify minimum str sizes.
         nan_rep : str
@@ -1278,11 +1288,26 @@ def append(
             Size to chunk the writing.
         expectedrows : int
             Expected TOTAL row size of this table.
-        encoding : default None
-            Provide an encoding for str.
         dropna : bool, default False, optional
             Do not write an ALL nan row to the store settable
             by the option 'io.hdf.dropna_table'.
+        data_columns : list of columns, or True, default None
+            List of columns to create as indexed data columns for on-disk
+            queries, or True to use all columns. By default only the axes
+            of the object are indexed. See `here
+            <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#query-via-data-columns>`__.
+        encoding : default None
+            Provide an encoding for str.
+        errors : str, default 'strict'
+            The error handling scheme to use for encoding errors.
+            The default is 'strict' meaning that encoding errors raise a
+            UnicodeEncodeError. Other possible values are 'ignore', 'replace' and
+            'xmlcharrefreplace' as well as any other name registered with
+            codecs.register_error that can handle UnicodeEncodeErrors.
+
+        See Also
+        --------
+        HDFStore.append_to_multiple : Append to multiple tables.

         Notes
         -----
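The parameters this commit documents (`data_columns`, `complib`, `complevel`, `encoding`) are all accepted by `HDFStore.append` itself. A minimal usage sketch, assuming the optional PyTables dependency (`tables`) is installed; the file name and column names are arbitrary:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"A": range(5), "B": list("abcde")})

# Arbitrary path for the example store.
path = os.path.join(tempfile.mkdtemp(), "demo.h5")

with pd.HDFStore(path) as store:
    # data_columns=["A"] indexes column A for on-disk "where" queries;
    # complib/complevel compress the appended table.
    store.append("frame", df, data_columns=["A"], complib="blosc", complevel=5)
    # Query on the indexed data column without loading the whole table.
    subset = store.select("frame", where="A > 2")

print(subset["A"].tolist())
```

Note that `axes` and `columns`, per the added docstring text, are not accepted; column selection for indexing goes through `data_columns`.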

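The `errors` parameter documented above reuses Python's standard codec error handlers rather than anything pandas-specific. A quick illustration of the schemes the docstring names, including a custom handler registered via `codecs.register_error` (the handler name "dash" is made up for the example):

```python
import codecs

text = "naïve"  # 'ï' is not encodable as ASCII

# 'strict' (the default) raises UnicodeEncodeError
try:
    text.encode("ascii", errors="strict")
except UnicodeEncodeError:
    pass

print(text.encode("ascii", errors="ignore"))             # b'nave'
print(text.encode("ascii", errors="replace"))            # b'na?ve'
print(text.encode("ascii", errors="xmlcharrefreplace"))  # b'na&#239;ve'

# Any name registered with codecs.register_error also works; a handler
# returns (replacement, position to resume encoding from).
codecs.register_error("dash", lambda exc: ("-", exc.end))
print(text.encode("ascii", errors="dash"))               # b'na-ve'
```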