
Commit 68c0f34

Alvaro Tejero-Cantero authored and wesm committed
DOC: Typos + little PEP8 spacing instances and reverted inadvertent add at doc/source/conf.py
1 parent 9f3a0d2 commit 68c0f34

File tree

2 files changed: +15, -17 lines


doc/source/conf.py

Lines changed: 0 additions & 2 deletions
@@ -19,8 +19,6 @@
 # sys.path.append(os.path.abspath('.'))
 sys.path.insert(0, os.path.abspath('../sphinxext'))
 
-sys.path.insert(0, '/home/e0/repos/jrb_pytb7')
-
 sys.path.extend([
 
     # numpy standard doc extensions

doc/source/io.rst

Lines changed: 15 additions & 15 deletions
@@ -1118,11 +1118,11 @@ everying in the sub-store and BELOW, so be *careful*.
 Storing Mixed Types in a Table
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Storing mixed-dtype data is supported. Strings are store as a
+Storing mixed-dtype data is supported. Strings are stored as a
 fixed-width using the maximum size of the appended column. Subsequent
 appends will truncate strings at this length.
 
-Passing ``min_itemsize = { `values` : size }`` as a parameter to append
+Passing ``min_itemsize={`values`: size}`` as a parameter to append
 will set a larger minimum for the string columns. Storing ``floats,
 strings, ints, bools, datetime64`` are currently supported. For string
 columns, passing ``nan_rep = 'nan'`` to append will change the default
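
As a side note, here is a minimal sketch of the ``min_itemsize`` and ``nan_rep`` options described in this hunk (the file and key names are illustrative, and PyTables is assumed to be installed):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'A': np.random.randn(8), 'string': 'foo'})
    df.loc[3:5, 'string'] = np.nan

    with pd.HDFStore('store_sketch.h5') as store:
        # reserve 50 bytes per string value and write missing strings as 'nan'
        store.append('df', df, min_itemsize={'values': 50}, nan_rep='nan')
        store.select('df')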
@@ -1136,7 +1136,7 @@ defaults to `nan`.
     df_mixed['int'] = 1
     df_mixed['bool'] = True
     df_mixed['datetime64'] = Timestamp('20010102')
-    df_mixed.ix[3:5,['A','B','string','datetime64']] = np.nan
+    df_mixed.ix[3:5,['A', 'B', 'string', 'datetime64']] = np.nan
 
     store.append('df_mixed', df_mixed, min_itemsize = {'values': 50})
     df_mixed1 = store.select('df_mixed')
@@ -1150,7 +1150,7 @@ Storing Multi-Index DataFrames
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Storing multi-index dataframes as tables is very similar to
-storing/selecting from homogenous index DataFrames.
+storing/selecting from homogeneous index DataFrames.
 
 .. ipython:: python
 
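
For orientation, a small sketch of appending and selecting a multi-index frame as described above (frame and key names are made up for this example):

    import numpy as np
    import pandas as pd

    index = pd.MultiIndex.from_arrays([['foo', 'foo', 'bar', 'bar'],
                                       ['one', 'two', 'one', 'two']],
                                      names=['first', 'second'])
    df_mi = pd.DataFrame(np.random.randn(4, 2), index=index, columns=['A', 'B'])

    with pd.HDFStore('store_sketch.h5') as store:
        # the MultiIndex levels become columns of the underlying table
        store.append('df_mi', df_mi)
        store.select('df_mi')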
@@ -1173,7 +1173,7 @@ storing/selecting from homogenous index DataFrames.
 Querying a Table
 ~~~~~~~~~~~~~~~~
 
-``select`` and ``delete`` operations have an optional criteria that can
+``select`` and ``delete`` operations have an optional criterion that can
 be specified to select/delete only a subset of the data. This allows one
 to have a very large on-disk table and retrieve only a portion of the
 data.
@@ -1201,7 +1201,7 @@ terms.
 Queries are built up using a list of ``Terms`` (currently only
 **anding** of terms is supported). An example query for a panel might be
 specified as follows. ``['major_axis>20000102', ('minor_axis', '=',
-['A','B']) ]``. This is roughly translated to: `major_axis must be
+['A', 'B']) ]``. This is roughly translated to: `major_axis must be
 greater than the date 20000102 and the minor_axis must be A or B`
 
 .. ipython:: python
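
For context, a rough DataFrame analogue of the panel query described in this hunk, written with the string ``where`` syntax of later pandas versions rather than the ``Term`` list of this era (names are illustrative; 'B' is made queryable via ``data_columns``):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(8, 3), columns=['A', 'B', 'C'],
                      index=pd.date_range('20000101', periods=8))

    with pd.HDFStore('store_sketch.h5') as store:
        # data_columns makes 'B' individually queryable on disk
        store.append('df_query', df, data_columns=['B'])
        # both criteria are and-ed, mirroring the panel example above
        store.select('df_query', where="index > '20000102' & B > 0")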
@@ -1212,13 +1212,13 @@ greater than the date 20000102 and the minor_axis must be A or B`
 
 The ``columns`` keyword can be supplied to select to filter a list of
 the return columns, this is equivalent to passing a
-``Term('columns',list_of_columns_to_filter)``
+``Term('columns', list_of_columns_to_filter)``
 
 .. ipython:: python
 
     store.select('df', columns=['A', 'B'])
 
-Start and Stop parameters can be specified to limit the total search
+``start`` and ``stop`` parameters can be specified to limit the total search
 space. These are in terms of the total number of rows in a table.
 
 .. ipython:: python
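
A brief sketch of the ``columns`` filter together with the ``start``/``stop`` row window mentioned above (key and file names are illustrative):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(100, 2), columns=['A', 'B'])

    with pd.HDFStore('store_sketch.h5') as store:
        store.append('df_rows', df)
        # return only table rows 0..9, however large the table is
        store.select('df_rows', start=0, stop=10)
        # the columns keyword restricts the returned columns
        store.select('df_rows', columns=['A'], start=0, stop=10)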
@@ -1251,7 +1251,7 @@ specify. This behavior can be turned off by passing ``index=False`` to
     i.optlevel, i.kind
 
     # change an index by passing new parameters
-    store.create_table_index('df', optlevel = 9, kind = 'full')
+    store.create_table_index('df', optlevel=9, kind='full')
     i = store.root.df.table.cols.index.index
     i.optlevel, i.kind
 
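
As context, a small sketch of skipping index creation on append and building the index afterwards, per the hunk above (names are illustrative):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(10, 2), columns=['A', 'B'])

    with pd.HDFStore('store_sketch.h5') as store:
        # skip index creation while appending in bulk...
        store.append('df_noidx', df, index=False)
        # ...then build a full, high-optimization PyTables index afterwards
        store.create_table_index('df_noidx', optlevel=9, kind='full')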
@@ -1312,7 +1312,7 @@ very quickly. Note ``nan`` are excluded from the result set.
 **Replicating or**
 
 ``not`` and ``or`` conditions are unsupported at this time; however,
-``or`` operations are easy to replicate, by repeately applying the
+``or`` operations are easy to replicate, by repeatedly applying the
 criteria to the table, and then ``concat`` the results.
 
 .. ipython:: python
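
For reference, a sketch of the repeat-and-concat pattern described above, emulating an ``or`` across two criteria (names and criteria are illustrative):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(20, 2), columns=['A', 'B'])

    with pd.HDFStore('store_sketch.h5') as store:
        store.append('df_or', df, data_columns=['A'])
        # emulate "A < -1 OR A > 1" by selecting each criterion separately
        pieces = [store.select('df_or', where=crit) for crit in ['A < -1', 'A > 1']]
        result = pd.concat(pieces)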
@@ -1325,7 +1325,7 @@ criteria to the table, and then ``concat`` the results.
 **Storer Object**
 
 If you want to inspect the stored object, retrieve via
-``get_storer``. You could use this progamatically to say get the number
+``get_storer``. You could use this programmatically to say get the number
 of rows in an object.
 
 .. ipython:: python
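
A short sketch of the programmatic use mentioned above, reading the row count from the storer without loading the data (key name is illustrative):

    import numpy as np
    import pandas as pd

    with pd.HDFStore('store_sketch.h5') as store:
        store.append('df_n', pd.DataFrame(np.random.randn(10, 2), columns=['A', 'B']))
        # the storer exposes table metadata such as the row count
        nrows = store.get_storer('df_n').nrows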
@@ -1340,10 +1340,10 @@ New in 0.10.1 are the methods ``append_to_multple`` and
 ``select_as_multiple``, that can perform appending/selecting from
 multiple tables at once. The idea is to have one table (call it the
 selector table) that you index most/all of the columns, and perform your
-queries. The other table(s) are data tables that are indexed the same
+queries. The other table(s) are data tables that are indexed the same as
 the selector table. You can then perform a very fast query on the
 selector table, yet get lots of data back. This method works similar to
-having a very wide-table, but is more efficient in terms of queries.
+having a very wide table, but is more efficient in terms of queries.
 
 Note, **THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES**. This
 means, append to the tables in the same order; ``append_to_multiple``
@@ -1369,7 +1369,7 @@ table (optional) to let it have the remaining columns. The argument
     store.select('df2_mt')
 
     # as a multiple
-    store.select_as_multiple(['df1_mt','df2_mt'], where=['A>0', 'B>0'],
+    store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
                              selector = 'df1_mt')
 
 
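
For completeness, a sketch of the companion ``append_to_multiple`` call that would produce the two tables selected in this hunk (the frame itself is illustrative; the table names follow the doc example):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])

    with pd.HDFStore('store_sketch.h5') as store:
        # 'df1_mt' (the selector) gets A and B; None routes the remaining columns to 'df2_mt'
        store.append_to_multiple({'df1_mt': ['A', 'B'], 'df2_mt': None},
                                 df, selector='df1_mt')
        store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
                                 selector='df1_mt')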
@@ -1386,7 +1386,7 @@ pays to have the dimension you are deleting be the first of the
 ``indexables``.
 
 Data is ordered (on the disk) in terms of the ``indexables``. Here's a
-simple use case. You store panel type data, with dates in the
+simple use case. You store panel-type data, with dates in the
 ``major_axis`` and ids in the ``minor_axis``. The data is then
 interleaved like this:
 
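
As a rough illustration of deleting along the first indexable, a DataFrame analogue of the panel scenario above, with dates on the index playing the role of ``major_axis`` (names are illustrative):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(8, 2), columns=['A', 'B'],
                      index=pd.date_range('20000101', periods=8))

    with pd.HDFStore('store_sketch.h5') as store:
        store.append('df_dates', df)
        # deleting on the first indexable (the index) removes contiguous chunks on disk
        store.remove('df_dates', where="index > '20000104'")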
