Commit 945d1e0

Merge remote-tracking branch 'upstream/master' into numpy-ea

2 parents: 2c615b0 + 08c920e


57 files changed: 277 additions & 6,624 deletions

doc/source/advanced.rst

Lines changed: 8 additions & 7 deletions
@@ -778,12 +778,12 @@ a ``Categorical`` will return a ``CategoricalIndex``, indexed according to the c
 of the **passed** ``Categorical`` dtype. This allows one to arbitrarily index these even with
 values **not** in the categories, similarly to how you can reindex **any** pandas index.
 
-.. ipython :: python
+.. ipython:: python
 
-   df2.reindex(['a','e'])
-   df2.reindex(['a','e']).index
-   df2.reindex(pd.Categorical(['a','e'],categories=list('abcde')))
-   df2.reindex(pd.Categorical(['a','e'],categories=list('abcde'))).index
+   df2.reindex(['a', 'e'])
+   df2.reindex(['a', 'e']).index
+   df2.reindex(pd.Categorical(['a', 'e'], categories=list('abcde')))
+   df2.reindex(pd.Categorical(['a', 'e'], categories=list('abcde'))).index
 
 .. warning::
 
@@ -1040,7 +1040,8 @@ than integer locations. Therefore, with an integer axis index *only*
 label-based indexing is possible with the standard tools like ``.loc``. The
 following code will generate exceptions:
 
-.. code-block:: python
+.. ipython:: python
+   :okexcept:
 
    s = pd.Series(range(5))
    s[-1]
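For context (an illustrative sketch, not part of this commit): on an integer axis index, ``s[-1]`` raises ``KeyError`` because ``-1`` is looked up as a label rather than a position; positional access goes through ``.iloc``:

    import pandas as pd

    s = pd.Series(range(5))   # labels are 0..4
    # s[-1] raises KeyError: -1 is treated as a label, not a position
    s.iloc[-1]                # positional access: returns 4
    s.loc[4]                  # label-based access: also returns 4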
@@ -1130,7 +1131,7 @@ index can be somewhat complicated. For example, the following does not work:
 
 ::
 
-    s.loc['c':'e'+1]
+    s.loc['c':'e' + 1]
 
 A very common use case is to limit a time series to start and end at two
 specific dates. To enable this, we made the design to make label-based
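A short illustration (not from the diff) of why ``'e' + 1`` is unnecessary in the first place: label slices on a sorted index are inclusive of both endpoints, unlike positional slices:

    import pandas as pd

    s = pd.Series(range(5), index=list('abcde'))
    s.loc['c':'e']   # includes BOTH 'c' and 'e': values 2, 3, 4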

doc/source/categorical.rst

Lines changed: 4 additions & 7 deletions
@@ -977,21 +977,17 @@ categorical (categories and ordering). So if you read back the CSV file you have
 relevant columns back to `category` and assign the right categories and categories ordering.
 
 .. ipython:: python
-    :suppress:
 
-
-.. ipython:: python
-
-    from pandas.compat import StringIO
+    import io
     s = pd.Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'd']))
     # rename the categories
     s.cat.categories = ["very good", "good", "bad"]
     # reorder the categories and add missing categories
     s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
     df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})
-    csv = StringIO()
+    csv = io.StringIO()
     df.to_csv(csv)
-    df2 = pd.read_csv(StringIO(csv.getvalue()))
+    df2 = pd.read_csv(io.StringIO(csv.getvalue()))
     df2.dtypes
     df2["cats"]
     # Redo the category
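The hunk cuts off at ``# Redo the category``; a sketch of what that step amounts to (an illustrative reconstruction reusing ``df2`` from above, not the doc's verbatim continuation): after the CSV round trip the column comes back as plain strings, so the dtype and category order must be reapplied:

    # hypothetical continuation; the actual doc text may differ slightly
    df2["cats"] = df2["cats"].astype("category")
    df2["cats"] = df2["cats"].cat.set_categories(
        ["very bad", "bad", "medium", "good", "very good"])
    df2["cats"]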
@@ -1206,6 +1202,7 @@ Use ``copy=True`` to prevent such a behaviour or simply don't reuse ``Categorica
     cat
 
 .. note::
+
     This also happens in some cases when you supply a NumPy array instead of a ``Categorical``:
     using an int array (e.g. ``np.array([1,2,3,4])``) will exhibit the same behavior, while using
     a string array (e.g. ``np.array(["a","b","c","a"])``) will not.
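To make the note concrete (an illustration of the described behavior, not text from the commit; semantics vary by version, and pandas with copy-on-write no longer behaves this way): an int array can be held by reference, so writes through the Series may show up in the original array, while a string array is converted to object dtype and copied:

    import numpy as np
    import pandas as pd

    a = np.array([1, 2, 3, 4])   # int array: may be held by reference
    s = pd.Series(a)
    s.iloc[1] = 99
    a                            # array([ 1, 99,  3,  4]) -- mutated too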

doc/source/conf.py

Lines changed: 4 additions & 1 deletion
@@ -296,7 +296,10 @@
 np.random.seed(123456)
 np.set_printoptions(precision=4, suppress=True)
 pd.options.display.max_rows = 15
-"""
+
+import os
+os.chdir('{}')
+""".format(os.path.dirname(os.path.dirname(__file__)))
 
 
 html_context = {

doc/source/cookbook.rst

Lines changed: 1 addition & 2 deletions
@@ -1236,7 +1236,7 @@ the following Python code will read the binary file ``'binary.dat'`` into a
 pandas ``DataFrame``, where each element of the struct corresponds to a column
 in the frame:
 
-.. code-block:: python
+.. ipython:: python
 
    names = 'count', 'avg', 'scale'
 
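The hunk truncates after the first line of the snippet. An illustrative reconstruction of how such a struct file is usually read (the formats below are made up; the real cookbook entry defines its own offsets and formats): a structured NumPy dtype plus ``np.fromfile`` maps each struct field to a DataFrame column:

    import numpy as np
    import pandas as pd

    # hypothetical field layout for the assumed 'binary.dat'
    names = 'count', 'avg', 'scale'
    formats = 'i4', 'f8', 'f4'
    dt = np.dtype({'names': names, 'formats': formats}, align=True)
    df = pd.DataFrame(np.fromfile('binary.dat', dt))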
@@ -1399,7 +1399,6 @@ of the data values:
 
 .. ipython:: python
 
-
    def expand_grid(data_dict):
        rows = itertools.product(*data_dict.values())
        return pd.DataFrame.from_records(rows, columns=data_dict.keys())
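A quick usage sketch (not part of the diff): ``expand_grid`` builds the Cartesian product of the value lists, one column per key:

    import itertools
    import pandas as pd

    def expand_grid(data_dict):
        rows = itertools.product(*data_dict.values())
        return pd.DataFrame.from_records(rows, columns=data_dict.keys())

    expand_grid({'height': [60, 70], 'weight': [100, 140, 180]})
    # 6 rows: one per (height, weight) combination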

doc/source/gotchas.rst

Lines changed: 1 addition & 3 deletions
@@ -301,9 +301,7 @@ Byte-Ordering Issues
 --------------------
 Occasionally you may have to deal with data that were created on a machine with
 a different byte order than the one on which you are running Python. A common
-symptom of this issue is an error like:
-
-.. code-block:: python-traceback
+symptom of this issue is an error like:::
 
     Traceback
         ...
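For background (the usual remedy, shown for context rather than taken from this commit; ``newbyteorder`` applies to NumPy 1.x): convert the array to the native byte order before constructing pandas objects:

    import numpy as np
    import pandas as pd

    x = np.array(list(range(10)), '>i4')   # big-endian data
    newx = x.byteswap().newbyteorder()     # force native byte order
    s = pd.Series(newx)                    # now safe to construct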

doc/source/io.rst

Lines changed: 1 addition & 1 deletion
@@ -4879,7 +4879,7 @@ below and the SQLAlchemy `documentation <https://docs.sqlalchemy.org/en/latest/c
 
 If you want to manage your own connections you can pass one of those instead:
 
-.. code-block:: python
+.. ipython:: python
 
     with engine.connect() as conn, conn.begin():
         data = pd.read_sql_table('data', conn)
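A self-contained sketch for readers trying the snippet in isolation (io.rst defines its own ``engine`` earlier; the in-memory SQLite engine and the ``data`` table here are assumptions for the demo):

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine('sqlite:///:memory:')        # assumed engine
    pd.DataFrame({'a': [1, 2]}).to_sql('data', engine,
                                       index=False)     # assumed table

    with engine.connect() as conn, conn.begin():
        data = pd.read_sql_table('data', conn)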

doc/source/merging.rst

Lines changed: 2 additions & 0 deletions
@@ -1122,6 +1122,8 @@ This is equivalent but less verbose and more memory efficient / faster than this
                      labels=['left', 'right'], vertical=False);
     plt.close('all');
 
+.. _merging.join_with_two_multi_indexes:
+
 Joining with two MultiIndexes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
