DOC: Remove multiple blank lines in ipython directives #41400

Merged (1 commit, May 10, 2021)
2 changes: 0 additions & 2 deletions doc/source/user_guide/basics.rst
@@ -1184,11 +1184,9 @@ a single value and returning a single value. For example:

df4


def f(x):
return len(str(x))


df4["one"].map(f)
df4.applymap(f)

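For reference, the snippet in this hunk can be exercised roughly as follows. The data here is a hypothetical stand-in for the guide's ``df4``, which is defined earlier in basics.rst:

```python
import pandas as pd

# hypothetical stand-in for the guide's df4
df4 = pd.DataFrame({"one": [1.394, 0.343], "two": [1.772, 1.912]})

def f(x):
    # length of the string representation of a single value
    return len(str(x))

mapped = df4["one"].map(f)  # element-wise over one column
print(mapped.tolist())  # → [5, 5]
```

``df4.applymap(f)`` applies the same function to every cell of the frame (this method was later renamed ``DataFrame.map`` in newer pandas releases).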
13 changes: 0 additions & 13 deletions doc/source/user_guide/cookbook.rst
@@ -494,15 +494,12 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to

S = pd.Series([i / 100.0 for i in range(1, 11)])


def cum_ret(x, y):
return x * (1 + y)


def red(x):
return functools.reduce(cum_ret, x, 1.0)


S.expanding().apply(red, raw=True)


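This hunk's snippet is self-contained and runs as written; the expanding window folds each prefix of the series into a cumulative product of ``(1 + return)``:

```python
import functools
import pandas as pd

S = pd.Series([i / 100.0 for i in range(1, 11)])

def cum_ret(x, y):
    return x * (1 + y)

def red(x):
    # fold the window into a cumulative product of (1 + return)
    return functools.reduce(cum_ret, x, 1.0)

result = S.expanding().apply(red, raw=True)
```

The first element is ``1.0 * (1 + 0.01) = 1.01``, the second ``1.01 * 1.02``, and so on.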
@@ -514,12 +511,10 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, -1, 1, 2]})
gb = df.groupby("A")


def replace(g):
mask = g < 0
return g.where(mask, g[~mask].mean())


gb.transform(replace)

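Worth noting for this hunk: ``where`` keeps values where the mask is True, so as written the negative entries are kept and the non-negative entries are replaced by the group mean of the non-negative values. Runnable as-is:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, -1, 1, 2]})
gb = df.groupby("A")

def replace(g):
    mask = g < 0
    # keep entries where mask is True (the negatives); replace the rest
    # with the mean of the non-negative values in the group
    return g.where(mask, g[~mask].mean())

out = gb.transform(replace)
print(out["B"].tolist())  # → [1.0, -1.0, 1.5, 1.5]
```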
`Sort groups by aggregated data
@@ -551,13 +546,11 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
rng = pd.date_range(start="2014-10-07", periods=10, freq="2min")
ts = pd.Series(data=list(range(10)), index=rng)


def MyCust(x):
if len(x) > 2:
return x[1] * 1.234
return pd.NaT


mhc = {"Mean": np.mean, "Max": np.max, "Custom": MyCust}
ts.resample("5min").apply(mhc)
ts
@@ -803,11 +796,9 @@ Apply
index=["I", "II", "III"],
)


def SeriesFromSubList(aList):
return pd.Series(aList)


df_orgz = pd.concat(
{ind: row.apply(SeriesFromSubList) for ind, row in df.iterrows()}
)
@@ -827,12 +818,10 @@ Rolling Apply to multiple columns where function calculates a Series before a Sc
)
df


def gm(df, const):
v = ((((df["A"] + df["B"]) + 1).cumprod()) - 1) * const
return v.iloc[-1]


s = pd.Series(
{
df.index[i]: gm(df.iloc[i: min(i + 51, len(df) - 1)], 5)
@@ -859,11 +848,9 @@ Rolling Apply to multiple columns where function returns a Scalar (Volume Weight
)
df


def vwap(bars):
return (bars.Close * bars.Volume).sum() / bars.Volume.sum()


window = 5
s = pd.concat(
[
2 changes: 0 additions & 2 deletions doc/source/user_guide/groupby.rst
@@ -1617,12 +1617,10 @@ column index name will be used as the name of the inserted column:
}
)


def compute_metrics(x):
result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
return pd.Series(result, name="metrics")


result = df.groupby("a").apply(compute_metrics)

result
2 changes: 0 additions & 2 deletions doc/source/user_guide/io.rst
@@ -4648,11 +4648,9 @@ chunks.

store.append("dfeq", dfeq, data_columns=["number"])


def chunks(l, n):
return [l[i: i + n] for i in range(0, len(l), n)]


evens = [2, 4, 6, 8, 10]
coordinates = store.select_as_coordinates("dfeq", "number=evens")
for c in chunks(coordinates, 2):
Expand Down
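The ``chunks`` helper in this hunk is plain Python and easy to sanity-check in isolation (the ``store.select_as_coordinates`` part needs an HDF5 store, so it is omitted here):

```python
def chunks(l, n):
    # split list l into successive sublists of length n (last may be shorter)
    return [l[i: i + n] for i in range(0, len(l), n)]

print(chunks([2, 4, 6, 8, 10], 2))  # → [[2, 4], [6, 8], [10]]
```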
1 change: 1 addition & 0 deletions doc/source/user_guide/merging.rst
@@ -1578,4 +1578,5 @@ to ``True``.
You may also keep all the original values even if they are equal.

.. ipython:: python

df.compare(df2, keep_shape=True, keep_equal=True)
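A minimal sketch of the ``compare`` call in this hunk, with hypothetical frames standing in for the guide's ``df``/``df2``: ``keep_shape=True`` retains every row and column, and ``keep_equal=True`` shows equal values instead of NaN:

```python
import pandas as pd

# hypothetical frames; the guide defines df and df2 earlier on the page
df = pd.DataFrame({"col1": ["a", "b"], "col2": [1.0, 2.0]})
df2 = df.copy()
df2.loc[0, "col2"] = 9.0

out = df.compare(df2, keep_shape=True, keep_equal=True)
# columns are a MultiIndex of (column, "self"/"other")
print(out[("col2", "other")].tolist())  # → [9.0, 2.0]
```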
2 changes: 0 additions & 2 deletions doc/source/user_guide/reshaping.rst
@@ -18,7 +18,6 @@ Reshaping by pivoting DataFrame objects

import pandas._testing as tm


def unpivot(frame):
N, K = frame.shape
data = {
@@ -29,7 +28,6 @@ Reshaping by pivoting DataFrame objects
columns = ["date", "variable", "value"]
return pd.DataFrame(data, columns=columns)


df = unpivot(tm.makeTimeDataFrame(3))

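The middle of ``unpivot`` is collapsed in this diff; for context, a sketch of the full function as it appears in the guide, fed a small hand-made frame instead of ``tm.makeTimeDataFrame`` (an internal testing helper):

```python
import numpy as np
import pandas as pd

def unpivot(frame):
    # melt a wide frame into (date, variable, value) records
    N, K = frame.shape
    data = {
        "value": frame.to_numpy().ravel("F"),       # column-major flatten
        "variable": np.asarray(frame.columns).repeat(N),
        "date": np.tile(np.asarray(frame.index), K),
    }
    return pd.DataFrame(data, columns=["date", "variable", "value"])

wide = pd.DataFrame({"A": [1, 2], "B": [3, 4]}, index=["d1", "d2"])
stacked = unpivot(wide)
```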
Data is often stored in so-called "stacked" or "record" format:
1 change: 0 additions & 1 deletion doc/source/user_guide/sparse.rst
@@ -325,7 +325,6 @@ In the example below, we transform the ``Series`` to a sparse representation of
row_levels=["A", "B"], column_levels=["C", "D"], sort_labels=True
)


A
A.todense()
rows
5 changes: 0 additions & 5 deletions doc/source/user_guide/text.rst
@@ -297,24 +297,19 @@ positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
pat = r"[a-z]+"


def repl(m):
return m.group(0)[::-1]


pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
pat, repl, regex=True
)


# Using regex groups
pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"


def repl(m):
return m.group("two").swapcase()


pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
pat, repl, regex=True
)
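The first ``str.replace`` snippet in this hunk runs as written (the ``np.nan`` element is dropped here to keep the check simple); the callable receives each regex match and returns the replacement string:

```python
import pandas as pd

# reverse every lowercase alphabetic word
pat = r"[a-z]+"

def repl(m):
    return m.group(0)[::-1]

out = pd.Series(["foo 123", "bar baz"], dtype="string").str.replace(
    pat, repl, regex=True
)
print(out.tolist())  # → ['oof 123', 'rab zab']
```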
6 changes: 0 additions & 6 deletions doc/source/user_guide/timeseries.rst
@@ -1422,7 +1422,6 @@ An example of how holidays and holiday calendars are defined:
MO,
)


class ExampleCalendar(AbstractHolidayCalendar):
rules = [
USMemorialDay,
@@ -1435,7 +1434,6 @@ An example of how holidays and holiday calendars are defined:
),
]


cal = ExampleCalendar()
cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))

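Part of the ``rules`` list is collapsed in this hunk; a minimal sketch of a calendar in the same shape (the second rule here is an assumption standing in for the guide's elided entries):

```python
import datetime
import pandas as pd
from pandas.tseries.holiday import (
    AbstractHolidayCalendar,
    Holiday,
    USMemorialDay,
    nearest_workday,
)

class ExampleCalendar(AbstractHolidayCalendar):
    rules = [
        USMemorialDay,
        # hypothetical stand-in for the elided rules in the diff
        Holiday("July 4th", month=7, day=4, observance=nearest_workday),
    ]

cal = ExampleCalendar()
holidays = cal.holidays(
    datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31)
)
```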
@@ -1707,13 +1705,11 @@ We can instead only resample those groups where we have points as follows:
from functools import partial
from pandas.tseries.frequencies import to_offset


def round(t, freq):
# round a Timestamp to a specified freq
freq = to_offset(freq)
return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)


ts.groupby(partial(round, freq="3T")).sum()

.. _timeseries.aggregate:
@@ -2255,11 +2251,9 @@ To convert from an ``int64`` based YYYYMMDD representation.
s = pd.Series([20121231, 20141130, 99991231])
s


def conv(x):
return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")


s.apply(conv)
s.apply(conv)[2]

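The ``conv`` snippet in this hunk is self-contained; integer division and modulo peel the year, month, and day out of each YYYYMMDD integer:

```python
import pandas as pd

s = pd.Series([20121231, 20141130, 99991231])

def conv(x):
    # x // 10000 → year, x // 100 % 100 → month, x % 100 → day
    return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")

periods = s.apply(conv)
print(periods.iloc[0])  # → 2012-12-31
```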
5 changes: 1 addition & 4 deletions doc/source/user_guide/window.rst
@@ -212,7 +212,6 @@ from present information back to past information. This allows the rolling windo

df


.. _window.custom_rolling_window:

Custom window rolling
@@ -294,13 +293,12 @@ conditions. In these cases it can be useful to perform forward-looking rolling w
This :func:`BaseIndexer <pandas.api.indexers.BaseIndexer>` subclass implements a closed fixed-width
forward-looking rolling window, and we can use it as follows:

.. ipython:: ipython
.. ipython:: python

from pandas.api.indexers import FixedForwardWindowIndexer
indexer = FixedForwardWindowIndexer(window_size=2)
df.rolling(indexer, min_periods=1).sum()

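The corrected directive's snippet can be checked with a small hypothetical frame (the guide's ``df`` is defined earlier on the page); each forward-looking window covers the current row plus the next ``window_size - 1`` rows:

```python
import pandas as pd
from pandas.api.indexers import FixedForwardWindowIndexer

# hypothetical data standing in for the guide's df
df = pd.DataFrame({"values": range(5)})
indexer = FixedForwardWindowIndexer(window_size=2)
out = df.rolling(indexer, min_periods=1).sum()
print(out["values"].tolist())  # → [1.0, 3.0, 5.0, 7.0, 4.0]
```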

.. _window.rolling_apply:

Rolling apply
@@ -319,7 +317,6 @@ the windows are cast as :class:`Series` objects (``raw=False``) or ndarray objec
s = pd.Series(range(10))
s.rolling(window=4).apply(mad, raw=True)


.. _window.numba_engine:

Numba engine