
Commit a346130

try adding codeautolink
1 parent b442f0d commit a346130

File tree

6 files changed: +46 −28 lines

examples/case_studies/blackbox_external_likelihood.ipynb
Lines changed: 1 addition & 1 deletion

@@ -73,7 +73,7 @@
 "However, this is not necessarily that simple if you have a model function, or probability distribution, that, for example, relies on external code that you have little/no control over (and may even be, for example, wrapped `C` code rather than Python). This can be problematic when you need to pass parameters set as PyMC3 distributions to these external functions; your external function probably wants you to pass it floating point numbers rather than PyMC3 distributions!\n",
 "\n",
 "```python\n",
-"import pymc3 as pm:\n",
+"import pymc3 as pm\n",
 "from external_module import my_external_func # your external function!\n",
 "\n",
 "# set up your model\n",
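The problem this notebook addresses can be made concrete with a tiny sketch. The function name `my_external_func` comes from the notebook's own snippet, but this float-only body is an invented stand-in for illustration, not the notebook's actual external code:

```python
# Hypothetical stand-in for wrapped C code: it only understands plain
# floats, not PyMC3 random-variable objects.
def my_external_func(m, c, x):
    # evaluate a straight-line model pointwise
    return [m * xi + c for xi in x]

x = [0.0, 0.25, 0.5, 0.75, 1.0]

# Passing concrete floats works fine...
y = my_external_func(2.0, 0.5, x)
print(y[-1])  # 2.5

# ...but passing a PyMC3 distribution object in place of m or c would fail,
# which is why the notebook wraps the call in a custom Theano Op.
```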

examples/case_studies/conditional-autoregressive-model.ipynb
Lines changed: 13 additions & 9 deletions

@@ -122,7 +122,7 @@
 "\n",
 "The classical `WinBUGS` implementation (more information [here](http://glau.ca/?p=340)):\n",
 "\n",
-"```python\n",
+"```stan\n",
 "model\n",
 "{\n",
 " for (i in 1 : regions) {\n",
@@ -2854,7 +2854,7 @@
 "metadata": {},
 "source": [
 "Then in the `Stan` model:\n",
-"```\n",
+"```stan\n",
 "model {\n",
 " phi ~ multi_normal_prec(zeros, tau * (D - alpha * W));\n",
 " ...\n",
@@ -3004,7 +3004,7 @@
 "Note that in the node $\phi \sim \mathcal{N}(0, [D_\tau (I - \alpha B)]^{-1})$, we are computing the log-likelihood for a multivariate Gaussian distribution, which might not scale well in high dimensions. We can take advantage of the fact that the covariance matrix here $[D_\tau (I - \alpha B)]^{-1}$ is **sparse**, and there are faster ways to compute its log-likelihood. \n",
 "\n",
 "For example, a more efficient sparse representation of the CAR in `Stan`:\n",
-"```python\n",
+"```stan\n",
 "functions {\n",
 " /**\n",
 " * Return the log probability of a proper conditional autoregressive (CAR) prior \n",
@@ -3040,9 +3040,9 @@
 " - tau * (phit_D * phi - alpha * (phit_W * phi)));\n",
 " }\n",
 "}\n",
-"```python\n",
-"with the data transformed in the model:\n",
 "```\n",
+"with the data transformed in the model:\n",
+"```stan\n",
 "transformed data {\n",
 " int W_sparse[W_n, 2]; // adjacency pairs\n",
 " vector[n] D_sparse; // diagonal of D (number of neighbors for each site)\n",
@@ -3073,7 +3073,7 @@
 "}\n",
 "```\n",
 "and the likelihood:\n",
-"```\n",
+"```stan\n",
 "model {\n",
 " phi ~ sparse_car(tau, alpha, W_sparse, D_sparse, lambda, n, W_n);\n",
 "}\n",
@@ -3297,6 +3297,7 @@
 "In `Stan`, there is an option to write a `generated quantities` block for sample generation. Doing the same in PyMC3, however, is not recommended. \n",
 "\n",
 "Consider the following simple example:\n",
+"\n",
 "```python\n",
 "# Data\n",
 "x = np.array([1.1, 1.9, 2.3, 1.8])\n",
@@ -3312,9 +3313,12 @@
 " p = pm.Deterministic('p', pm.math.sigmoid(mu))\n",
 " count = pm.Binomial('count', n=10, p=p, shape=10)\n",
 "```\n",
+"\n",
 "where we intended to use \n",
+"\n",
 "```python\n",
-"count = pm.Binomial('count', n=10, p=p, shape=10)```\n",
+"count = pm.Binomial('count', n=10, p=p, shape=10)\n",
+"```\n",
 "to generate posterior prediction. However, if the new RV added to the model is a discrete variable, it can introduce odd behavior into the trace. See [issue #1990](https://github.com/pymc-devs/pymc3/issues/1990) for related discussion."
 ]
 },
@@ -3486,7 +3490,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -3500,7 +3504,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.6"
+"version": "3.9.7"
 }
 },
 "nbformat": 4,
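The density the notebook's `sparse_car` Stan function computes, $\phi \sim \mathcal{N}(0, [\tau(D - \alpha W)]^{-1})$, can be checked on a toy case in plain Python. This is a dense 2-site sketch for illustration only; the whole point of the Stan code above is to *avoid* this dense determinant by precomputing the eigenvalues `lambda` of $D^{-1/2} W D^{-1/2}$:

```python
import math

def car_logp(phi, tau, alpha, D_diag, W):
    # Log density of phi ~ N(0, Q^{-1}) with precision Q = tau * (D - alpha * W):
    #   0.5 * log|Q| - 0.5 * phi' Q phi - (n/2) * log(2*pi)
    n = len(phi)
    Q = [[tau * ((D_diag[i] if i == j else 0.0) - alpha * W[i][j])
          for j in range(n)] for i in range(n)]
    det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]  # 2x2 determinant only
    quad = sum(phi[i] * Q[i][j] * phi[j] for i in range(n) for j in range(n))
    return 0.5 * math.log(det) - 0.5 * quad - 0.5 * n * math.log(2 * math.pi)

# Two sites, each the other's single neighbour: D = diag(1, 1), W = off-diagonal
print(car_logp([0.0, 0.0], 1.0, 0.5, [1.0, 1.0], [[0, 1], [1, 0]]))
```

At `phi = 0` the quadratic form vanishes, so the value reduces to `0.5 * log(det) - n/2 * log(2*pi)`, which is the normalising constant the sparse trick computes cheaply.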

examples/conf.py
Lines changed: 25 additions & 7 deletions

@@ -21,6 +21,7 @@
     "sphinxext.opengraph",
     "sphinx_copybutton",
     "sphinxcontrib.bibtex",
+    "sphinx_codeautolink",
 ]

 # List of patterns, relative to source directory, that match files and
@@ -110,6 +111,7 @@

 # MyST config
 myst_enable_extensions = ["colon_fence", "deflist", "dollarmath", "amsmath"]
+jupyter_execute_notebooks = "off"

 # bibtex config
 bibtex_bibfiles = ["references.bib"]
@@ -120,17 +122,33 @@
 # ogp_site_url = "https://predictablynoisy.com"
 # ogp_image = "https://predictablynoisy.com/_static/profile-bw.png"

-# Temporarily stored as off until we fix it
-jupyter_execute_notebooks = "off"
+# codeautolink config
+from IPython.core.inputtransformer2 import TransformerManager
+
+
+def ipython_cell_transform(source):
+    out = TransformerManager().transform_cell(source)
+    return source, out
+
+
+# codeautolink
+codeautolink_custom_blocks = {
+    "ipython3": ipython_cell_transform,
+}
+codeautolink_autodoc_inject = False
+codeautolink_global_preface = """
+import arviz as az
+import pymc3 as pm
+"""

 # intersphinx mappings
 intersphinx_mapping = {
     "aesara": ("https://aesara.readthedocs.io/en/latest/", None),
     "arviz": ("https://arviz-devs.github.io/arviz/", None),
-    # "mpl": ("https://matplotlib.org/", None),
-    # "numpy": ("https://numpy.org/doc/stable/", None),
-    # "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
+    "mpl": ("https://matplotlib.org/", None),
+    "numpy": ("https://numpy.org/doc/stable/", None),
+    "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
     "pymc": ("https://docs.pymc.io/en/stable/", None),
-    # "scipy": ("https://docs.scipy.org/doc/scipy/reference/", None),
-    # "xarray": ("http://xarray.pydata.org/en/stable/", None),
+    "scipy": ("https://docs.scipy.org/doc/scipy/reference/", None),
+    "xarray": ("http://xarray.pydata.org/en/stable/", None),
 }
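The `codeautolink_custom_blocks` hook registered above expects a callable that takes a block's source and returns a pair: the source to display and pure-Python source for sphinx-codeautolink to analyze. The conf.py uses IPython's `TransformerManager` for the second element; the stand-in below is a deliberately naive substitute (the `naive_cell_transform` name and the comment-out-magics logic are invented for this sketch) that shows the shape of the contract without an IPython dependency:

```python
def naive_cell_transform(source):
    # Naive stand-in for TransformerManager().transform_cell: comment out
    # IPython-only syntax (%magics, !shell) so the result parses as Python.
    cleaned = []
    for line in source.splitlines():
        if line.lstrip().startswith(("%", "!")):
            cleaned.append("# " + line)
        else:
            cleaned.append(line)
    # Return (displayed source, analyzable source), mirroring the
    # (source, out) pair ipython_cell_transform returns.
    return source, "\n".join(cleaned)

cell = "%matplotlib inline\nimport pymc3 as pm"
shown, parsed = naive_cell_transform(cell)
print(parsed)  # the magic line is now a comment
```

The real `TransformerManager` handles far more (cell magics, `?` help syntax, line continuations), which is why the committed config uses it rather than anything hand-rolled.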

examples/getting_started.ipynb
Lines changed: 2 additions & 2 deletions

@@ -253,7 +253,7 @@
 "\n",
 "Following instantiation of the model, the subsequent specification of the model components is performed inside a `with` statement:\n",
 "\n",
-"```python\n",
+"```\n",
 "with basic_model:\n",
 "```\n",
 "This creates a *context manager*, with our `basic_model` as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the `with` statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with `basic_model` right after we create them. If you try to create a new random variable without a `with model:` statement, it will raise an error since there is no obvious model for the variable to be added to.\n",
@@ -3840,7 +3840,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.6"
+"version": "3.9.7"
 }
 },
 "nbformat": 4,
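The `with basic_model:` context-manager idiom described in this notebook can be illustrated with a toy model class. `ToyModel` and `register` are invented names for this sketch and are not PyMC3 internals, but the mechanism (a class-level stack of open contexts that new objects attach themselves to) is the same idea:

```python
class ToyModel:
    # Class-level stack of active contexts, so nested `with` blocks work
    _context = []

    def __init__(self):
        self.variables = []

    def __enter__(self):
        ToyModel._context.append(self)
        return self

    def __exit__(self, *exc):
        ToyModel._context.pop()


def register(name):
    # Mimics how a random variable attaches itself to the innermost open model
    if not ToyModel._context:
        raise TypeError("No model on context stack")
    ToyModel._context[-1].variables.append(name)
    return name


basic_model = ToyModel()
with basic_model:
    register("mu")
    register("sigma")

print(basic_model.variables)  # ['mu', 'sigma']
```

Calling `register` outside any `with` block raises an error, matching the notebook's remark that a variable created without a `with model:` statement has no obvious model to attach to.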

examples/pymc3_howto/sampling_compound_step.ipynb
Lines changed: 4 additions & 9 deletions

@@ -63,10 +63,10 @@
 " rv1 = ... # random variable 1 (continuous)\n",
 " rv2 = ... # random variable 2 (continuous)\n",
 " rv3 = ... # random variable 3 (categorical)\n",
-" ...\n",
+" #...\n",
 " step1 = pm.Metropolis([rv1, rv2])\n",
 " step2 = pm.CategoricalGibbsMetropolis([rv3])\n",
-" trace = pm.sample(..., step=[step1, step2]...)\n",
+" trace = pm.sample(..., step=[step1, step2])\n",
 "```\n",
 "The compound step now contains a list of `methods`. At each sampling step, it iterates over these methods, taking a `point` as input. In each step a new `point` is proposed as an output; if rejected by the Metropolis-Hastings criterion, the original input `point` sticks around as the output. "
 ]
@@ -678,11 +678,6 @@
 "The concern with mixing discrete and continuous sampling is that the change in discrete parameters will affect the continuous distribution's geometry so that the adaptation (i.e., the tuned mass matrix and step size) may be inappropriate for the Hamiltonian Monte Carlo sampling. HMC/NUTS is hypersensitive to its tuning parameters (mass matrix and step size). Another issue is that we also don't know how many iterations we have to run to get a decent sample when the discrete parameters change. Though it hasn't been fully evaluated, it seems that if the discrete parameter is in low dimensions (e.g., 2-class mixture models, outlier detection with explicit discrete labeling), the mixing of discrete sampling with HMC/NUTS works OK. However, it is much less efficient than marginalizing out the discrete parameters. And sometimes it can be observed that the Markov chains get stuck quite often. In order to evaluate this more properly, one can use a simulation-based method to look at the posterior coverage and establish the computational correctness, as explained in [Cook, Gelman, and Rubin 2006](https://amstat.tandfonline.com/doi/abs/10.1198/106186006x136976)."
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": []
-},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -724,7 +719,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -738,7 +733,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.2"
+"version": "3.9.7"
 },
 "latex_envs": {
 "LaTeX_envs_menu_present": true,
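The compound-step iteration this notebook describes (each method takes a `point` and returns a new `point`) can be sketched in plain Python. The `CompoundStep` name matches PyMC3's internal class, but this minimal version is illustrative only and omits tuning, acceptance bookkeeping, and rejection handling:

```python
class CompoundStep:
    def __init__(self, methods):
        self.methods = methods  # one step method per group of variables

    def step(self, point):
        # Iterate over the methods; each proposes an update for its own
        # variables. A real sampler would return the input point unchanged
        # when the Metropolis-Hastings criterion rejects the proposal.
        for method in self.methods:
            point = method(point)
        return point


# Two toy "step methods", each updating only its own entry of the point dict
step1 = lambda p: {**p, "rv1": p["rv1"] + 1.0}  # stand-in for Metropolis
step2 = lambda p: {**p, "rv3": 1 - p["rv3"]}    # stand-in for CategoricalGibbsMetropolis

sampler = CompoundStep([step1, step2])
new_point = sampler.step({"rv1": 0.0, "rv3": 0})
print(new_point)  # {'rv1': 1.0, 'rv3': 1}
```

This makes the key property visible: within one compound step, later methods see the point already updated by earlier methods.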

requirements-docs.txt
Lines changed: 1 addition & 0 deletions

@@ -6,3 +6,4 @@ sphinx-copybutton
 sphinxcontrib-bibtex
 ablog
 sphinxext-opengraph
+git+https://github.com/felix-hilden/sphinx-codeautolink

0 commit comments