Commit 9565c9a

percevalve (Severin Hatt) and Severin Hatt authored
Cleaning up (#365)
Co-authored-by: Severin Hatt <severinhatt@mini.local>
1 parent 8accd0d commit 9565c9a

File tree

2 files changed: +198 −507 lines changed


examples/case_studies/probabilistic_matrix_factorization.ipynb

Lines changed: 174 additions & 486 deletions
Large diffs are not rendered by default.

myst_nbs/case_studies/probabilistic_matrix_factorization.myst.md

Lines changed: 24 additions & 21 deletions
@@ -6,29 +6,27 @@ jupytext:
     format_version: 0.13
     jupytext_version: 1.13.7
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
 
+(probabilistic_matrix_factorization)=
 # Probabilistic Matrix Factorization for Making Personalized Recommendations
 
-:::{post} Sept 20, 2021
-:tags: case study,
+:::{post} June 3, 2022
+:tags: case study, product recommendation, matrix factorization
 :category: intermediate
+:author: Ruslan Salakhutdinov, Andriy Mnih, Mack Sweeney, Colin Carroll, Rob Zinkov
 :::
 
 ```{code-cell} ipython3
-%matplotlib inline
-
 import arviz as az
 import matplotlib.pyplot as plt
 import numpy as np
 import pandas as pd
-import pymc3 as pm
+import pymc as pm
 import xarray as xr
-
-print(f"Running on PyMC3 v{pm.__version__}")
 ```
 
 ```{code-cell} ipython3
@@ -42,7 +40,7 @@ az.style.use("arviz-darkgrid")
 
 So you are browsing for something to watch on Netflix and just not liking the suggestions. You just know you can do better. All you need to do is collect some ratings data from yourself and friends and build a recommendation algorithm. This notebook will guide you in doing just that!
 
-We'll start out by getting some intuition for how our model will work. Then we'll formalize our intuition. Afterwards, we'll examine the dataset we are going to use. Once we have some notion of what our data looks like, we'll define some baseline methods for predicting preferences for movies. Following that, we'll look at Probabilistic Matrix Factorization (PMF), which is a more sophisticated Bayesian method for predicting preferences. Having detailed the PMF model, we'll use PyMC3 for MAP estimation and MCMC inference. Finally, we'll compare the results obtained with PMF to those obtained from our baseline methods and discuss the outcome.
+We'll start out by getting some intuition for how our model will work. Then we'll formalize our intuition. Afterwards, we'll examine the dataset we are going to use. Once we have some notion of what our data looks like, we'll define some baseline methods for predicting preferences for movies. Following that, we'll look at Probabilistic Matrix Factorization (PMF), which is a more sophisticated Bayesian method for predicting preferences. Having detailed the PMF model, we'll use PyMC for MAP estimation and MCMC inference. Finally, we'll compare the results obtained with PMF to those obtained from our baseline methods and discuss the outcome.
 
 ## Intuition
 
@@ -305,23 +303,23 @@ Given small precision parameters, the priors on $U$ and $V$ ensure our latent va
 import logging
 import time
 
+import aesara
 import scipy as sp
-import theano
 
 # Enable on-the-fly graph computations, but ignore
 # absence of intermediate test values.
-theano.config.compute_test_value = "ignore"
+aesara.config.compute_test_value = "ignore"
 
 # Set up logging.
 logger = logging.getLogger()
 logger.setLevel(logging.INFO)
 
 
 class PMF:
-    """Probabilistic Matrix Factorization model using pymc3."""
+    """Probabilistic Matrix Factorization model using pymc."""
 
     def __init__(self, train, dim, alpha=2, std=0.01, bounds=(1, 5)):
-        """Build the Probabilistic Matrix Factorization model using pymc3.
+        """Build the Probabilistic Matrix Factorization model using pymc.
 
         :param np.ndarray train: The training data to use for learning the model.
         :param int dim: Dimensionality of the model; number of latent factors.
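For orientation while reading the next hunk: below is a minimal, runnable sketch of the prior structure this `__init__` goes on to build, written against the PyMC v4 API the commit migrates to. The toy dimensions, seed, and `pmf_model` name are illustrative assumptions, not the notebook's actual MovieLens values.

```python
# Hedged sketch of the U/V priors the PMF class builds, under assumed
# toy dimensions (n users, m movies, dim latent factors).
import numpy as np
import pymc as pm

n, m, dim, std, alpha = 10, 15, 3, 0.01, 2
rng = np.random.default_rng(42)

coords = {
    "users": np.arange(n),
    "movies": np.arange(m),
    "latent_factors": np.arange(dim),
}
with pm.Model(coords=coords) as pmf_model:
    # Zero-mean multivariate normal priors with fixed precision matrices;
    # `initval` is the PyMC v4 spelling of the old `testval` keyword.
    U = pm.MvNormal(
        "U",
        mu=0,
        tau=alpha * np.eye(dim),
        dims=("users", "latent_factors"),
        initval=rng.standard_normal(size=(n, dim)) * std,
    )
    V = pm.MvNormal(
        "V",
        mu=0,
        tau=alpha * np.eye(dim),
        dims=("movies", "latent_factors"),
        initval=rng.standard_normal(size=(m, dim)) * std,
    )
```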
@@ -362,14 +360,14 @@ class PMF:
                 mu=0,
                 tau=self.alpha_u * np.eye(dim),
                 dims=("users", "latent_factors"),
-                testval=rng.standard_normal(size=(n, dim)) * std,
+                initval=rng.standard_normal(size=(n, dim)) * std,
             )
             V = pm.MvNormal(
                 "V",
                 mu=0,
                 tau=self.alpha_v * np.eye(dim),
                 dims=("movies", "latent_factors"),
-                testval=rng.standard_normal(size=(m, dim)) * std,
+                initval=rng.standard_normal(size=(m, dim)) * std,
             )
             R = pm.Normal(
                 "R",
@@ -390,7 +388,7 @@ We'll also need functions for calculating the MAP and performing sampling on our
 
 $$ E = \frac{1}{2} \sum_{i=1}^N \sum_{j=1}^M I_{ij} (R_{ij} - U_i V_j^T)^2 + \frac{\lambda_U}{2} \sum_{i=1}^N \|U\|_{Fro}^2 + \frac{\lambda_V}{2} \sum_{j=1}^M \|V\|_{Fro}^2, $$
 
-where $\lambda_U = \alpha_U / \alpha$, $\lambda_V = \alpha_V / \alpha$, and $\|\cdot\|_{Fro}^2$ denotes the Frobenius norm {cite:p}`mnih2008advances`. Minimizing this objective function gives a local minimum, which is essentially a maximum a posteriori (MAP) estimate. While it is possible to use a fast Stochastic Gradient Descent procedure to find this MAP, we'll be finding it using the utilities built into `pymc3`. In particular, we'll use `find_MAP` with Powell optimization (`scipy.optimize.fmin_powell`). Having found this MAP estimate, we can use it as our starting point for MCMC sampling.
+where $\lambda_U = \alpha_U / \alpha$, $\lambda_V = \alpha_V / \alpha$, and $\|\cdot\|_{Fro}^2$ denotes the Frobenius norm {cite:p}`mnih2008advances`. Minimizing this objective function gives a local minimum, which is essentially a maximum a posteriori (MAP) estimate. While it is possible to use a fast Stochastic Gradient Descent procedure to find this MAP, we'll be finding it using the utilities built into `pymc`. In particular, we'll use `find_MAP` with Powell optimization (`scipy.optimize.fmin_powell`). Having found this MAP estimate, we can use it as our starting point for MCMC sampling.
 
 Since it is a reasonably complex model, we expect the MAP estimation to take some time. So let's save it after we've found it. Note that we define a function for finding the MAP below, assuming it will receive a namespace with some variables in it. Then we attach that function to the PMF class, where it will have such a namespace after initialization. The PMF class is defined in pieces this way so I can say a few things between each piece to make it clearer.
 
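As a reference for the `find_MAP` usage the paragraph above describes, a hedged sketch, reusing the illustrative `pmf_model` from the earlier sketch; in the notebook itself this step is wrapped in a method attached to the PMF class.

```python
# MAP estimate via Powell optimization, as the surrounding text describes.
# `pmf_model` is the assumed model context from the previous sketch.
with pmf_model:
    map_estimate = pm.find_MAP(method="Powell")  # dict mapping variable names to values
```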
@@ -424,9 +422,9 @@ So now our PMF class has a `map` `property` which will either be found using Pow
 ```{code-cell} ipython3
 # Draw MCMC samples.
 def _draw_samples(self, **kwargs):
-    kwargs.setdefault("chains", 1)
+    # kwargs.setdefault("chains", 1)
     with self.model:
-        self.trace = pm.sample(**kwargs, return_inferencedata=True)
+        self.trace = pm.sample(**kwargs)
 
 
 # Update our class with the sampling infrastructure.
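Two API notes explain this hunk: in PyMC v4, `pm.sample` returns an `arviz.InferenceData` by default, so `return_inferencedata=True` is redundant, and multiple chains run by default, which commenting out the `setdefault("chains", 1)` line restores. A hedged usage sketch against the illustrative model from above:

```python
# Sampling under PyMC v4 defaults: InferenceData return type and
# multiple chains, so neither keyword from the old code is needed.
with pmf_model:
    trace = pm.sample(draws=1000, tune=1000)
```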
@@ -748,16 +746,18 @@ ax.set_ylabel("RMSE");
 
 ## Summary
 
-We set out to predict user preferences for unseen movies. First we discussed the intuitive notion behind the user-user and item-item neighborhood approaches to collaborative filtering. Then we formalized our intuitions. With a firm understanding of our problem context, we moved on to exploring our subset of the Movielens data. After discovering some general patterns, we defined three baseline methods: uniform random, global mean, and mean of means. With the goal of besting our baseline methods, we implemented the basic version of Probabilistic Matrix Factorization (PMF) using `pymc3`.
+We set out to predict user preferences for unseen movies. First we discussed the intuitive notion behind the user-user and item-item neighborhood approaches to collaborative filtering. Then we formalized our intuitions. With a firm understanding of our problem context, we moved on to exploring our subset of the Movielens data. After discovering some general patterns, we defined three baseline methods: uniform random, global mean, and mean of means. With the goal of besting our baseline methods, we implemented the basic version of Probabilistic Matrix Factorization (PMF) using `pymc`.
 
 Our results demonstrate that the mean of means method is our best baseline on our prediction task. As expected, we are able to obtain a significant decrease in RMSE using the PMF MAP estimate obtained via Powell optimization. We illustrated one way to monitor convergence of an MCMC sampler with a high-dimensionality sampling space using the Frobenius norms of the sampled variables. The traceplots using this method seem to indicate that our sampler converged to the posterior. Results using this posterior showed that attempting to improve the MAP estimation using MCMC sampling actually overfit the training data and increased test RMSE. This was likely caused by the constraining of the posterior via fixed precision parameters $\alpha$, $\alpha_U$, and $\alpha_V$.
 
-As a followup to this analysis, it would be interesting to also implement the logistic and constrained versions of PMF. We expect both models to outperform the basic PMF model. We could also implement the fully Bayesian version of PMF (BPMF) {cite:p}`salakhutdinov2008bayesian`, which places hyperpriors on the model parameters to automatically learn ideal mean and precision parameters for $U$ and $V$. This would likely resolve the issue we faced in this analysis. We would expect BPMF to improve upon the MAP estimation produced here by learning more suitable hyperparameters and parameters. For a basic (but working!) implementation of BPMF in `pymc3`, see [this gist](https://gist.github.com/macks22/00a17b1d374dfc267a9a).
+As a followup to this analysis, it would be interesting to also implement the logistic and constrained versions of PMF. We expect both models to outperform the basic PMF model. We could also implement the fully Bayesian version of PMF (BPMF) {cite:p}`salakhutdinov2008bayesian`, which places hyperpriors on the model parameters to automatically learn ideal mean and precision parameters for $U$ and $V$. This would likely resolve the issue we faced in this analysis. We would expect BPMF to improve upon the MAP estimation produced here by learning more suitable hyperparameters and parameters. For a basic (but working!) implementation of BPMF in `pymc`, see [this gist](https://gist.github.com/macks22/00a17b1d374dfc267a9a).
 
 If you made it this far, then congratulations! You now have some idea of how to build a basic recommender system. These same ideas and methods can be used on many different recommendation tasks. Items can be movies, products, advertisements, courses, or even other people. Any time you can build yourself a user-item matrix with user preferences in the cells, you can use these types of collaborative filtering algorithms to predict the missing values. If you want to learn more about recommender systems, the first reference is a good place to start.
 
 +++
 
+## Authors
+
 The model discussed in this analysis was developed by Ruslan Salakhutdinov and Andriy Mnih. Code and supporting text are the original work of [Mack Sweeney](https://www.linkedin.com/in/macksweeney) with changes made to adapt the code and text for the MovieLens dataset by Colin Carroll and Rob Zinkov.
 
 +++
@@ -776,5 +776,8 @@ goldberg2001eigentaste
 
 ```{code-cell} ipython3
 %load_ext watermark
-%watermark -n -u -v -iv -w
+%watermark -n -u -v -iv -w -p aesara,aeppl,xarray
 ```
+
+:::{include} ../page_footer.md
+:::
