
Commit 0f42bcf

auto-generating sphinx docs
1 parent c8e4ea4 commit 0f42bcf

719 files changed: +1252 / -1164 lines changed

docs/master/__config__.html

Lines changed: 1 addition & 1 deletion
@@ -159,7 +159,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/index.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/__config__.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/_jit_internal.html

Lines changed: 24 additions & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

@@ -1053,6 +1053,29 @@ <h1>Source code for torch._jit_internal</h1><div class="highlight"><pre>
 <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">7</span><span class="p">):</span>
 <span class="nb">globals</span><span class="p">()[</span><span class="s2">&quot;BroadcastingList</span><span class="si">{}</span><span class="s2">&quot;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">i</span><span class="p">)]</span> <span class="o">=</span> <span class="n">BroadcastingList1</span>
 
+
+<div class="viewcode-block" id="is_scripting"><a class="viewcode-back" href="../../jit_language_reference.html#torch.jit.is_scripting">[docs]</a><span class="k">def</span> <span class="nf">is_scripting</span><span class="p">():</span>
+<span class="sa">r</span><span class="sd">&quot;&quot;&quot;</span>
+<span class="sd"> Function that returns True when in compilation and False otherwise. This</span>
+<span class="sd"> is useful especially with the @unused decorator to leave code in your</span>
+<span class="sd"> model that is not yet TorchScript compatible.</span>
+<span class="sd"> .. testcode::</span>
+
+<span class="sd"> import torch</span>
+
+<span class="sd"> @torch.jit.unused</span>
+<span class="sd"> def unsupported_linear_op(x):</span>
+<span class="sd"> return x</span>
+
+<span class="sd"> def linear(x):</span>
+<span class="sd"> if not torch.jit.is_scripting():</span>
+<span class="sd"> return torch.linear(x)</span>
+<span class="sd"> else:</span>
+<span class="sd"> return unsupported_linear_op(x)</span>
+<span class="sd"> &quot;&quot;&quot;</span>
+<span class="k">return</span> <span class="kc">False</span></div>
+
+
 <span class="c1"># Retrieves a fully-qualified name (module hierarchy + classname) for a given obj.</span>
 <span class="k">def</span> <span class="nf">_qualified_name</span><span class="p">(</span><span class="n">obj</span><span class="p">):</span>
 <span class="c1"># This special case allows us to override the qualified name on a type.</span>
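
The hunk above documents the newly documented torch.jit.is_scripting hook: in eager mode it is a plain Python function that always returns False, while inside TorchScript compilation it evaluates to True, which is what makes the @torch.jit.unused guard in the docstring useful. A minimal, runnable sketch of that guard pattern (the helper names and the use of torch.nn.functional.linear are illustrative assumptions, not taken from the diff):

    import torch
    import torch.nn.functional as F

    @torch.jit.unused
    def eager_only_logging(x):
        # Python-only code that TorchScript cannot compile; @unused makes the
        # compiler replace this body with a raising stub so callers can still
        # be scripted.
        print("mean activation:", x.mean().item())

    def linear(x, weight):
        if not torch.jit.is_scripting():
            # Taken only in eager mode, where is_scripting() returns False.
            eager_only_logging(x)
        return F.linear(x, weight)

    x, w = torch.randn(2, 4), torch.randn(3, 4)
    print(linear(x, w).shape)            # eager: logs, then torch.Size([2, 3])
    scripted = torch.jit.script(linear)  # compiles despite the eager-only helper
    print(scripted(x, w).shape)          # guarded branch skipped: torch.Size([2, 3])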

docs/master/_modules/torch/_lobpcg.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/_lowrank.html

Lines changed: 5 additions & 5 deletions
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

@@ -426,7 +426,7 @@ <h1>Source code for torch._lowrank</h1><div class="highlight"><pre>
 <span class="k">return</span> <span class="n">Q</span>
 
 
-<span class="k">def</span> <span class="nf">svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="mi">6</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
+<div class="viewcode-block" id="svd_lowrank"><a class="viewcode-back" href="../../generated/torch.svd_lowrank.html#torch.svd_lowrank">[docs]</a><span class="k">def</span> <span class="nf">svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="mi">6</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
 <span class="c1"># type: (Tensor, Optional[int], Optional[int], Optional[Tensor]) -&gt; Tuple[Tensor, Tensor, Tensor]</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;Return the singular value decomposition ``(U, S, V)`` of a matrix,</span>
 <span class="sd"> batches of matrices, or a sparse matrix :math:`A` such that</span>
@@ -471,7 +471,7 @@ <h1>Source code for torch._lowrank</h1><div class="highlight"><pre>
 <span class="n">tensor_ops</span> <span class="o">=</span> <span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">M</span><span class="p">)</span>
 <span class="k">if</span> <span class="p">(</span><span class="ow">not</span> <span class="nb">set</span><span class="p">(</span><span class="nb">map</span><span class="p">(</span><span class="nb">type</span><span class="p">,</span> <span class="n">tensor_ops</span><span class="p">))</span><span class="o">.</span><span class="n">issubset</span><span class="p">((</span><span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">,</span> <span class="nb">type</span><span class="p">(</span><span class="kc">None</span><span class="p">)))</span> <span class="ow">and</span> <span class="n">has_torch_function</span><span class="p">(</span><span class="n">tensor_ops</span><span class="p">)):</span>
 <span class="k">return</span> <span class="n">handle_torch_function</span><span class="p">(</span><span class="n">svd_lowrank</span><span class="p">,</span> <span class="n">tensor_ops</span><span class="p">,</span> <span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">M</span><span class="p">)</span>
-<span class="k">return</span> <span class="n">_svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">M</span><span class="p">)</span>
+<span class="k">return</span> <span class="n">_svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">M</span><span class="p">)</span></div>
 
 
 <span class="k">def</span> <span class="nf">_svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="mi">6</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
@@ -511,7 +511,7 @@ <h1>Source code for torch._lowrank</h1><div class="highlight"><pre>
 <span class="k">return</span> <span class="n">U</span><span class="p">,</span> <span class="n">S</span><span class="p">,</span> <span class="n">V</span>
 
 
-<div class="viewcode-block" id="pca_lowrank"><a class="viewcode-back" href="../../generated/torch.pca_lowrank.html#torch.pca_lowrank">[docs]</a><span class="k">def</span> <span class="nf">pca_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">center</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">):</span>
+<span class="k">def</span> <span class="nf">pca_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">center</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">):</span>
 <span class="c1"># type: (Tensor, Optional[int], bool, int) -&gt; Tuple[Tensor, Tensor, Tensor]</span>
 <span class="sa">r</span><span class="sd">&quot;&quot;&quot;Performs linear Principal Component Analysis (PCA) on a low-rank</span>
 <span class="sd"> matrix, batches of such matrices, or sparse matrix.</span>
@@ -612,7 +612,7 @@ <h1>Source code for torch._lowrank</h1><div class="highlight"><pre>
 <span class="k">return</span> <span class="n">_svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">M</span><span class="p">)</span>
 <span class="k">else</span><span class="p">:</span>
 <span class="n">C</span> <span class="o">=</span> <span class="n">A</span><span class="o">.</span><span class="n">mean</span><span class="p">(</span><span class="n">dim</span><span class="o">=</span><span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,),</span> <span class="n">keepdim</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
-<span class="k">return</span> <span class="n">_svd_lowrank</span><span class="p">(</span><span class="n">A</span> <span class="o">-</span> <span class="n">C</span><span class="p">,</span> <span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">)</span></div>
+<span class="k">return</span> <span class="n">_svd_lowrank</span><span class="p">(</span><span class="n">A</span> <span class="o">-</span> <span class="n">C</span><span class="p">,</span> <span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">)</span>
 </pre></div>
 
 </article>
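
The torch._lowrank hunks above only move the Sphinx viewcode [docs] anchor from pca_lowrank to svd_lowrank; both functions remain part of the public torch API with the signatures shown in the diff. A small usage sketch for reference (the matrix shapes and q values are arbitrary illustrations, not from the commit):

    import torch

    # Low-rank test matrix: rank ~5, shape (100, 40).
    A = torch.randn(100, 5) @ torch.randn(5, 40)

    # Approximate SVD with the defaults shown in the diff (q=6, niter=2).
    U, S, V = torch.svd_lowrank(A, q=6, niter=2)
    A_approx = U @ torch.diag(S) @ V.t()
    print(torch.dist(A, A_approx))        # small reconstruction error

    # PCA via the same machinery: center the data, keep 3 components.
    U, S, V = torch.pca_lowrank(A, q=3, center=True)
    projected = A @ V[:, :3]              # project samples onto the components
    print(projected.shape)                # torch.Size([100, 3])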

docs/master/_modules/torch/_tensor_str.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/_utils.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/autograd.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/autograd/anomaly_mode.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/autograd/function.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 

docs/master/_modules/torch/autograd/functional.html

Lines changed: 1 addition & 1 deletion
@@ -158,7 +158,7 @@
 
 
 <div class="version">
-<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+d38ed4d &#x25BC</a>
+<a href='http://pytorch.org/docs/versions.html'>1.7.0a0+e68ee78 &#x25BC</a>
 </div>
 
 
