@@ -24,8 +24,8 @@ def DecomposeOp : Op<Transform_Dialect, "structured.decompose",
     (depthwise) convolutions, into combinations of lower-dimensional equivalents
     when possible.

-    Return modes:
-    =============
+    #### Return modes
+
     This operation ignores non-Linalg ops and drops them in the return.
     If all the operations referred to by the `target` PDLOperation decompose
     properly, the transform succeeds. Otherwise the transform silently fails.
@@ -68,11 +68,11 @@ def GeneralizeOp : Op<Transform_Dialect, "structured.generalize",
     [FunctionalStyleTransformOpTrait, MemoryEffectsOpInterface,
      TransformOpInterface, TransformEachOpTrait]> {
   let description = [{
-    Transforms a named structued operation into the generic form with the
+    Transforms a named structured operation into the generic form with the
     explicit attached region.

-    Return modes:
-    =============
+    #### Return modes
+
     This operation ignores non-Linalg ops and drops them in the return.
     If all the operations referred to by the `target` PDLOperation generalize
     properly, the transform succeeds. Otherwise the transform silently fails.
@@ -100,8 +100,8 @@ def InterchangeOp : Op<Transform_Dialect, "structured.interchange",
     Interchanges the iterators of the operations pointed to by the target handle
     using the iterator interchange attribute.

-    Return modes:
-    =============
+    #### Return modes
+
     This operation ignores non-linalg::Generic ops and drops them in the return.
     This operation fails if the interchange attribute is invalid.
     If all the operations referred to by the `target` PDLOperation interchange
@@ -134,8 +134,8 @@ def PadOp : Op<Transform_Dialect, "structured.pad",
     Pads the operations pointed to by the target handle using the options
     provides as operation attributes.

-    Return modes:
-    =============
+    #### Return modes
+
     This operation ignores non-Linalg ops and drops them in the return.
     This operation may produce a definiteFailure if the padding fails for any
     reason.
@@ -174,8 +174,8 @@ def ScalarizeOp : Op<Transform_Dialect, "structured.scalarize",
     Indicates that ops of a specific kind in the given function should be
     scalarized (i.e. their dynamic dimensions tiled by 1).

-    Return modes:
-    =============
+    #### Return modes
+
     This operation ignores non-Linalg ops and drops them in the return.
     This operation produces `definiteFailure` if the scalarization fails for any
     reason.
@@ -259,8 +259,8 @@ def SplitReductionOp : Op<Transform_Dialect, "structured.split_reduction",
       - use_alloc: whether to use an alloc op to allocate the temporary
         tensor (default: do not use alloc op)

-    Return modes:
-    =============
+    #### Return modes
+
     This operation ignores non-Linalg ops and drops them in the return.
     This operation produces `definiteFailure` if the splitting fails for any
     reason.
@@ -275,8 +275,8 @@ def SplitReductionOp : Op<Transform_Dialect, "structured.split_reduction",
       - the split op and
       - the result-combining op.

-    Example (default: use_scaling_algorithm = false, use_alloc = false):
-    ====================================================================
+    #### Example (default: `use_scaling_algorithm = false, use_alloc = false`):
+
     ```
     %r = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
                                           affine_map<(d0) -> ()>],
@@ -314,8 +314,8 @@ def SplitReductionOp : Op<Transform_Dialect, "structured.split_reduction",
     } -> tensor<f32>
     ```

-    Example (use_scaling_algorithm = true, use_alloc = true):
-    =========================================================
+    #### Example (`use_scaling_algorithm = true, use_alloc = true`):
+
     Instead of introducing an ExpandShapeOp, this scaling-based implementation
     rewrites a reduction dimension `k` into `k * split_factor + kk`.
     The dimension `kk` is added as an extra parallel dimension to the
@@ -329,7 +329,7 @@ def SplitReductionOp : Op<Transform_Dialect, "structured.split_reduction",
       b. O(i, j) += O_i(kk, i, j)
     The intermediate tensor O_i is of shape (128/16)x3x5 == 8x3x5.

-    Example:
+    #### Example:

     ```
     %0 = linalg.matmul ins(%A, %B: tensor<16x256xf32>, tensor<256x32xf32>)
@@ -439,8 +439,8 @@ def VectorizeOp : Op<Transform_Dialect, "structured.vectorize",
     Note that this transformation is invalidating the handles to any payload IR
     operation that is contained inside the vectorization target.

-    Return modes:
-    =============
+    #### Return modes
+
     This operation produces `definiteFailure` if vectorization fails for any
     reason.
     The operation always returns the handle to the target op that is expected
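As a sketch of how the ops documented in this patch are driven, a transform script of this era matches payload ops through PDL and feeds the resulting handle to a `transform.structured.*` op. The pattern name `@pdl_target` and the choice of `linalg.matmul` as the payload are illustrative assumptions, not part of the patch; the exact syntax follows the transform-dialect tests contemporary with this change and may differ in later versions.

```mlir
// Hypothetical driver: match linalg.matmul via a PDL pattern, then
// generalize each matched op to its linalg.generic form.
transform.with_pdl_patterns {
^bb0(%arg0: !pdl.operation):
  pdl.pattern @pdl_target : benefit(1) {
    %args = operands
    %results = types
    %0 = operation "linalg.matmul"(%args : !pdl.range<value>)
        -> (%results : !pdl.range<type>)
    rewrite %0 with "transform.dialect"
  }
  transform.sequence %arg0 {
  ^bb1(%arg1: !pdl.operation):
    // Per the return modes above: non-Linalg ops would be dropped from
    // the returned handle; silent failure otherwise.
    %0 = pdl_match @pdl_target in %arg1
    %1 = transform.structured.generalize %0
  }
}
```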