
Commit fe09dc6

Merge branch 'main' of https://github.com/arangodb/docs-hugo into aql-timezone
2 parents: 0cf4257 + 611714b

File tree

7 files changed: +46 −7 lines

README.md

Lines changed: 1 addition & 1 deletion

@@ -778,7 +778,7 @@ Netlify supports server-side redirects configured with a text file
 This is helpful when renaming folders with many subfolders and files because
 there is support for splatting and placeholders (but not regular expressions). See
 [Redirect options](https://docs.netlify.com/routing/redirects/redirect-options/)
-for details. The configuration file is `site/content/_redirects`.
+for details. The configuration file is `site/static/_redirects`.
 
 Otherwise, the following steps are necessary for moving content:
 1. Rename file or folder
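For context on the redirect file being moved above: a Netlify `_redirects` file holds one rule per line, with a splat (`*`) matched in the source path and reused as `:splat` in the destination, and named placeholders like `:slug`. A hypothetical sketch (the paths are made up, not taken from this repository):

```
# Redirect a renamed folder and all of its subpages, keeping the tail of the path
/3.11/old-section/*  /3.11/new-section/:splat  301

# Placeholder example: reuse a path segment in the destination
/release-notes/:version  /release-notes/version-:version/  301
```

Rules are applied top to bottom; the first match wins.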

site/config/_default/config.yaml

Lines changed: 3 additions & 0 deletions

@@ -16,6 +16,9 @@ module:
   - source: "content/images"
     target: "assets/images"
 
+  - source: "static"
+    target: "static"
+
 markup:
   highlight:
     noClasses: false
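For context, Hugo module mounts map a source directory into one of Hugo's component folders, and anything mounted to `static` is copied to the root of the published site, which is where Netlify looks for `_redirects`. A hedged sketch of how the full mounts section might read after this change (the surrounding keys are assumed, since the full file is not shown here):

```yaml
# Hypothetical reconstruction of the mounts section (keys above "- source" assumed)
module:
  mounts:
    - source: "content/images"
      target: "assets/images"

    - source: "static"
      target: "static"
```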

site/content/3.11/develop/http-api/queries/aql-queries.md

Lines changed: 3 additions & 3 deletions

@@ -178,9 +178,9 @@ Content-type: application/json
 }
 ```
 
-If the `allowRetry` query option is set to `true`, then the response object
-contains a `nextBatchId` attribute, except for the last batch (if `hasMore` is
-`false`). If retrieving a result batch fails because of a connection issue, you
+The response object contains a `nextBatchId` attribute, except for the last batch
+(when `hasMore` is `false`). If the `allowRetry` query option is set to `true`
+and if retrieving a result batch fails because of a connection issue, you
 can ask for that batch again using the `POST /_api/cursor/<cursor-id>/<batch-id>`
 endpoint. The first batch has an ID of `1` and the value is incremented by 1
 with every batch. Every result response except the last one also includes a

site/content/3.12/aql/high-level-operations/upsert.md

Lines changed: 14 additions & 0 deletions

@@ -243,6 +243,20 @@ UPSERT { a: 1234 }
   OPTIONS { indexHint: … , forceIndexHint: true }
 ```
 
+### `readOwnWrites`
+
+The `readOwnWrites` option allows an `UPSERT` operation to process its inputs one
+by one. The default value is `true`. When enabled, the `UPSERT` operation can
+observe its own writes and can handle modifying the same target document multiple
+times in the same query.
+
+When the option is set to `false`, an `UPSERT` operation processes its inputs
+in batches. Normally, a batch has 1000 inputs, which can lead to a faster execution.
+However, when using batches, the `UPSERT` operation essentially cannot observe its own writes.
+You should only set the `readOwnWrites` option to `false` if you can
+guarantee that the input of the `UPSERT` leads to disjoint documents being
+inserted, updated, or replaced.
+
 ## Returning documents
 
 `UPSERT` statements can optionally return data. To do so, they need to be followed
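A hedged AQL sketch of a query where the option added above matters (the collection `users` and the input values are hypothetical, not taken from the docs):

```
FOR i IN 1..3                       /* three inputs targeting the same document */
  UPSERT { name: "counter" }
  INSERT { name: "counter", value: 1 }
  UPDATE { value: OLD.value + 1 }
  IN users
  OPTIONS { readOwnWrites: true }   /* default: each input sees the prior write */
```

With `readOwnWrites: true`, the second and third inputs find the document the first input inserted and update it. With `readOwnWrites: false`, all three inputs would sit in one batch, none would see the insert, and the operation could attempt to insert the same document several times, which is why the option should stay `true` unless the inputs are known to be disjoint.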

site/content/3.12/develop/http-api/queries/aql-queries.md

Lines changed: 3 additions & 3 deletions

@@ -178,9 +178,9 @@ Content-type: application/json
 }
 ```
 
-If the `allowRetry` query option is set to `true`, then the response object
-contains a `nextBatchId` attribute, except for the last batch (if `hasMore` is
-`false`). If retrieving a result batch fails because of a connection issue, you
+The response object contains a `nextBatchId` attribute, except for the last batch
+(when `hasMore` is `false`). If the `allowRetry` query option is set to `true`
+and if retrieving a result batch fails because of a connection issue, you
 can ask for that batch again using the `POST /_api/cursor/<cursor-id>/<batch-id>`
 endpoint. The first batch has an ID of `1` and the value is incremented by 1
 with every batch. Every result response except the last one also includes a
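A hedged sketch of the retry flow described in this hunk (the cursor ID `12345` is made up; the endpoint names come from the text above):

```
POST /_api/cursor HTTP/1.1
Content-type: application/json

{
  "query": "FOR i IN 1..5000 RETURN i",
  "batchSize": 1000,
  "options": { "allowRetry": true }
}
```

If fetching batch 2 then fails because of a connection issue, the client asks for the same batch again instead of advancing the cursor:

```
POST /_api/cursor/12345/2 HTTP/1.1
```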

site/content/3.12/release-notes/version-3.12/whats-new-in-3-12.md

Lines changed: 22 additions & 0 deletions

@@ -175,6 +175,21 @@ UPDATE { logins: OLD.logins + 1 } IN users
 
 Read more about [`UPSERT` operations](../../aql/high-level-operations/upsert.md) in AQL.
 
+### `readOwnWrites` option for `UPSERT` operations
+
+A `readOwnWrites` option has been added for `UPSERT` operations. The default
+value is `true` and the behavior is identical to previous versions of ArangoDB that
+do not have this option. When enabled, an `UPSERT` operation processes its
+inputs one by one. This way, the operation can observe its own writes and can
+handle modifying the same target document multiple times in the same query.
+
+When the option is set to `false`, an `UPSERT` operation processes its inputs
+in batches. Normally, a batch has 1000 inputs, which can lead to a faster execution.
+However, when using batches, the `UPSERT` operation cannot observe its own writes.
+Therefore, you should only set the `readOwnWrites` option to `false` if you can
+guarantee that the input of the `UPSERT` leads to disjoint documents being
+inserted, updated, or replaced.
+
 ### Added AQL functions
 
 The new `PARSE_COLLECTION()` and `PARSE_KEY()` functions let you extract the
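The context line above is cut off mid-sentence. Assuming `PARSE_COLLECTION()` and `PARSE_KEY()` accept a document identifier string the way the existing `PARSE_IDENTIFIER()` function does, a sketch of the presumed behavior:

```
RETURN PARSE_COLLECTION("users/12345")  /* presumably "users" */
RETURN PARSE_KEY("users/12345")         /* presumably "12345" */
```

This is an assumption based on the function names and the truncated sentence, not on the full release notes.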
@@ -225,6 +240,13 @@ timezone, and the second date in the second timezone:
 
 See [Date functions in AQL](../../aql/functions/date.md#date_dayofweek)
 
+### Improved `move-filters-into-enumerate` optimizer rule
+
+The `move-filters-into-enumerate` optimizer rule can now also move filters into
+`EnumerateListNodes` for early pruning. This can significantly improve the
+performance of queries that do a lot of filtering on longer lists of
+non-collection data.
+
 ## Indexing
 
 ### Stored values can contain the `_id` attribute
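A hedged AQL sketch of the kind of query the optimizer-rule improvement mentioned above targets (the list contents are made up). A `FOR` loop over an array produces an `EnumerateListNode`, and with the improved rule the `FILTER` can be applied during that enumeration instead of in a separate node, so non-matching rows are pruned early:

```
LET values = [ { x: 1 }, { x: 2 }, { x: 3 } ]  /* non-collection data */
FOR v IN values
  FILTER v.x >= 2                              /* candidate for early pruning */
  RETURN v
```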
File renamed without changes.
