@@ -274,6 +274,233 @@ Instead of the simple majority strategy (``ConsensusStrategy``) an
``UnanimousStrategy`` can be used to require the lock to be acquired in all
the stores.

+ .. caution::
+
+     In order to get high availability when using the ``ConsensusStrategy``, the
+     minimum cluster size must be three servers. This allows the cluster to keep
+     working when a single server fails (because this strategy requires that the
+     lock is acquired in more than half of the servers).
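+
+ For instance, here is a minimal sketch of combining three Redis servers with
+ the ``UnanimousStrategy`` (the host names below are hypothetical)::
+
+     use Symfony\Component\Lock\Factory;
+     use Symfony\Component\Lock\Store\CombinedStore;
+     use Symfony\Component\Lock\Store\RedisStore;
+     use Symfony\Component\Lock\Strategy\UnanimousStrategy;
+
+     $stores = [];
+     foreach (['redis1', 'redis2', 'redis3'] as $server) {
+         $redis = new \Redis();
+         $redis->connect($server);
+
+         $stores[] = new RedisStore($redis);
+     }
+
+     // the lock must be acquired in ALL the stores to be considered acquired
+     $store = new CombinedStore($stores, new UnanimousStrategy());
+     $factory = new Factory($store);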
+
+ Reliability
+ -----------
+
+ The component guarantees that the same resource can't be locked twice as long
+ as the component is used in the following way.
+
+ Remote Stores
+ ~~~~~~~~~~~~~
+
+ Remote stores (:ref:`MemcachedStore <lock-store-memcached>` and
+ :ref:`RedisStore <lock-store-redis>`) use a unique token to recognize the true
+ owner of the lock. This token is stored in the
+ :class:`Symfony\\Component\\Lock\\Key` object and is used internally by the
+ ``Lock``, therefore this key must not be shared between processes (session,
+ caching, fork, ...).
+
+ .. caution::
+
+     Do not share a key between processes.
+
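+ For instance, instead of passing a ``Lock`` or its ``Key`` around, each process
+ should create its own ``Lock`` through the factory. A minimal sketch (the
+ resource name is hypothetical)::
+
+     // in every process, create the Lock from the factory with the same
+     // resource name; never serialize the Lock or its Key to share it
+     $lock = $factory->createLock('invoice-1234');
+
+     if ($lock->acquire()) {
+         // this process owns the lock; do the job, then release it
+         $lock->release();
+     }
+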
+ Every concurrent process must store the ``Lock`` on the same server. Otherwise two
+ different machines may allow two different processes to acquire the same ``Lock``.
+
+ .. caution::
+
+     To guarantee that the same server will always be used, do not use Memcached
+     behind a LoadBalancer, a cluster or round-robin DNS. Even if the main server
+     is down, the calls must not be forwarded to a backup or failover server.
+
+ Expiring Stores
+ ~~~~~~~~~~~~~~~
+
+ Expiring stores (:ref:`MemcachedStore <lock-store-memcached>` and
+ :ref:`RedisStore <lock-store-redis>`) guarantee that the lock is acquired
+ only for the defined duration of time. If the task takes longer to be
+ accomplished, then the lock can be released by the store and acquired by
+ someone else.
+
+ The ``Lock`` provides several methods to check its health. The ``isExpired()``
+ method checks whether or not its lifetime is over and the ``getRemainingLifetime()``
+ method returns its time to live in seconds.
+
+ Using the above methods, a more robust code would be::
+
+     // ...
+     $lock = $factory->createLock('invoice-publication', 30);
+
+     $lock->acquire();
+     while (!$finished) {
+         if ($lock->getRemainingLifetime() <= 5) {
+             if ($lock->isExpired()) {
+                 // lock was lost, perform a rollback or send a notification
+                 throw new \RuntimeException('Lock lost during the overall process');
+             }
+
+             $lock->refresh();
+         }
+
+         // Perform the task whose duration MUST be less than 5 seconds
+     }
+
+ .. caution::
+
+     Choose wisely the lifetime of the ``Lock`` and check whether its remaining
+     time to live is enough to perform the task.
+
+ .. caution::
+
+     Storing a ``Lock`` usually takes a few milliseconds, but network conditions
+     may increase that time a lot (up to a few seconds). Take that into account
+     when choosing the right TTL.
+
+ By design, locks are stored in servers with a defined lifetime. If the date or
+ time of the machine changes, a lock could be released sooner than expected.
+
+ .. caution::
+
+     To guarantee that the date won't change, the NTP service should be disabled
+     and the date should only be updated when the service is stopped.
+
+ FlockStore
+ ~~~~~~~~~~
+
+ By using the file system, this ``Store`` is reliable as long as concurrent
+ processes use the same physical directory to store locks.
+
+ Processes must run on the same machine, virtual machine or container.
+ Be careful when updating a Kubernetes or Swarm service because for a short
+ period of time, there can be two running containers in parallel.
+
+ The absolute path to the directory must remain the same. Be careful of symlinks
+ that could change at any time: Capistrano and blue/green deployments often use
+ that trick. Be careful when the path to that directory changes between two
+ deployments.
+
+ Some file systems (such as some types of NFS) do not support locking.
+
+ .. caution::
+
+     All concurrent processes must use the same physical file system by running
+     on the same machine and using the same absolute path to the locks directory.
+
+ By definition, usage of ``FlockStore`` in an HTTP context is incompatible
+ with multiple front servers, unless you ensure that the same resource will
+ always be locked on the same machine or use a well-configured shared file
+ system.
+
+ Files on the file system can be removed during maintenance operations, for
+ instance to clean up the ``/tmp`` directory or after a reboot of the machine
+ when the directory uses tmpfs. It's not an issue if the lock is released when
+ the process ends, but it is if the ``Lock`` is reused between requests.
+
+ .. caution::
+
+     Do not store locks on a volatile file system if they have to be reused in
+     several requests.
+
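+ To avoid these pitfalls, the directory used by the store can be set explicitly.
+ A minimal sketch, assuming ``/var/lock/my-app`` is a persistent, non-volatile
+ directory shared by all concurrent processes (the path is only an example)::
+
+     use Symfony\Component\Lock\Factory;
+     use Symfony\Component\Lock\Store\FlockStore;
+
+     // use an explicit, resolved absolute path instead of the default
+     // system temporary directory
+     $store = new FlockStore('/var/lock/my-app');
+     $factory = new Factory($store);
+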
+ MemcachedStore
+ ~~~~~~~~~~~~~~
+
+ The way Memcached works is to store items in memory. That means that by using
+ the :ref:`MemcachedStore <lock-store-memcached>` the locks are not persisted
+ and may disappear by mistake at any time.
+
+ If the Memcached service or the machine hosting it restarts, every lock would
+ be lost without notifying the running processes.
+
+ .. caution::
+
+     To avoid that someone else acquires a lock after a restart, it's recommended
+     to delay service start and wait at least as long as the longest lock TTL.
+
+ By default Memcached uses an LRU mechanism to remove old entries when the
+ service needs space to add new items.
+
+ .. caution::
+
+     The number of items stored in Memcached must be under control. If that's
+     not possible, LRU should be disabled and locks should be stored in a
+     dedicated Memcached service away from the cache.
+
+ When the Memcached service is shared and used for multiple purposes, locks
+ could be removed by mistake. For instance, some implementations of the PSR-6
+ ``clear()`` method use Memcached's ``flush()`` method, which purges and
+ removes everything.
+
+ .. caution::
+
+     The method ``flush()`` must not be called, or locks should be stored in a
+     dedicated Memcached service away from the cache.
+
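+ A minimal sketch of connecting to a Memcached instance dedicated to locks (the
+ host name below is hypothetical)::
+
+     use Symfony\Component\Lock\Store\MemcachedStore;
+
+     // a single, dedicated server: not shared with the cache, not behind
+     // a load balancer, and never flushed
+     $memcached = new \Memcached();
+     $memcached->addServer('memcached-locks.example.com', 11211);
+
+     $store = new MemcachedStore($memcached);
+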
+ RedisStore
+ ~~~~~~~~~~
+
+ The way Redis works is to store items in memory. That means that by using
+ the :ref:`RedisStore <lock-store-redis>` the locks are not persisted
+ and may disappear by mistake at any time.
+
+ If the Redis service or the machine hosting it restarts, every lock would
+ be lost without notifying the running processes.
+
+ .. caution::
+
+     To avoid that someone else acquires a lock after a restart, it's recommended
+     to delay service start and wait at least as long as the longest lock TTL.
+
+ .. tip::
+
+     Redis can be configured to persist items on disk, but this option would
+     slow down writes on the service. This could go against other uses of the
+     server.
+
+ When the Redis service is shared and used for multiple purposes, locks could
+ be removed by mistake.
+
+ .. caution::
+
+     The command ``FLUSHDB`` must not be called, or locks should be stored in a
+     dedicated Redis service away from the cache.
+
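+ A minimal sketch of connecting to a Redis instance dedicated to locks (the
+ host name below is hypothetical)::
+
+     use Symfony\Component\Lock\Store\RedisStore;
+
+     // a dedicated Redis instance: FLUSHDB is never issued against it,
+     // so a cache clear can't wipe the locks
+     $redis = new \Redis();
+     $redis->connect('redis-locks.example.com');
+
+     $store = new RedisStore($redis);
+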
+ CombinedStore
+ ~~~~~~~~~~~~~
+
+ Combined stores allow storing locks across several backends. It's a common
+ mistake to think that the lock mechanism will be more reliable. This is wrong:
+ the ``CombinedStore`` will be, at best, as reliable as the least reliable of
+ all managed stores. As soon as one managed store returns erroneous information,
+ the ``CombinedStore`` won't be reliable.
+
+ .. caution::
+
+     All concurrent processes must use the same configuration, with the same
+     number of managed stores and the same endpoints.
+
+ .. tip::
+
+     Instead of using a cluster of Redis or Memcached servers, it's better to use
+     a ``CombinedStore`` with a single server per managed store.
+
+ SemaphoreStore
+ ~~~~~~~~~~~~~~
+
+ Semaphores are handled at the kernel level. In order to be reliable, processes
+ must run on the same machine, virtual machine or container. Be careful when
+ updating a Kubernetes or Swarm service because for a short period of time, there
+ can be two running containers in parallel.
+
+ .. caution::
+
+     All concurrent processes must use the same machine. Before starting a
+     concurrent process on a new machine, check that other processes are stopped
+     on the old one.
+
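+ Creating the store takes no configuration, which is a reminder that it only
+ works locally; a minimal sketch::
+
+     use Symfony\Component\Lock\Store\SemaphoreStore;
+
+     // semaphores live in the kernel of the local machine, so every
+     // process using this store must run on the same host
+     $store = new SemaphoreStore();
+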
+ Overall
+ ~~~~~~~
+
+ Changing the configuration of stores should be done very carefully, for
+ instance during the deployment of a new version. Processes with the new
+ configuration must not be started while old processes with the old
+ configuration are still running.
+
.. _`locks`: https://en.wikipedia.org/wiki/Lock_(computer_science)
.. _Packagist: https://packagist.org/packages/symfony/lock
.. _`PHP semaphore functions`: http://php.net/manual/en/book.sem.php