Instead of the simple majority strategy (``ConsensusStrategy``) an
``UnanimousStrategy`` can be used to require the lock to be acquired in all
the stores.

Reliability
-----------

The component guarantees that the same resource can't be locked twice as long
as the component is used in the following way.

Remote Stores
~~~~~~~~~~~~~

Remote stores (:ref:`MemcachedStore <lock-store-memcached>` and
:ref:`RedisStore <lock-store-redis>`) use a unique token to recognize the true
owner of the lock. This token is stored in the
:class:`Symfony\\Component\\Lock\\Key` object and is used internally by the
``Lock``, therefore this key must not be shared between processes (session,
caching, fork, ...).

.. caution::

    Do not share a key between processes.

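Because the token lives in the ``Key``, each process should create its own
``Lock`` from the factory and keep it for itself. A minimal sketch, assuming
``$factory`` is a lock factory configured with a remote store::

    // each process builds its own Lock (and thus its own Key/token)
    $lock = $factory->createLock('invoice-publication');

    if ($lock->acquire()) {
        try {
            // ... perform the task
        } finally {
            // only the owner of the token can release the lock
            $lock->release();
        }
    }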

Every concurrent process must store the ``Lock`` on the same server, otherwise
two different machines may allow two different processes to acquire the same
``Lock``.

.. caution::

    To guarantee that the same server will always be used, do not use
    Memcached behind a LoadBalancer, a cluster or round-robin DNS. Even if the
    main server is down, the calls must not be forwarded to a backup or
    failover server.

Expiring Stores
~~~~~~~~~~~~~~~

Expiring stores (:ref:`MemcachedStore <lock-store-memcached>` and
:ref:`RedisStore <lock-store-redis>`) guarantee that the lock is acquired
only for the defined duration of time. If the task takes longer to be
accomplished, then the lock can be released by the store and acquired by
someone else.

The ``Lock`` provides several methods to check its health. The ``isExpired()``
method provides a quick way to check whether or not its lifetime is over,
while the ``getRemainingLifetime()`` method returns its time to live in
seconds.

With the above methods, a more robust code would be::

    // ...
    $lock = $factory->createLock('invoice-publication', 30);

    $lock->acquire();
    while (!$finished) {
        if ($lock->getRemainingLifetime() <= 5) {
            if ($lock->isExpired()) {
                // reliability was lost, perform a rollback or send a notification
                throw new \RuntimeException('Lock lost during the overall process');
            }

            $lock->refresh();
        }

        // Perform the task whose duration MUST be less than 5 seconds
    }

.. caution::

    Choose wisely the lifetime of the ``Lock`` and check that its remaining
    time to live is sufficient to perform the task.

.. caution::

    Storing a ``Lock`` takes time. Even if, most of the time, it takes only a
    few milliseconds, the network may have trouble and the duration of this
    simple task could go up to a few seconds. Take that into account when
    choosing the right TTL.

By design, locks are stored on servers with a defined lifetime. If the date or
time of the machine changes, a lock could be released sooner than expected.

.. caution::

    To guarantee that the date won't change, the NTP service should be
    disabled and the date should only be updated while the service is stopped.

FlockStore
~~~~~~~~~~

By using the file system, this ``Store`` is reliable as long as concurrent
processes use the same physical directory to store locks.

Processes must run on the same machine, virtual machine or container.
Be careful when updating a Kubernetes or Swarm service, because for a short
period of time there can be two containers running in parallel.

The absolute path to the directory must remain the same. Be careful with
symlinks on the path that could change at any time: Capistrano and blue/green
deployments often use that trick. Be careful when the path to that directory
changes between two deployments.

Some file systems (such as some types of NFS) do not support locking.

.. caution::

    All concurrent processes MUST use the same physical file system by running
    on the same machine and using the same absolute path to the locks
    directory.

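When the lock directory matters (symlinked release paths, shared hosts), it is
safer to pass an explicit absolute path when creating the store. A minimal
sketch (the directory shown is only an example)::

    use Symfony\Component\Lock\Store\FlockStore;

    // pin the store to a fixed absolute path so that a symlink switch
    // during a deployment doesn't silently change the lock directory
    $store = new FlockStore('/var/lock/myapp');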

By definition, usage of the ``FlockStore`` in an HTTP context is incompatible
with multiple front servers, unless you are sure that the same resource will
always be locked on the same machine, or unless you use a well configured
shared file system.

Files on the file system can be removed during maintenance operations, for
instance to clean up the ``/tmp`` directory or after a reboot of the machine
when the directory uses ``tmpfs``. It's not an issue if the lock is released
when the process ends, but it is if the ``Lock`` is reused between requests.

.. caution::

    Do not store locks on a volatile file system if they have to be reused
    during several requests.

MemcachedStore
~~~~~~~~~~~~~~

The way Memcached works is to store items in memory, which means that by using
the :ref:`MemcachedStore <lock-store-memcached>` the locks are not persisted
and may disappear by mistake at any time.

If the Memcached service or the machine hosting it restarts, every lock would
be lost without notifying the running processes.

.. caution::

    To avoid that someone else acquires a lock after a restart, it's
    recommended to delay the service start and wait at least as long as the
    longest lock TTL.

By default Memcached uses an LRU mechanism to remove old entries when the
service needs space to add new items.

.. caution::

    The number of items stored in Memcached must be under control. If that's
    not possible, LRU should be disabled and locks should be stored in a
    dedicated Memcached service away from the cache.

When the Memcached service is shared and used for multiple purposes, locks
could be removed by mistake. For instance, some implementations of the PSR-6
``clear()`` method use the Memcached ``flush()`` method, which purges and
removes everything.

.. caution::

    The ``flush()`` method MUST NOT be called, or locks should be stored in a
    dedicated Memcached service away from the cache.

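One way to isolate locks from the cache is to point the store at its own
Memcached server. A minimal sketch (host and port are examples)::

    use Symfony\Component\Lock\Store\MemcachedStore;

    // a Memcached connection dedicated to locks, never shared with the
    // application cache (and thus never flushed by it)
    $memcached = new \Memcached();
    $memcached->addServer('locks.example.com', 11211);

    $store = new MemcachedStore($memcached);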

RedisStore
~~~~~~~~~~

The way Redis works is to store items in memory, which means that by using
the :ref:`RedisStore <lock-store-redis>` the locks are not persisted
and may disappear by mistake at any time.

If the Redis service or the machine hosting it restarts, every lock would
be lost without notifying the running processes.

.. caution::

    To avoid that someone else acquires a lock after a restart, it's
    recommended to delay the service start and wait at least as long as the
    longest lock TTL.

.. tip::

    Redis can be configured to persist items on disk, but this option would
    slow down writes on the service. This could go against other uses of the
    server.

When the Redis service is shared and used for multiple purposes, locks could
be removed by mistake.

.. caution::

    The ``FLUSHDB`` command MUST NOT be called, or locks should be stored in a
    dedicated Redis service away from the cache.

CombinedStore
~~~~~~~~~~~~~

Combined stores allow storing locks across several backends. It's a common
mistake to think that the lock mechanism will be more reliable. This is wrong.
The ``CombinedStore`` will be, at best, as reliable as the least reliable of
all managed stores. As soon as one managed store returns erroneous
information, the ``CombinedStore`` won't be reliable.

.. caution::

    All concurrent processes MUST use the same configuration, with the same
    amount of managed stores and the same endpoints.

.. tip::

    Instead of using a cluster of Redis or Memcached servers, it's better to
    use a ``CombinedStore`` with a single server per managed store.

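Following that tip, a ``CombinedStore`` over two independent single-server
Redis stores could look like this sketch (the hostnames are examples, and the
``UnanimousStrategy`` requires the lock to be acquired in both stores)::

    use Symfony\Component\Lock\Store\CombinedStore;
    use Symfony\Component\Lock\Store\RedisStore;
    use Symfony\Component\Lock\Strategy\UnanimousStrategy;

    $stores = [];
    foreach (['redis1.example.com', 'redis2.example.com'] as $host) {
        // one independent connection per managed store
        $redis = new \Redis();
        $redis->connect($host);

        $stores[] = new RedisStore($redis);
    }

    $store = new CombinedStore($stores, new UnanimousStrategy());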

SemaphoreStore
~~~~~~~~~~~~~~

Semaphores are handled at the kernel level; to be reliable, processes must run
on the same machine, virtual machine or container.
Be careful when updating a Kubernetes or Swarm service, because for a short
period of time there can be two containers running in parallel.

.. caution::

    All concurrent processes MUST use the same machine. Before starting a
    concurrent process on a new machine, check that the other processes are
    stopped on the old one.

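Creating the store itself is straightforward; the constraint above is purely
operational, since the store takes no remote endpoint::

    use Symfony\Component\Lock\Store\SemaphoreStore;

    // semaphores live in the local kernel, so every process that must be
    // mutually excluded has to run on this same host
    $store = new SemaphoreStore();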

Overall
~~~~~~~

Changing the configuration of stores should be done very carefully. For
instance, during the deployment of a new version, processes with the new
configuration MUST NOT be started while old processes with the old
configuration are still running.

.. _`locks`: https://en.wikipedia.org/wiki/Lock_(computer_science)
.. _Packagist: https://packagist.org/packages/symfony/lock
.. _`PHP semaphore functions`: http://php.net/manual/en/book.sem.php