The Lock Component
The Lock Component creates and manages locks, a mechanism to provide exclusive access to a shared resource.
Locks are used to guarantee exclusive access to some shared resource. In Symfony applications, you can use locks for example to ensure that a command is not executed more than once at the same time (on the same or different servers).
Locks are created using a :class:`Symfony\\Component\\Lock\\LockFactory` class, which in turn requires another class to manage the storage of locks:
The lock is created by calling the :method:`Symfony\\Component\\Lock\\LockFactory::createLock` method. Its first argument is an arbitrary string that represents the locked resource. Then, a call to the :method:`Symfony\\Component\\Lock\\LockInterface::acquire` method will try to acquire the lock:
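A minimal sketch of those two steps, using the local ``FlockStore`` as the storage (the resource name is arbitrary; any other store works the same way):

```php
use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Lock\Store\FlockStore;

// the store manages where and how locks are persisted
$store = new FlockStore();
$factory = new LockFactory($store);

// 'pdf-invoice-generation' is an arbitrary name chosen for this example
$lock = $factory->createLock('pdf-invoice-generation');

if ($lock->acquire()) {
    // the resource is locked: do the job here...

    $lock->release();
}
```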
If the lock cannot be acquired, the method returns ``false``. The acquire() method can be safely called repeatedly, even if the lock is already acquired.
Unlike other implementations, the Lock Component distinguishes lock instances even when they are created for the same resource. If a lock has to be used by several services, they should share the same Lock instance returned by the LockFactory::createLock method.
If you don't release the lock explicitly, it will be released automatically on instance destruction. In some cases, it can be useful to lock a resource across several requests. To disable the automatic release behavior, set the third argument of the createLock() method to ``false``.
By default, when a lock cannot be acquired, the acquire() method returns ``false`` immediately. To wait (indefinitely) until the lock can be acquired, pass ``true`` as the argument of the acquire() method. This is called a blocking lock because the execution of your application stops until the lock is acquired.
Some of the built-in Store classes support this feature. When they don't, they can be decorated with the RetryTillSaveStore class:
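For instance, ``RedisStore`` does not block natively, but it can be decorated; a sketch (the Redis connection details are illustrative):

```php
use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Lock\Store\RedisStore;
use Symfony\Component\Lock\Store\RetryTillSaveStore;

$redis = new \Redis();
$redis->connect('127.0.0.1');

// retries saving until it succeeds, turning the store into a blocking one
$store = new RetryTillSaveStore(new RedisStore($redis));
$factory = new LockFactory($store);

$lock = $factory->createLock('notification-flush');
$lock->acquire(true); // blocks here until the lock is acquired
```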
Locks created remotely are difficult to manage because there is no way for the remote Store to know if the locker process is still alive. Due to bugs, fatal errors or segmentation faults, it cannot be guaranteed that release() method will be called, which would cause the resource to be locked infinitely.
The best solution in those cases is to create expiring locks, which are released automatically after some amount of time has passed (called TTL for Time To Live). This time, in seconds, is configured as the second argument of the createLock() method. If needed, these locks can also be released early with the release() method.
The trickiest part when working with expiring locks is choosing the right TTL. If it's too short, other processes could acquire the lock before finishing the job; if it's too long and the process crashes before calling the release() method, the resource will stay locked until the timeout:
To avoid leaving the lock in a locked state, it's recommended to wrap the job in a try/catch/finally block to always try to release the expiring lock.
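A sketch of that pattern, assuming ``$factory`` is a ``LockFactory`` instance (the resource name and the 30-second TTL are illustrative):

```php
// second argument: the lock expires automatically after 30 seconds
$lock = $factory->createLock('invoice-publication', 30);

if ($lock->acquire()) {
    try {
        // do the job in less than 30 seconds...
    } finally {
        // always runs, even when the job throws an exception
        $lock->release();
    }
}
```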
In case of long-running tasks, it's better to start with a not too long TTL and then use the :method:`Symfony\\Component\\Lock\\LockInterface::refresh` method to reset the TTL to its original value:
Another useful technique for long-running tasks is to pass a custom TTL as an argument of the refresh() method to change the default lock TTL:
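A sketch combining both techniques, assuming ``$factory`` is a ``LockFactory`` instance; the chunked-work loop and the TTL values are illustrative:

```php
$lock = $factory->createLock('charts-generation', 30);

if ($lock->acquire()) {
    try {
        $finished = false; // illustrative flag, updated by the task itself
        while (!$finished) {
            // perform a small chunk of the task...

            $lock->refresh();    // reset the TTL back to the original 30 seconds
            // $lock->refresh(600); // or pass a custom TTL for a slow chunk
        }
    } finally {
        $lock->release();
    }
}
```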
This component also provides two useful methods related to expiring locks: getExpiringDate() (which returns null or a \DateTimeImmutable object) and isExpired() (which returns a boolean).
The Owner of The Lock
Locks that are acquired for the first time are owned [1]_ by the Lock instance that acquired them. If you need to check whether the current Lock instance is (still) the owner of a lock, you can use the isAcquired() method:
Because some lock stores have expiring locks (as seen and explained above), it is possible for an instance to automatically lose the lock it acquired:
A common pitfall is to use the isAcquired() method to check if a lock has already been acquired by any process. As you can see in this example, you have to use acquire() for that. The isAcquired() method only checks if the lock has been acquired by the current process!
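The pitfall, sketched (``$factory`` is assumed to be a ``LockFactory`` instance):

```php
$lock = $factory->createLock('invoice-publication');

// WRONG: isAcquired() answers "does *this* instance hold the lock?",
// so it returns false here even while another process holds the lock
if (!$lock->isAcquired()) {
    // ... not safe to assume the resource is free
}

// RIGHT: try to acquire; false means some other process owns the lock
if ($lock->acquire()) {
    try {
        // ...
    } finally {
        $lock->release();
    }
}
```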
.. [1] Technically, the true owners of the lock are the ones that share the same instance of ``Key``, not ``Lock``. But from a user perspective, ``Key`` is internal and you will likely only be working with the ``Lock`` instance, so it's easier to think of the ``Lock`` instance as the one that owns the lock.
The component includes the following built-in store types:
Store | Scope | Blocking | Expiring
---|---|---|---
:ref:`FlockStore` | local | yes | no
:ref:`MemcachedStore` | remote | no | yes
:ref:`PdoStore` | remote | no | yes
:ref:`RedisStore` | remote | no | yes
:ref:`SemaphoreStore` | local | yes | no
:ref:`ZookeeperStore` | remote | no | no
The FlockStore uses the file system on the local computer to create the locks. It does not support expiration, but the lock is automatically released when the PHP process is terminated:
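Creating the store (the path is illustrative; when omitted, the system's temporary directory is used):

```php
use Symfony\Component\Lock\Store\FlockStore;

// the argument is the directory where the lock files are created
$store = new FlockStore('/var/stores');
```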
Beware that some file systems (such as some types of NFS) do not support locking. In those cases, it's better to use a directory on a local disk drive or a remote store based on PDO, Redis or Memcached.
The MemcachedStore saves locks on a Memcached server. It requires a Memcached connection implementing the \Memcached class. This store does not support blocking, and expects a TTL to avoid stalled locks:
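Creating the store from a plain \Memcached connection (host and port are illustrative):

```php
use Symfony\Component\Lock\Store\MemcachedStore;

$memcached = new \Memcached();
$memcached->addServer('localhost', 11211);

$store = new MemcachedStore($memcached);
```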
Memcached does not support a TTL lower than 1 second.
The PdoStore saves locks in an SQL database. It requires a PDO connection, a Doctrine DBAL Connection, or a Data Source Name (DSN). This store does not support blocking, and expects a TTL to avoid stalled locks:
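A sketch using a DSN (credentials are illustrative; a \PDO or Doctrine DBAL connection object may be passed instead):

```php
use Symfony\Component\Lock\Store\PdoStore;

$databaseConnectionOrDSN = 'mysql:host=127.0.0.1;dbname=app';
$store = new PdoStore($databaseConnectionOrDSN, ['db_username' => 'myuser', 'db_password' => 'mypassword']);

// creates the table that stores the locks; call it once during setup
$store->createTable();
```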
This store does not support a TTL lower than 1 second.
A great way to set up the table in production is to call the createTable() method on your local computer and then generate a :ref:`database migration`:
The RedisStore saves locks on a Redis server. It requires a Redis connection implementing the \Redis, \RedisArray, \RedisCluster or \Predis classes. This store does not support blocking, and expects a TTL to avoid stalled locks:
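Creating the store from a \Redis connection (the host is illustrative; \RedisArray, \RedisCluster or Predis clients work the same way):

```php
use Symfony\Component\Lock\Store\RedisStore;

$redis = new \Redis();
$redis->connect('localhost');

$store = new RedisStore($redis);
```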
The SemaphoreStore uses the PHP semaphore functions to create the locks:
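It needs no configuration:

```php
use Symfony\Component\Lock\Store\SemaphoreStore;

$store = new SemaphoreStore();
```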
The CombinedStore is designed for High Availability applications because it manages several stores in sync (for example, several Redis servers). When a lock is being acquired, it forwards the call to all the managed stores, and it collects their responses. If a simple majority of stores have acquired the lock, then the lock is considered as acquired; otherwise as not acquired:
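A sketch with three Redis servers (the hostnames are illustrative):

```php
use Symfony\Component\Lock\Store\CombinedStore;
use Symfony\Component\Lock\Store\RedisStore;
use Symfony\Component\Lock\Strategy\ConsensusStrategy;

$stores = [];
foreach (['server1', 'server2', 'server3'] as $server) {
    $redis = new \Redis();
    $redis->connect($server);

    $stores[] = new RedisStore($redis);
}

// the lock is acquired when a majority of the stores (2 of 3) accept it
$store = new CombinedStore($stores, new ConsensusStrategy());
```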
Instead of the simple majority strategy (``ConsensusStrategy``), an ``UnanimousStrategy`` can be used to require the lock to be acquired in all the stores.
In order to get high availability when using the ``ConsensusStrategy``, the minimum cluster size must be three servers. This allows the cluster to keep working when a single server fails (because this strategy requires the lock to be acquired in more than half of the servers).
The ZookeeperStore saves locks on a ZooKeeper server. It requires a ZooKeeper connection implementing the \Zookeeper class. This store does not support blocking and expiration, but the lock is automatically released when the PHP process is terminated:
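Creating the store from a \Zookeeper connection (host and port are illustrative):

```php
use Symfony\Component\Lock\Store\ZookeeperStore;

$zookeeper = new \Zookeeper('localhost:2181');

$store = new ZookeeperStore($zookeeper);
```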
Zookeeper does not require a TTL as the nodes used for locking are ephemeral and die when the PHP process is terminated.
The component guarantees that the same resource can't be locked twice, as long as the component is used in the following way.
Remote stores (:ref:`MemcachedStore`, :ref:`PdoStore`, :ref:`RedisStore` and :ref:`ZookeeperStore`) use a unique token to recognize the true owner of the lock. This token is stored in the :class:`Symfony\\Component\\Lock\\Key` object and is used internally by the Lock; therefore this key must not be shared between processes (session, caching, fork, ...).
Do not share a key between processes.
Every concurrent process must store the Lock on the same server. Otherwise two different machines may allow two different processes to acquire the same Lock.
To guarantee that the same server is always used, do not use Memcached behind a load balancer, a cluster or round-robin DNS. Even if the main server is down, the calls must not be forwarded to a backup or failover server.
Expiring stores ( :ref:`MemcachedStore ` , :ref:`PdoStore ` and :ref:`RedisStore ` ) guarantee that the lock is acquired only for the defined duration of time. If the task takes longer to be accomplished, then the lock can be released by the store and acquired by someone else.
The Lock provides several methods to check its health. The isExpired() method checks whether its lifetime is over, and the getRemainingLifetime() method returns its time to live in seconds.
Using the above methods, more robust code would be:
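A sketch of such code, assuming ``$factory`` is a ``LockFactory`` backed by an expiring store; the 30-second TTL, the 5-second safety margin and the chunked loop are illustrative:

```php
$lock = $factory->createLock('invoice-publication', 30);

if (!$lock->acquire()) {
    return;
}

$finished = false; // illustrative flag, updated by the task itself
while (!$finished) {
    if ($lock->getRemainingLifetime() <= 5) {
        if ($lock->isExpired()) {
            // the lock was lost: perform a rollback or send a notification
            throw new \RuntimeException('Lock lost during the overall process');
        }

        $lock->refresh();
    }

    // process a chunk of the job, guaranteed to take less than 5 seconds...
}

$lock->release();
```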
Choose the lifetime of the Lock wisely, and check whether its remaining time to live is enough to perform the task.
Storing a Lock usually takes a few milliseconds, but network conditions may increase that time a lot (up to a few seconds). Take that into account when choosing the right TTL.
By design, locks are stored in servers with a defined lifetime. If the date or time of the machine changes, a lock could be released sooner than expected.
To guarantee that the date won't change, the NTP service should be disabled and the date should only be updated while the service is stopped.
By using the file system, this store is reliable as long as concurrent processes use the same physical directory to store locks.
Processes must run on the same machine, virtual machine or container. Be careful when updating a Kubernetes or Swarm service because for a short period of time, there can be two running containers in parallel.
The absolute path to the directory must remain the same. Be careful with symlinks that could change at any time: Capistrano and blue/green deployments often use that trick. Be careful when the path to that directory changes between two deployments.
Some file systems (such as some types of NFS) do not support locking.
All concurrent processes must use the same physical file system by running on the same machine and using the same absolute path to locks directory.
Files on the file system can be removed during maintenance operations: for instance, when cleaning up the /tmp directory, or after a machine reboot when the directory uses tmpfs. It's not an issue if the lock is released when the process ends, but it is if the Lock is reused between requests.
Do not store locks on a volatile file system if they have to be reused in several requests.
The way Memcached works is to store items in memory. That means that by using the :ref:`MemcachedStore` the locks are not persisted and may disappear by mistake at any time.
If the Memcached service or the machine hosting it restarts, every lock would be lost without notifying the running processes.
To avoid that someone else acquires a lock after a restart, it's recommended to delay service start and wait at least as long as the longest lock TTL.
By default, Memcached uses an LRU mechanism to remove old entries when the service needs space to add new items.
The number of items stored in Memcached must be kept under control. If that's not possible, LRU should be disabled and locks should be stored in a dedicated Memcached service away from the cache.
When the Memcached service is shared and used for multiple purposes, locks could be removed by mistake. For instance, some implementations of the PSR-6 clear() method use Memcached's flush() method, which purges and removes everything.
The method flush() must not be called, or locks should be stored in a dedicated Memcached service away from Cache.
The PdoStore relies on the ACID properties of the SQL engine.
In a cluster configured with multiple primaries, ensure writes are synchronously propagated to every node, or always use the same node.
Some SQL engines like MySQL allow disabling the unique constraint check. Ensure that this is not the case: SET unique_checks=1;.
In order to purge old locks, this store uses a current datetime to define an expiration date reference. This mechanism relies on all server nodes to have synchronized clocks.
To ensure locks don't expire prematurely, the TTLs should be set with enough extra time to account for any clock drift between nodes.
The way Redis works is to store items in memory. That means that by using the :ref:`RedisStore` the locks are not persisted and may disappear by mistake at any time.
If the Redis service or the machine hosting it restarts, every lock would be lost without notifying the running processes.
To avoid that someone else acquires a lock after a restart, it's recommended to delay service start and wait at least as long as the longest lock TTL.
Redis can be configured to persist items on disk, but this option would slow down writes on the service. This could go against other uses of the server.
When the Redis service is shared and used for multiple purposes, locks could be removed by mistake.
The command FLUSHDB must not be called, or locks should be stored in a dedicated Redis service away from Cache.
Combined stores allow storing locks across several backends. It's a common mistake to think that the lock mechanism will thereby be more reliable. This is wrong: the CombinedStore will be, at best, as reliable as the least reliable of all the managed stores. As soon as one managed store returns erroneous information, the CombinedStore won't be reliable.
All concurrent processes must use the same configuration, with the same number of managed stores and the same endpoints.
Instead of using a cluster of Redis or Memcached servers, it's better to use a CombinedStore with a single server per managed store.
Semaphores are handled at the kernel level. In order to be reliable, processes must run on the same machine, virtual machine or container. Be careful when updating a Kubernetes or Swarm service, because for a short period of time there can be two running containers in parallel.
All concurrent processes must use the same machine. Before starting a concurrent process on a new machine, check that the other processes are stopped on the old one.
The way ZookeeperStore works is by maintaining locks as ephemeral nodes on the server. That means that by using :ref:`ZookeeperStore ` the locks will be automatically released at the end of the session in case the client cannot unlock for any reason.
If the ZooKeeper service or the machine hosting it restarts, every lock would be lost without notifying the running processes.
To use ZooKeeper's high-availability feature, you can set up a cluster of multiple servers so that if one of the servers goes down, the majority will still be up and serving requests. All the available servers in the cluster will see the same state.
As this store does not support multi-level node locks (the clean-up of intermediate nodes would become an overhead), all locks are maintained at the root level.
Changing the configuration of stores should be done very carefully, for instance during the deployment of a new version: processes with the new configuration must not be started while old processes with the old configuration are still running.
When a program runs concurrently, the parts of the code that modify shared resources must not be accessible to several processes at the same time. The Symfony Lock component provides a locking mechanism to ensure that only one process runs a critical section of code at any point in time, preventing race conditions.
The following example shows a typical usage of the lock:
Installation
In applications using Symfony Flex, run this command to install the Lock component:
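The command is the usual Composer require:

```shell
composer require symfony/lock
```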
Configuring Lock with FrameworkBundle
By default, Symfony provides a Semaphore store when available, or a Flock store otherwise. You can configure this behavior using the lock key:
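A minimal sketch of that configuration (the ``LOCK_DSN`` env variable and the values are illustrative; any supported DSN such as ``flock``, ``semaphore`` or ``redis://localhost`` works):

```yaml
# config/packages/lock.yaml
framework:
    lock: '%env(LOCK_DSN)%'
```

with, for example, ``LOCK_DSN=flock`` in your ``.env`` file.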
Locking a Dynamic Resource
Sometimes the application is able to cut the resource into small pieces in order to lock a small subset of the processes and let the others through. In our previous example we saw how to lock $pdf->getOrCreatePdf('terms-of-use') for everybody; now let's see how to lock $pdf->getOrCreatePdf($version) only for processes asking for the same $version:
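A sketch, assuming ``$factory`` is a ``LockFactory`` instance and ``$pdf``/``$version`` come from the surrounding application code:

```php
// include the version in the resource name, so that processes asking for
// different versions don't block each other
$lock = $factory->createLock($version);

$lock->acquire(true);

try {
    $pdf->getOrCreatePdf($version);
} finally {
    $lock->release();
}
```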
Today I want to share a solution to a specific inconvenience: periodically launched processes starting while the previous run hasn't finished yet. In other words, locking running processes in symfony/console. It would all be rather trivial if it weren't for the need to lock across a group of servers instead of a single one.
Given: the same process launched on N servers.
Task: make sure that only one of them is running at any moment.
The most popular solutions found around the web:
- locking through a database;
- third-party applications;
- native use of a lock file
Database
- requires a database connection in every launched script;
- needs a table;
- needs code to handle inserting/removing the record;
- if the script crashes, releasing the lock is tricky — a watchdog is needed;
- problems when the database itself goes down
Third-party applications
- not every platform has the same applications with equally predictable behavior;
- it isn't always possible to install something extra;
- not all of them can lock "over the network"
Lock file
- every command must be accompanied by creating a file;
- as many commands as you have, that many lines with the path and name of a lock file
So the first thing we had to give up was flock, which is used, for example, in Symfony's LockHandler: it provides no way to lock across several servers.
Instead, we will create a lock file in a directory shared between the servers, using a small service — practically the same as LockHandler, but with flock stripped out.
The next thing to get rid of is having to check the lock manually in every command and, most importantly, to release it: the script doesn't always terminate where we expect it to.
For that, I suggest something resembling the Mediator pattern: implement and finalize the standard execute() method, which runs when the command starts, and enforce the use of a new lockExecute() method.
Why this is needed:
- the whole body of the command lives in the lockExecute() method;
- the execute() method called at startup creates the lock, registers the lock's release on script crash/termination, and only then runs lockExecute()
It will look like this:
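A sketch of the idea described above; the class and method names (``SingletonCommand``, ``lockExecute()``) and the shared directory path are illustrative, not a published API:

```php
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

abstract class SingletonCommand extends Command
{
    private $lockFile;

    final protected function execute(InputInterface $input, OutputInterface $output)
    {
        // a directory shared between all servers, e.g. over NFS
        $this->lockFile = '/mnt/shared/locks/'.$this->getName().'.lock';

        // 'x' mode fails when the file already exists, so only one server wins
        $handle = @fopen($this->lockFile, 'x');
        if (false === $handle) {
            $output->writeln('Another instance is already running, exiting.');

            return 1;
        }
        fclose($handle);

        // remove the lock file even when the script crashes or is interrupted
        register_shutdown_function(function () {
            @unlink($this->lockFile);
        });

        return $this->lockExecute($input, $output);
    }

    // subclasses put the actual command body here
    abstract protected function lockExecute(InputInterface $input, OutputInterface $output);
}
```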
You won't have to write much more code, and the command is guaranteed to run only once, no matter how many servers try to launch it. The only requirement is a shared directory for the lock files.
A ready-made solution with more details is available on GitHub: singleton-command
UPD: as was rightly pointed out, lock files may be left behind after hard crashes. It's therefore advisable to run a daemon that watches for stale lock files.
Locking a Resource
To lock the default resource, autowire the lock using LockInterface (service id ``lock``):
The same instance of LockInterface won't block when calling acquire multiple times inside the same process. When several services use the same lock, inject the LockFactory instead to create a separate lock instance for each service.
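A sketch of such a service (the ``PdfGenerator`` class is hypothetical; the ``$lock`` argument is autowired to the default ``lock`` service):

```php
use Symfony\Component\Lock\LockInterface;

class PdfGenerator
{
    private $lock;

    public function __construct(LockInterface $lock)
    {
        $this->lock = $lock;
    }

    public function getOrCreatePdf(): void
    {
        $this->lock->acquire(true);

        try {
            // heavy computation guarded by the default lock...
        } finally {
            $this->lock->release();
        }
    }
}
```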
Blocking Store
If you want to use the RetryTillSaveStore for non-blocking locks, you can do it by decorating the store service:
Named Lock
If the application needs different kinds of stores alongside each other, Symfony provides named locks:
Each name becomes a service, with the service id suffixed by the name of the lock (e.g. ``lock.invoice``). An autowiring alias is also created for each lock using the camel-cased version of its name suffixed with ``Lock`` - e.g. the ``invoice`` lock can be injected automatically by naming the argument ``$invoiceLock`` and type-hinting it with LockInterface.
Symfony also provides a corresponding factory and store following the same rules (e.g. ``invoice`` generates ``lock.invoice.factory`` and ``lock.invoice.store``; both can be injected automatically by naming the arguments ``$invoiceLockFactory`` and ``$invoiceLockStore`` respectively, type-hinted with LockFactory and PersistingStoreInterface).
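A sketch of the injections, assuming a lock named ``invoice`` is configured (the ``InvoiceService`` class is hypothetical):

```php
use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Lock\LockInterface;
use Symfony\Component\Lock\PersistingStoreInterface;

class InvoiceService
{
    public function __construct(
        LockInterface $invoiceLock,                // alias for the "invoice" lock
        LockFactory $invoiceLockFactory,           // the lock.invoice.factory service
        PersistingStoreInterface $invoiceLockStore // the lock.invoice.store service
    ) {
        // ...
    }
}
```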
This documentation is a translation of the official Symfony documentation and is provided under the free CC BY-SA 3.0 license.
Warning: You are browsing the documentation for Symfony 5.1, which is no longer maintained.
Read the updated version of this page for Symfony 6.0 (the current stable version).
- Installation
- Configuring Lock with FrameworkBundle
- Locking a Resource
- Locking a Dynamic Resource
- Named Lock
- Blocking Store