HowTo: Permanent Migration to dCache Read Only Pools
Source pools
- Add the following line to your ${pool.path}/setup file, or alternatively execute the command from the pool admin console.
- If you want the change to be permanent when executing the command from the pool admin console, remember to run save at the end; this is equivalent to writing the line into the ${pool.path}/setup file.
- If you only add the line to ${pool.path}/setup, remember to execute reload -yes on the pool admin console, or simply run the command there as well. A minimal console-session sketch is shown after the options list below.
migration cache -permanent -pins=keep -concurrency=<concurrency> -atime -accessed=<string> -select=proportional|random \
 -include=<glob>[,<glob>]<...> -target=pool|pgroup|link -exclude=<glob>[,<glob>]<...> -id=<string> -verify -size=<string> -- <pool|pgroup|link>
- Options:
- Filter options:
- -accessed=<string>: Only copy replicas accessed n seconds ago, or accessed within the given, possibly open-ended, interval. E.g. -accessed=0..60 matches files accessed within the last minute; -accessed=60.. matches files accessed one minute or more ago.
- -size=<string>: Only copy replicas with size n, or a size within the given, possibly open-ended, interval. E.g. -size=0..1070000 matches files with size between 0 and ~1MB; -size=1070000.. matches ~1MB or bigger files.
- -select=proportional|random: Determines how a pool is selected from the set of target pools. Defaults to proportional.
- proportional: selects a pool with a probability proportional to the free space.
- random: selects a pool randomly.
- Lifetime options:
- -permanent: Mark job as permanent.
- Note, however, that you need to ensure the command is present in the ${pool.path}/setup file; otherwise the job will not be started again after a pool restart.
- Target options:
- -exclude=<glob>[,<glob>]<...>: Exclude target pools matching any of the patterns. Single character (?) and multi character (*) wildcards may be used.
- This option is set to avoid RW pools being included in case something is misconfigured on the poolmanager side.
- -include=<glob>[,<glob>]<...>: Only include target pools matching any of the patterns. Single character (?) and multi character (*) wildcards may be used.
- This option is set to ensure that only specific pools are included in case something is misconfigured on the poolmanager side.
- -target=pool|pgroup|link: Determines the interpretation of the target names. Defaults to pool.
- We should set pgroup because generally we do not want to migrate to one specific pool.
- Transfer options:
- -verify: Force checksum computation when an existing target is updated.
- -atime: Maintain last access time.
- Otherwise the copy would update the last access time even for files that are not being used by clients, so stale files would no longer be recycled first. We want to keep hot files cached, not old ones.
- -pins=keep|move: Controls how sticky flags owned by the pin manager are handled.
- move: Ask pin manager to move pins to the target pool.
- keep: Keep pin on the source pool. Defaults to keep.
- -concurrency=<int>: Specifies how many concurrent transfers to perform. Defaults to 1.
- You can set higher values if you expect very good performance; otherwise migration can become a bottleneck.
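- As referenced above, the following is a minimal sketch of applying such a job interactively and persisting it, assuming the standard dCache ssh admin interface on port 22224 and the \c / \q cell-navigation syntax of recent admin shells; the host name is a placeholder and the pool and pool group names are those of the CMS T1 example below. Lines starting with # are annotations, not commands.
ssh -p 22224 admin@dcache-admin.example.org
# connect to the cell of one source pool
\c dc021_1
migration cache -permanent -pins=keep -concurrency=1 -atime -accessed=0..600 -select=random -include=dc052_1,dc053_1 -target=pgroup \
 -exclude=dc021*,dc022*,dc029*,dc031*,dc037*,dc040*,dc044* -id=caching_recently_accessed_files -verify -size=1070000.. -- pgroup-cms-ro
# make the job survive pool restarts (equivalent to writing the line into ${pool.path}/setup)
save
\q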
CMS T1 Example
- Consider a CMS configuration like the following:
# pgroup-cms
|- dc021_1, dc021_2, dc021_3, dc021_4, dc021_5, dc021_6, dc021_7
|- dc022_1, dc022_2, dc022_3, dc022_4, dc022_5, dc022_6, dc022_7
|- dc029_1, dc029_2, dc029_3, dc029_4, dc029_5, dc029_6, dc029_7
|- dc031_1, dc031_2, dc031_3, dc031_4, dc031_5, dc031_6, dc031_7
|- dc037_1
|- dc040_1
|- dc044_1, dc044_2, dc044_3

# pgroup-cms-ro
|- dc052_1
|- dc053_1

# pgroup-cms-recall
|- dc100_1
|- dc101_1
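- For reference, pool groups like these would typically be defined on the PoolManager side; a minimal sketch for the read-only group, using psu commands (poolmanager.conf syntax, assuming the pools themselves already exist):
# create the read-only pool group and add its pools
psu create pgroup pgroup-cms-ro
psu addto pgroup pgroup-cms-ro dc052_1
psu addto pgroup pgroup-cms-ro dc053_1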
- We want to make a permanent cache migration from pgroup-cms to pgroup-cms-ro according to the following rules:
- Copy only replicas accessed within the last 10 minutes (0..600)
- Copy only files with size equal to or bigger than 1070000 bytes
- Select target pools randomly (do not take free space into account)
- Concurrency of 1 per migration process (we want to avoid stressing the pools, so we use the lowest value)
- Set target as pgroup (we want to migrate the data to any pool in pgroup-cms-ro)
- As a safeguard, and to avoid poolmanager.conf updates that are not propagated to the migration process, add the include and exclude options listing the pools that will be affected.
- Run on every pool in pgroup-cms
migration cache -permanent -pins=keep -concurrency=1 -atime -accessed=0..600 -select=random -include=dc052_1,dc053_1 -target=pgroup \
 -exclude=dc021*,dc022*,dc029*,dc031*,dc037*,dc040*,dc044* -id=caching_recently_accessed_files -verify -size=1070000.. -- pgroup-cms-ro
- Optionally, save the configuration so that the job is re-run once the pool is restarted.
save
- In short, the above migration command runs permanently and migrates files of size 1070000 bytes or bigger that were accessed during the last 10 minutes, from any pool in pgroup-cms to pgroup-cms-ro. The migration runs with concurrency 1, a checksum is computed for each file, and atime is maintained so that the file's access time is not modified and its real access history is preserved. The target pool is selected at random, i.e. without taking free space into account.
- If we want to do the same for the pgroup-cms-recall pool group:
- Run on every pool in pgroup-cms-recall
migration cache -permanent -pins=keep -concurrency=1 -atime -accessed=0..600 -select=random -include=dc052_1,dc053_1 -target=pgroup \
 -exclude=dc100*,dc101* -id=caching_recently_accessed_files -verify -size=1070000.. -- pgroup-cms-ro
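- To check that such a job is registered and running on a pool, the migration module's status commands can be used from the pool admin console; a minimal sketch, assuming the -id value given above is accepted by migration info as the job identifier (on some dCache versions jobs are listed under a numeric id shown by migration ls):
migration ls
migration info caching_recently_accessed_files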
ATLAS T1 Example
- Consider an ATLAS configuration like the following:
# pgroup-atlas
|- dc020_1, dc020_2, dc020_3, dc020_4
|- dc028_1, dc028_2, dc028_3, dc028_4
|- dc032_1, dc032_2, dc032_3, dc032_4
|- dc033_1, dc033_2, dc033_3, dc033_4
|- dc036_1
|- dc041_1
|- dc042_1
|- dc046_1, dc046_2, dc046_3
|- dc047_1, dc047_2, dc047_3
|- dc048_1, dc048_2, dc048_3

# pgroup-atlas-ro
|- dc054_1

# pgroup-atlas-recall
|- dc091_1
|- dc092_1
|- dc093_1
- We want to make a permanent cache migration from pgroup-atlas to pgroup-atlas-ro according to the following rules:
- Copy only replicas accessed within the last 10 minutes (0..600)
- Copy only files with size equal to or bigger than 1070000 bytes
- Select target pools randomly (do not take free space into account)
- Concurrency of 1 per migration process (we want to avoid stressing the pools, so we use the lowest value)
- Set target as pgroup (we want to migrate the data to any pool in pgroup-atlas-ro)
- As a safeguard, and to avoid poolmanager.conf updates that are not propagated to the migration process, add the include and exclude options listing the pools that will be affected.
- Run on every pool in pgroup-atlas
migration cache -permanent -pins=keep -concurrency=1 -atime -accessed=0..600 -select=random -include=dc054_1 -target=pgroup \
 -exclude=dc020*,dc028*,dc032*,dc033*,dc036*,dc041*,dc042*,dc046*,dc047*,dc048* -id=caching_recently_accessed_files -verify -size=1070000.. -- pgroup-atlas-ro
- Optionally, save the configuration so that the job is re-run once the pool is restarted.
save
- In short, the above migration command runs permanently and migrates files of size 1070000 bytes or bigger that were accessed during the last 10 minutes, from any pool in pgroup-atlas to pgroup-atlas-ro. The migration runs with concurrency 1, a checksum is computed for each file, and atime is maintained so that the file's access time is not modified and its real access history is preserved. The target pool is selected at random, i.e. without taking free space into account.
- If we want to do the same for the pgroup-atlas-recall pool group:
- Run on every pool in pgroup-atlas-recall
migration cache -permanent -pins=keep -concurrency=1 -atime -accessed=0..600 -select=random -include=dc054_1 -target=pgroup \
 -exclude=dc091*,dc092*,dc093* -id=caching_recently_accessed_files -verify -size=1070000.. -- pgroup-atlas-ro
ATLAS T2 Example
- Consider an ATLAS T2 configuration like the following:
# pgroup-atlast2
|- dc023_1, dc023_2
|- dc024_1, dc024_2
|- dc025_1, dc025_2
|- dc026_1, dc026_2, dc026_3, dc026_4
|- dc034_1
|- dc086_1, dc086_2, dc086_3
|- dc097_1, dc097_2, dc097_3
|- dc098_1, dc098_2, dc098_3

# pgroup-atlast2-ro
|- dc056_1
- We want to make a permanent cache migration from pgroup-atlast2 to pgroup-atlast2-ro according to the following rules:
- Copy only replicas accessed within the last 10 minutes (0..600)
- Copy only files with size equal to or bigger than 1070000 bytes
- Select target pools randomly (do not take free space into account)
- Concurrency of 1 per migration process (we want to avoid stressing the pools, so we use the lowest value)
- Set target as pgroup (we want to migrate the data to any pool in pgroup-atlast2-ro)
- As a safeguard, and to avoid poolmanager.conf updates that are not propagated to the migration process, add the include and exclude options listing the pools that will be affected.
- Run on every pool in pgroup-atlast2
migration cache -permanent -pins=keep -concurrency=1 -atime -accessed=0..600 -select=random -include=dc056_1 -target=pgroup \
 -exclude=dc023*,dc024*,dc025*,dc026*,dc034*,dc086*,dc097*,dc098* -id=caching_recently_accessed_files -verify -size=1070000.. -- pgroup-atlast2-ro
- Optionally, save the configuration so that the job is re-run once the pool is restarted.
save
- In short, the above migration command runs permanently and migrates files of size 1070000 bytes or bigger that were accessed during the last 10 minutes, from any pool in pgroup-atlast2 to pgroup-atlast2-ro. The migration runs with concurrency 1, a checksum is computed for each file, and atime is maintained so that the file's access time is not modified and its real access history is preserved. The target pool is selected at random, i.e. without taking free space into account.