Compare commits

694 Commits

Author SHA1 Message Date
Vitaliy Filippov facc276e8b move away require libv2/config from libv2/redis 2024-08-13 02:17:04 +03:00
Vitaliy Filippov c8e3999fb3 Require defaults.json instead of fs.readFileSync 2024-08-13 01:14:02 +03:00
Vitaliy Filippov 9fa777cdba Split require utils to help webpack remove libV2 2024-08-13 01:10:22 +03:00
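Editor's note: the three commits above trade a runtime file read for a static require so webpack can see the full dependency graph and drop the unused libV2 subtree. A minimal before/after sketch of the idea, with an assumed file layout (not the actual Utapi source):

```js
// Before: opaque to bundlers -- the JSON is read at runtime, so webpack
// cannot inline it or prune the modules that depend on it.
const fs = require('fs');
const defaultsDynamic = JSON.parse(
    fs.readFileSync(`${__dirname}/defaults.json`, 'utf8'));

// After: a static require enters webpack's dependency graph, letting it
// bundle defaults.json directly and tree-shake unreachable modules.
const defaultsStatic = require('./defaults.json');

module.exports = defaultsStatic;
```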
Vitaliy Filippov e6d48f3b47 Make vault client optional / support receiving its instance from outside 2024-07-23 19:22:54 +03:00
Vitaliy Filippov 0050625f81 Change git dependency URLs 2024-07-21 18:12:40 +03:00
Vitaliy Filippov 0a66c57a0a Remove yarn lock 2024-07-21 17:34:07 +03:00
Vitaliy Filippov 6711c4241a Forget LFS object 2024-07-21 17:34:07 +03:00
Jonathan Gramain 3800e4b185 Merge remote-tracking branch 'origin/w/7.70/bugfix/UTAPI-105-useListOfSentinelNodes' into w/8.1/bugfix/UTAPI-105-useListOfSentinelNodes 2024-06-27 10:09:15 -07:00
Jonathan Gramain 20667ff741 Merge remote-tracking branch 'origin/bugfix/UTAPI-105-useListOfSentinelNodes' into w/7.70/bugfix/UTAPI-105-useListOfSentinelNodes 2024-06-27 10:06:43 -07:00
Jonathan Gramain 88d18f3eb6 UTAPI-105 bump version 2024-06-25 15:10:02 -07:00
Jonathan Gramain 426dfd0860 bf: UTAPI-105 UtapiReindex: use list of redis sentinels
Use a list of Redis sentinels that are running on stateful nodes only,
instead of localhost.

Previously, a stateless-only node wouldn't have a local sentinel node
running, causing UtapiReindex to fail.

Added a failover mechanism that, on a connection error with the
current sentinel, tries each of the other sentinels in turn.
2024-06-25 15:10:02 -07:00
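Editor's note: a minimal sketch of the resulting connection setup, assuming ioredis; host names and the master group name are illustrative, not the actual UtapiReindex configuration:

```js
const Redis = require('ioredis');

// Instead of a single sentinel on localhost (absent on stateless-only
// nodes), pass the full list of sentinels on stateful nodes; ioredis
// tries each entry in turn when a connection attempt fails.
const redis = new Redis({
    sentinels: [
        { host: 'stateful-1', port: 26379 },
        { host: 'stateful-2', port: 26379 },
        { host: 'stateful-3', port: 26379 },
    ],
    name: 'scality-s3', // sentinel master group name (assumed)
});
```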
bert-e ac4fd2c5f5 Merge branch 'improvement/UTAPI-103/support_reindex_by_account' into tmp/octopus/w/8.1/improvement/UTAPI-103/support_reindex_by_account 2024-06-12 18:28:11 +00:00
Taylor McKinnon 69b94c57aa impr(UTAPI-103): Remove undeclared variable from log message 2024-06-12 11:27:16 -07:00
Taylor McKinnon f5262b7875 impr(UTAPI-103): Support reindexing by account 2024-06-12 11:27:16 -07:00
Taylor McKinnon ee1c0fcd1b impr(UTAPI-103): Support multiple specified buckets and prep for account support 2024-06-12 11:27:16 -07:00
Taylor McKinnon 5efb70dc63 impr(UTAPI-103): Add --dry-run option 2024-06-12 11:27:16 -07:00
Taylor McKinnon 210ba2fd82 impr(UTAPI-103): Add BucketDClient.get_bucket_md() 2024-06-06 12:10:40 -07:00
Taylor McKinnon 34af848b93 impr(UTAPI-103): Add BucketNotFound exception for _get_bucket_attributes 2024-06-06 12:08:40 -07:00
Taylor McKinnon 402fd406e3 impr(UTAPI-103): Add small LRU cache to BucketDClient._get_bucket_attributes 2024-06-06 12:06:46 -07:00
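Editor's note: these reindex changes live in the Python tooling (BucketDClient in s3_bucketd.py); the shape of the idea, sketched here in JavaScript with hypothetical names, is a bounded LRU in front of bucket-attribute lookups plus a distinct not-found error so callers can tell missing buckets from transport failures:

```js
// Hypothetical sketch; the real change is in the Python reindex client.
class BucketNotFound extends Error {}

class BucketAttributeCache {
    constructor(fetchAttributes, maxCacheSize = 128) {
        this._fetch = fetchAttributes; // (name) => Promise<attrs|null>
        this._cache = new Map();       // Map preserves insertion order
        this._maxCacheSize = maxCacheSize;
    }

    async getBucketAttributes(name) {
        if (this._cache.has(name)) {
            // Refresh recency: delete + re-insert moves the key to the end.
            const attrs = this._cache.get(name);
            this._cache.delete(name);
            this._cache.set(name, attrs);
            return attrs;
        }
        const attrs = await this._fetch(name);
        if (attrs === null) {
            throw new BucketNotFound(`bucket not found: ${name}`);
        }
        this._cache.set(name, attrs);
        if (this._cache.size > this._maxCacheSize) {
            // Evict the least recently used entry (first key in the Map).
            this._cache.delete(this._cache.keys().next().value);
        }
        return attrs;
    }
}
```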
bert-e f9ae694c0c Merge branch 'w/7.70/bugfix/UTAPI-101/fix_release_workflow' into tmp/octopus/w/8.1/bugfix/UTAPI-101/fix_release_workflow 2024-05-16 17:16:03 +00:00
bert-e 960d990e89 Merge branch 'bugfix/UTAPI-101/fix_release_workflow' into tmp/octopus/w/7.70/bugfix/UTAPI-101/fix_release_workflow 2024-05-16 17:16:03 +00:00
Taylor McKinnon 7fde3488b9 impr(UTAPI-101): Remove secrets: inherit from release workflow 2024-05-15 10:32:38 -07:00
Taylor McKinnon 79c2ff0c72 Merge remote-tracking branch 'origin/w/7.70/bugfix/UTAPI-100/utapi_python_version_fix' into w/8.1/bugfix/UTAPI-100/utapi_python_version_fix 2024-05-07 10:56:37 -07:00
Taylor McKinnon ae904b89bf Merge remote-tracking branch 'origin/bugfix/UTAPI-100/utapi_python_version_fix' into w/7.70/bugfix/UTAPI-100/utapi_python_version_fix 2024-05-07 10:55:23 -07:00
Taylor McKinnon 60db367054 bf(UTAPI-100): Bump version 2024-05-06 11:20:17 -07:00
Taylor McKinnon c9ba521b6d bf(UTAPI-100): Remove use of 3.7+ only parameter 2024-05-06 11:16:58 -07:00
Francois Ferrand ce89418788 Update Release.md for ghcr migration
Issue: UTAPI-99
2024-04-18 15:55:13 +02:00
Francois Ferrand 5faaf493a5 Merge branch 'w/7.70/improvement/VAULT-567' into w/8.1/improvement/VAULT-567 2024-04-18 15:54:58 +02:00
Francois Ferrand da143dba67 Merge branch 'w/7.10/improvement/VAULT-567' into w/7.70/improvement/VAULT-567 2024-04-18 15:54:35 +02:00
Francois Ferrand 6e0ec16f00 Fix caching of python packages
Issue: UTAPI-99
2024-04-18 15:54:04 +02:00
Francois Ferrand 4449f44c9a Bump github actions
- docker-build@v2
- checkout@v4
- setup-buildx@v3
- setup-node@v4
- setup-python@v5
- login@v3
- build-push@v5
- gh-release@v2
- ssh-to-runner@1.7.0

Issue: UTAPI-99
2024-04-18 15:53:26 +02:00
Francois Ferrand c4e786d6cd Migrate to ghcr
Issue: UTAPI-99
2024-04-18 15:53:20 +02:00
Francois Ferrand bdb483e6b4 Merge branch 'improvement/UTAPI-99' into w/7.10/improvement/VAULT-567 2024-04-18 15:52:47 +02:00
Francois Ferrand 20916c6f0e Fix caching of python packages
Issue: UTAPI-99
2024-04-18 15:47:05 +02:00
Francois Ferrand 5976018d0e Bump github actions
- checkout@v4
- setup-qemu@v3
- setup-buildx@v3
- setup-node@v4
- setup-python@v5
- login@v3
- build-push@v5
- gh-release@v2

Issue: UTAPI-99
2024-04-17 15:02:44 +02:00
Francois Ferrand 9e1f14ed17 Migrate to ghcr
Issue: UTAPI-99
2024-04-17 14:42:58 +02:00
bert-e 34699432ee Merge branch 'w/7.70/improvement/UTAPI-98/bump-redis' into tmp/octopus/w/8.1/improvement/UTAPI-98/bump-redis 2024-01-22 15:39:38 +00:00
bert-e 438a25982d Merge branch 'improvement/UTAPI-98/bump-redis' into tmp/octopus/w/7.70/improvement/UTAPI-98/bump-redis 2024-01-22 15:39:37 +00:00
Nicolas Humbert 8804e9ff69 UTAPI-98 Bump Redis version 2024-01-22 16:36:01 +01:00
Taylor McKinnon 27e1c44829 Merge remote-tracking branch 'origin/w/7.70/improvement/UTAPI-97/reindex_only_latest_for_olock_buckets_option' into w/8.1/improvement/UTAPI-97/reindex_only_latest_for_olock_buckets_option 2023-12-11 09:38:41 -08:00
Taylor McKinnon e8882a28cc Merge remote-tracking branch 'origin/improvement/UTAPI-97/reindex_only_latest_for_olock_buckets_option' into w/7.70/improvement/UTAPI-97/reindex_only_latest_for_olock_buckets_option 2023-12-11 09:37:25 -08:00
Taylor McKinnon b93998118c impr(UTAPI-97): Bump version 2023-12-11 09:25:01 -08:00
Taylor McKinnon 9195835f70 impr(UTAPI-97): Add config option to reindex only latest version in object locked buckets 2023-12-11 09:25:01 -08:00
bert-e 8dfb06cdbc Merge branch 'w/7.70/improvement/UTAPI-96/switch_to_scality_ssh_action' into tmp/octopus/w/8.1/improvement/UTAPI-96/switch_to_scality_ssh_action 2023-10-09 16:32:55 +00:00
bert-e 934136635e Merge branch 'improvement/UTAPI-96/switch_to_scality_ssh_action' into tmp/octopus/w/7.70/improvement/UTAPI-96/switch_to_scality_ssh_action 2023-10-09 16:32:55 +00:00
Taylor McKinnon 9f36624799 impr(UTAPI-96): Switch to scality/actions/action-ssh-to-runner 2023-10-09 09:30:34 -07:00
Taylor McKinnon 59aa9b9ab9 Merge remote-tracking branch 'origin/w/7.70/bugfix/UTAPI-92/bump_utapi_version' into w/8.1/bugfix/UTAPI-92/bump_utapi_version 2023-05-31 13:45:38 -07:00
Taylor McKinnon 9eecef0a24 Merge remote-tracking branch 'origin/bugfix/UTAPI-92/bump_utapi_version' into w/7.70/bugfix/UTAPI-92/bump_utapi_version 2023-05-31 13:44:34 -07:00
Taylor McKinnon c29af16e46 UTAPI-92: Bump version 2023-05-31 13:43:04 -07:00
bert-e 8757ac8bb0 Merge branch 'w/7.70/bugfix/UTAPI-92/fix_redis_password_config' into tmp/octopus/w/8.1/bugfix/UTAPI-92/fix_redis_password_config 2023-05-26 17:39:02 +00:00
bert-e 34ceac8563 Merge branch 'bugfix/UTAPI-92/fix_redis_password_config' into tmp/octopus/w/7.70/bugfix/UTAPI-92/fix_redis_password_config 2023-05-26 17:39:02 +00:00
Taylor McKinnon 7f9c9aa202 bf(UTAPI-92): Fix redis password loading 2023-05-25 15:03:36 -07:00
Taylor McKinnon 41b690aa5d Merge remote-tracking branch 'origin/w/7.70/bugfix/UTAPI-88/bump_version_7_10_13' into w/8.1/bugfix/UTAPI-88/bump_version_7_10_13 2023-04-11 16:10:46 -07:00
Taylor McKinnon 3f08327fe6 Merge remote-tracking branch 'origin/bugfix/UTAPI-88/bump_version_7_10_13' into w/7.70/bugfix/UTAPI-88/bump_version_7_10_13 2023-04-11 16:09:35 -07:00
Taylor McKinnon 84bc7e180f bf(UTAPI-88): Release 7.10.13 2023-04-11 16:07:23 -07:00
bert-e e328095606 Merge branches 'w/8.1/bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric' and 'q/1279/7.70/bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric' into tmp/octopus/q/8.1 2023-04-10 23:34:43 +00:00
bert-e cb9d2b8d2b Merge branches 'w/7.70/bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric' and 'q/1279/7.10/bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric' into tmp/octopus/q/7.70 2023-04-10 23:34:42 +00:00
bert-e de73fe9ee0 Merge branch 'bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric' into q/7.10 2023-04-10 23:34:42 +00:00
bert-e 0d33f81e35 Merge branch 'w/7.70/bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric' into tmp/octopus/w/8.1/bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric 2023-04-10 23:28:07 +00:00
bert-e 13fb668d94 Merge branch 'bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric' into tmp/octopus/w/7.70/bugfix/UTAPI-88-do-not-error-500-in-case-of-negative-metric 2023-04-10 23:28:07 +00:00
scality-gelbart 0fc08f3d7d bf(UTAPI-88): Replace transient state API error with info log message and 200 response 2023-04-10 16:27:21 -07:00
Naren 334c4c26a1 Merge remote-tracking branch 'origin/improvement/UTAPI-91-release-7-70-0' into w/8.1/improvement/UTAPI-91-release-7-70-0 2023-03-28 18:36:52 -07:00
Naren 5319a24704 impr: UTAPI-91 bump version to 7.70.0 2023-03-28 18:05:13 -07:00
Naren ed3628ef01 impr: UTAPI-90 bump version to 8.1.10 2023-03-15 11:20:42 -07:00
Naren 34e881f0e9 impr: UTAPI-90 upgrade bucketclient and vaultclient 2023-03-15 11:19:21 -07:00
Naren 13befbd535 Merge remote-tracking branch 'origin/improvement/UTAPI-90-upgrade-prom-client' into w/8.1/improvement/UTAPI-90-upgrade-prom-client 2023-03-15 11:12:08 -07:00
Naren 347cf3c1cb impr: UTAPI-90 bump version to 7.10.12 2023-03-15 11:03:06 -07:00
Naren 9b5fe56f48 impr: UTAPI-90 upgrade bucketclient and vaultclient 2023-03-15 11:02:22 -07:00
Naren 988f478957 impr: UTAPI-90 upgrade arsenal for prom-client upgrade 2023-03-14 18:54:16 -07:00
bert-e 5f24e749ea Merge branch 'improvement/UTAPI-89-update-metric-names' into tmp/octopus/w/8.1/improvement/UTAPI-89-update-metric-names 2023-02-28 16:56:12 +00:00
Naren 480bde079b impr UTAPI-89 update metric names 2023-02-28 08:54:25 -08:00
Taylor McKinnon 0ba5a02ba7 Bump version to 8.1.9 2022-10-26 11:47:44 -07:00
Taylor McKinnon e75ce33f35 Merge remote-tracking branch 'origin/bugfix/UTAPI-87/handle_zero_byte_objs_in_ver_susp_buck' into w/8.1/bugfix/UTAPI-87/handle_zero_byte_objs_in_ver_susp_buck 2022-10-25 13:36:06 -07:00
Taylor McKinnon 3ec818bca1 bf(UTAPI-87): Bump version to 7.10.11 2022-10-25 13:34:21 -07:00
Taylor McKinnon c3111dfadf bf(UTAPI-87): Handle deleting zero byte objects in version suspended buckets 2022-10-25 13:34:21 -07:00
Taylor McKinnon 451f88d27e Merge remote-tracking branch 'origin/bugfix/UTAPI-85/bump_version' into w/8.1/bugfix/UTAPI-85/bump_version 2022-10-17 14:47:06 -07:00
Taylor McKinnon 71f162169d bf(UTAPI-85): Bump version to 7.10.10 2022-10-17 14:45:26 -07:00
bert-e c0abf3e53f Merge branches 'w/8.1/bugfix/UTAPI-85/allow_host_port_override' and 'q/1271/7.10/bugfix/UTAPI-85/allow_host_port_override' into tmp/octopus/q/8.1 2022-10-17 21:24:40 +00:00
bert-e 3e740a2f6a Merge branch 'bugfix/UTAPI-85/allow_host_port_override' into q/7.10 2022-10-17 21:24:40 +00:00
bert-e 93134e6ccb Merge branches 'w/8.1/bugfix/UTAPI-82-v1-delete-inconsistency' and 'q/1267/7.10/bugfix/UTAPI-82-v1-delete-inconsistency' into tmp/octopus/q/8.1 2022-10-15 00:09:09 +00:00
bert-e b4b52c0de7 Merge branch 'bugfix/UTAPI-82-v1-delete-inconsistency' into q/7.10 2022-10-15 00:09:09 +00:00
Artem Bakalov 4faac178ef Merge remote-tracking branch 'origin/bugfix/UTAPI-82-v1-delete-inconsistency' into w/8.1/bugfix/UTAPI-82-v1-delete-inconsistency 2022-10-14 17:01:02 -07:00
Artem Bakalov 193d1a5d92 UTAPI-82 fix delete inconsistency 2022-10-14 16:55:16 -07:00
bert-e f90213d3d5 Merge branch 'bugfix/UTAPI-85/allow_host_port_override' into tmp/octopus/w/8.1/bugfix/UTAPI-85/allow_host_port_override 2022-10-13 18:24:33 +00:00
Taylor McKinnon 7eb35d51f4 bf(UTAPI-85): Allow host and port to be overridden 2022-10-13 11:02:31 -07:00
bert-e 2e04a5cc44 Merge branch 'improvement/UTAPI-83/provide_warp10_image' into tmp/octopus/w/8.1/improvement/UTAPI-83/provide_warp10_image 2022-10-05 20:17:37 +00:00
Taylor McKinnon 52520e4de1 impr(UTAPI-83): Add warp 10 release workflow 2022-10-05 13:17:05 -07:00
Taylor McKinnon 3391130d43 Merge remote-tracking branch 'origin/bugfix/UTAPI-84/fix_nodesvc_base_config' into w/8.1/bugfix/UTAPI-84/fix_nodesvc_base_config 2022-10-03 15:34:56 -07:00
Taylor McKinnon f3a9a57f58 bf(UTAPI-84): Fix nodesvc-base image config 2022-10-03 15:33:31 -07:00
Taylor McKinnon c0aa52beab Merge remote-tracking branch 'origin/feature/UTAPI-71/add_nodesvc_based_image_and_release_workflow' into w/8.1/feature/UTAPI-71/add_nodesvc_based_image_and_release_workflow 2022-09-23 10:46:41 -07:00
Taylor McKinnon 0ae108f15e ft(UTAPI-71): Rework release workflow to support S3C releases 2022-09-23 10:45:16 -07:00
Taylor McKinnon 2f99e1ddd5 ft(UTAPI-71): Split v2 tests into with/without sensision enabled 2022-09-23 10:45:16 -07:00
Taylor McKinnon cbeae49d47 ft(UTAPI-71): Fix sensision inside warp 10 image 2022-09-23 10:45:16 -07:00
Taylor McKinnon 64d3ecb10f ft(UTAPI-71): Call build-ci from tests 2022-09-23 10:45:16 -07:00
Taylor McKinnon df57f68b9a ft(UTAPI-71): Add build workflows 2022-09-23 10:45:16 -07:00
Taylor McKinnon db5a43f412 ft(UTAPI-71): Backport Dockerfile from development/8.1 branch 2022-09-22 10:52:20 -07:00
Taylor McKinnon 116a2108b0 ft(UTAPI-71): Add nodesvc-base based image 2022-09-22 10:52:20 -07:00
Taylor McKinnon 750cabc565 Merge remote-tracking branch 'origin/bugfix/UTAPI-81/add_bucket_tagging_methods' into w/8.1/bugfix/UTAPI-81/add_bucket_tagging_methods 2022-08-04 12:53:02 -07:00
Taylor McKinnon 469b862a69 bf(UTAPI-81): Add bucket tagging operations 2022-08-04 12:49:23 -07:00
Taylor McKinnon 62bf4d86e6 Merge remote-tracking branch 'origin/improvement/UTAPI-80/release_7_10_7' into w/8.1/improvement/UTAPI-80/release_7_10_7 2022-07-22 11:19:18 -07:00
Taylor McKinnon a072535050 impr(UTAPI-80): Release 7.10.7 2022-07-22 11:17:56 -07:00
bert-e 29b52a0346 Merge branch 'bugfix/UTAPI-78/fix_user_auth_with_no_resources' into tmp/octopus/w/8.1/bugfix/UTAPI-78/fix_user_auth_with_no_resources 2022-07-21 16:38:05 +00:00
Taylor McKinnon 1168720f98 bf(UTAPI-78): Fix second stage user auth with no resources 2022-07-20 09:37:34 -07:00
Jonathan Gramain ff5a75bb11 Merge remote-tracking branch 'origin/bugfix/UTAPI-77-bumpOasTools' into w/8.1/bugfix/UTAPI-77-bumpOasTools 2022-06-20 15:10:40 -07:00
Jonathan Gramain 84a025d430 bugfix: UTAPI-77 bump oas-tools to 2.2.2
Bump the dependency version of oas-tools to version 2.2.2, to fix a
vulnerability with mpath@0.5.0
2022-06-20 13:32:26 -07:00
bert-e 65726f6d0b Merge branches 'w/8.1/feature/UTAPI-76/breakout_leveldb_and_datalog' and 'q/1251/7.10/feature/UTAPI-76/breakout_leveldb_and_datalog' into tmp/octopus/q/8.1 2022-06-08 21:59:45 +00:00
bert-e 4fbcd109a7 Merge branch 'feature/UTAPI-76/breakout_leveldb_and_datalog' into q/7.10 2022-06-08 21:59:45 +00:00
bert-e eed137768d Merge branches 'w/8.1/feature/UTAPI-75/Add_metrics_for_latest_check_snapshot_timestamps' and 'q/1249/7.10/feature/UTAPI-75/Add_metrics_for_latest_check_snapshot_timestamps' into tmp/octopus/q/8.1 2022-06-07 22:32:44 +00:00
bert-e a71e4d48d0 Merge branch 'feature/UTAPI-75/Add_metrics_for_latest_check_snapshot_timestamps' into q/7.10 2022-06-07 22:32:44 +00:00
bert-e 0257e97bc2 Merge branch 'feature/UTAPI-76/breakout_leveldb_and_datalog' into tmp/octopus/w/8.1/feature/UTAPI-76/breakout_leveldb_and_datalog 2022-06-07 22:32:07 +00:00
Taylor McKinnon 7e596598fb ft(UTAPI-76): Breakout disk usage for leveldb and datalog 2022-06-07 15:31:27 -07:00
bert-e 55b640faba Merge branch 'feature/UTAPI-75/Add_metrics_for_latest_check_snapshot_timestamps' into tmp/octopus/w/8.1/feature/UTAPI-75/Add_metrics_for_latest_check_snapshot_timestamps 2022-06-03 16:43:12 +00:00
Taylor McKinnon fd5bea5301 ft(UTAPI-75): Add metrics for latest checkpoint and snapshot 2022-06-03 09:40:27 -07:00
bert-e 54516db267 Merge branch 'feature/UTAPI-70/add_metrics_to_http_server' into tmp/octopus/w/8.1/feature/UTAPI-70/add_metrics_to_http_server 2022-05-26 16:50:41 +00:00
Taylor McKinnon 39eee54045 ft(UTAPI-70): Add http server metrics 2022-05-26 09:50:12 -07:00
bert-e c2f121d0d3 Merge branch 'feature/UTAPI-69/Add_async_task_metrics' into q/7.10 2022-05-26 16:32:49 +00:00
bert-e 1c6c159423 Merge branches 'w/8.1/feature/UTAPI-69/Add_async_task_metrics' and 'q/1239/7.10/feature/UTAPI-69/Add_async_task_metrics' into tmp/octopus/q/8.1 2022-05-26 16:32:49 +00:00
bert-e 22805fe7e7 Merge branch 'feature/UTAPI-69/Add_async_task_metrics' into tmp/octopus/w/8.1/feature/UTAPI-69/Add_async_task_metrics 2022-05-26 16:23:04 +00:00
Taylor McKinnon fbc7f3f442 ft(UTAPI-69): Add metrics for async tasks 2022-05-26 09:22:42 -07:00
bert-e 9e1761b0a4 Merge branch 'feature/UTAPI-67/Add_base_prometheus_framework' into q/7.10 2022-05-24 17:46:30 +00:00
bert-e ca82189fd7 Merge branches 'w/8.1/feature/UTAPI-67/Add_base_prometheus_framework' and 'q/1235/7.10/feature/UTAPI-67/Add_base_prometheus_framework' into tmp/octopus/q/8.1 2022-05-24 17:46:30 +00:00
bert-e 2f26d380f6 Merge branch 'feature/UTAPI-67/Add_base_prometheus_framework' into tmp/octopus/w/8.1/feature/UTAPI-67/Add_base_prometheus_framework 2022-05-24 17:12:06 +00:00
Taylor McKinnon 50a3ba2f18 ft(UTAPI-67): Add metrics framework to BaseTask 2022-05-24 10:07:48 -07:00
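Editor's note: the metrics commits above (UTAPI-67/69/70/75) add Prometheus instrumentation via prom-client; a minimal sketch of what task-level instrumentation can look like — metric and label names are illustrative, not Utapi's actual ones:

```js
const promClient = require('prom-client');

// Hypothetical counterparts of the task metrics added here: one
// histogram for task duration, one counter for failures.
const taskDuration = new promClient.Histogram({
    name: 'utapi_task_duration_seconds',
    help: 'Duration of async task executions',
    labelNames: ['task'],
});
const taskFailures = new promClient.Counter({
    name: 'utapi_task_failures_total',
    help: 'Number of failed task executions',
    labelNames: ['task'],
});

async function runInstrumented(name, fn) {
    const end = taskDuration.startTimer({ task: name });
    try {
        return await fn();
    } catch (err) {
        taskFailures.inc({ task: name });
        throw err;
    } finally {
        end(); // observe elapsed seconds for this task
    }
}

// An HTTP handler can then expose promClient.register.metrics().
```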
Taylor McKinnon 9f1552488c impr(UTAPI-66): Update Dockerfile with --network-concurrency 2022-05-18 10:06:08 -07:00
bert-e bf366e9472 Merge branch 'improvement/UTAPI-66/migrate_to_arsenal_7_10_18' into tmp/octopus/w/8.1/improvement/UTAPI-66/migrate_to_arsenal_7_10_18 2022-05-18 16:34:32 +00:00
Taylor McKinnon 5352f8467d remove unused require 2022-05-18 09:34:26 -07:00
bert-e 002a7ad1ca Merge branch 'improvement/UTAPI-66/migrate_to_arsenal_7_10_18' into tmp/octopus/w/8.1/improvement/UTAPI-66/migrate_to_arsenal_7_10_18 2022-05-18 16:33:37 +00:00
Taylor McKinnon a8f54966bc f 2022-05-18 09:33:32 -07:00
bert-e c2bff35bc6 Merge branch 'improvement/UTAPI-66/migrate_to_arsenal_7_10_18' into tmp/octopus/w/8.1/improvement/UTAPI-66/migrate_to_arsenal_7_10_18 2022-05-18 16:30:07 +00:00
Taylor McKinnon b92102eb65 Apply suggestions from code review
Co-authored-by: Jonathan Gramain <jonathan.gramain@scality.com>
2022-05-18 09:30:02 -07:00
Taylor McKinnon 280c4bae3a Merge remote-tracking branch 'origin/improvement/UTAPI-66/migrate_to_arsenal_7_10_18' into w/8.1/improvement/UTAPI-66/migrate_to_arsenal_7_10_18 2022-05-17 13:39:36 -07:00
Taylor McKinnon d9901609ae impr(UTAPI-66): Convert v2 code 2022-05-17 11:03:23 -07:00
Taylor McKinnon 4448f79088 impr(UTAPI-66): Convert v1 code 2022-05-17 11:01:30 -07:00
Taylor McKinnon 40fa94f0d7 impr(UTAPI-66): Update arsenal to 7.10.24 2022-05-17 10:56:15 -07:00
bert-e 3c09767315 Merge branch 'bugfix/UTAPI-72/add_missing_await_to_pushMetric' into tmp/octopus/w/8.1/bugfix/UTAPI-72/add_missing_await_to_pushMetric 2022-05-06 20:13:26 +00:00
Taylor McKinnon 43ca83cab7 bf(UTAPI-72): Fix CacheClient.pushMetric() to `await` this._cacheBackend.addToShard() 2022-05-06 12:46:35 -07:00
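Editor's note: the bug class fixed here is a dropped promise — without `await`, rejections from the cache write go unhandled and `pushMetric()` resolves before the write lands. A reduced illustration; the argument shape of addToShard() is assumed:

```js
class CacheClient {
    constructor(cacheBackend) {
        this._cacheBackend = cacheBackend;
    }

    async pushMetric(metric) {
        // Before the fix the call below had no `await`, so its promise
        // was dropped: rejections went unhandled and pushMetric()
        // resolved before the cache write completed.
        return await this._cacheBackend.addToShard(metric);
    }
}
```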
Erwan Bernard 2c1d25a50e Merge remote-tracking branch 'origin/w/7.10/feature/RELENG-5645/patch-usage-of-action-gh-release' into w/8.1/feature/RELENG-5645/patch-usage-of-action-gh-release 2022-04-01 17:01:05 +02:00
bert-e 7dd49ca418 Merge branch 'feature/RELENG-5645/patch-usage-of-action-gh-release' into tmp/octopus/w/7.10/feature/RELENG-5645/patch-usage-of-action-gh-release 2022-04-01 14:58:01 +00:00
Erwan Bernard 87cba51d75 [RELENG-5645] Patch usage of actions gh release 2022-04-01 15:56:31 +02:00
Xin LI 3eed7b295d bugfix: UTAPI-64 update vaultclient, bucketclient, oas-tools to fix critical 2022-03-31 19:45:00 +02:00
bert-e c359ddee7e Merge branch 'w/7.10/bugfix/UTAPI-63/fix_arsenal_require_for_dhparam' into tmp/octopus/w/8.1/bugfix/UTAPI-63/fix_arsenal_require_for_dhparam 2022-03-11 18:09:51 +00:00
Taylor McKinnon f6215a1b08 bf(UTAPI-63): Fix dhparam require 2022-03-11 10:05:34 -08:00
Naren 7d7b46bc5e feature: UTAPI-59 update yarn.lock 2022-02-07 16:18:03 -08:00
Naren b82bed39db Merge remote-tracking branch 'origin/w/7.10/feature/UTAPI-59-update-version-and-deps' into w/8.1/feature/UTAPI-59-update-version-and-deps 2022-02-07 16:15:18 -08:00
Naren d9838f4198 feature: UTAPI-59 update eslintrc rules 2022-02-07 15:39:37 -08:00
Naren a6bd7e348c Merge remote-tracking branch 'origin/feature/UTAPI-59-update-version-and-deps' into w/7.10/feature/UTAPI-59-update-version-and-deps 2022-02-07 14:39:43 -08:00
Naren 5683075f59 feature: UTAPI-59 update version and deps 2022-02-07 14:31:45 -08:00
Naren cc3bceebcf Merge remote-tracking branch 'origin/w/7.10/feature/UTAPI-59/UpgradeToNode16' into w/8.1/feature/UTAPI-59/UpgradeToNode16 2022-02-07 14:22:20 -08:00
Naren 5ad53486c3 Merge remote-tracking branch 'origin/feature/UTAPI-59/UpgradeToNode16' into w/7.10/feature/UTAPI-59/UpgradeToNode16 2022-02-07 14:08:27 -08:00
Nicolas Humbert 20479f0dfa Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/UTAPI-59/UpgradeToNode16 2022-02-04 16:56:05 +01:00
Nicolas Humbert da04548b2a Merge remote-tracking branch 'origin/development/7.10' into w/7.10/feature/UTAPI-59/UpgradeToNode16 2022-02-04 16:42:26 +01:00
Nicolas Humbert e86423f88c Merge remote-tracking branch 'origin/development/7.4' into feature/UTAPI-59/UpgradeToNode16 2022-02-04 16:31:58 +01:00
bert-e 7179917edc Merge branches 'w/8.1/feature/UTAPI-44-migrate-github-actions' and 'q/1173/7.10/feature/UTAPI-44-migrate-github-actions' into tmp/octopus/q/8.1 2022-02-03 22:18:40 +00:00
bert-e fb8be0601e Merge branch 'w/7.10/feature/UTAPI-44-migrate-github-actions' into tmp/octopus/q/7.10 2022-02-03 22:18:39 +00:00
Naren 92adb8c320 Merge remote-tracking branch 'origin/improvement/UTAPI-61-lock-bucket-client-version' into w/8.1/improvement/UTAPI-61-lock-bucket-client-version 2022-02-02 16:14:42 -08:00
Naren 2182593c4c improvement: UTAPI-61 lock bucketclient to a version 2022-02-02 15:40:00 -08:00
Thomas Carmet b770331a12 Merge remote-tracking branch 'origin/w/7.10/feature/UTAPI-44-migrate-github-actions' into w/8.1/feature/UTAPI-44-migrate-github-actions 2022-02-02 11:09:09 -08:00
Thomas Carmet 664d7fba55 Add healthcheck to all services 2022-02-02 11:08:21 -08:00
Thomas Carmet e5843723f6 Wait for warp10 to boot before starting pipeline 2022-02-02 11:08:21 -08:00
Taylor McKinnon 0285e4002b pass path to sub process 2022-01-28 16:51:11 -08:00
bert-e 765b149cbf Merge branch 'w/7.10/feature/UTAPI-59/UpgradeToNode16' into tmp/octopus/w/8.1/feature/UTAPI-59/UpgradeToNode16 2022-01-26 23:38:35 +00:00
Nicolas Humbert 7dbbfc5ee1 Update engines node to 16 2022-01-26 18:38:21 -05:00
Nicolas Humbert 1fc6c29864 update Docker node image 2022-01-26 15:50:19 -05:00
Nicolas Humbert 135581fa63 Merge remote-tracking branch 'origin/w/7.10/feature/UTAPI-59/UpgradeToNode16' into w/8.1/feature/UTAPI-59/UpgradeToNode16 2022-01-26 15:47:41 -05:00
Nicolas Humbert 1640b641a9 Merge remote-tracking branch 'origin/feature/UTAPI-59/UpgradeToNode16' into w/7.10/feature/UTAPI-59/UpgradeToNode16 2022-01-26 15:41:45 -05:00
Nicolas Humbert a018192741 UTAPI-59 upgrade to Node 16 2022-01-26 15:24:21 -05:00
Artem Bakalov c27442cc89 S3C-5397 - adds exponential backoff to metadata requests to prevent failures during leader elections 2022-01-26 11:02:26 -08:00
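Editor's note: exponential backoff spaces retries out so transient failures during a leader election don't cascade into a failure storm; a generic sketch — retry count and delay values are illustrative, not the ones used in S3C-5397:

```js
// Generic exponential-backoff retry helper.
async function withBackoff(fn, { retries = 5, baseMs = 100, capMs = 5000 } = {}) {
    for (let attempt = 0; ; attempt += 1) {
        try {
            return await fn();
        } catch (err) {
            if (attempt >= retries) {
                throw err; // give up after the final attempt
            }
            const delay = Math.min(capMs, baseMs * 2 ** attempt);
            await new Promise(resolve => setTimeout(resolve, delay));
        }
    }
}

// e.g. withBackoff(() => metadataClient.listObjects(params))
```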
Thomas Carmet 6642681b58 Merge remote-tracking branch 'origin/feature/UTAPI-44-migrate-github-actions' into w/7.10/feature/UTAPI-44-migrate-github-actions 2022-01-18 15:35:27 -08:00
Thomas Carmet bffd1d2a32 ability to overwrite python interpreter 2022-01-17 11:17:01 -08:00
Thomas Carmet 3b13638b27 UTAPI-44 migrate to github actions 2022-01-17 11:17:01 -08:00
bert-e 89bb6c6e5d Merge branch 'improvement/UTAPI-58/limit_max_size_of_snapshots' into tmp/octopus/w/8.1/improvement/UTAPI-58/limit_max_size_of_snapshots 2022-01-07 19:45:00 +00:00
Taylor McKinnon 5cebcedead (f) remove extra line 2022-01-07 11:44:42 -08:00
bert-e 792580c6d6 Merge branch 'improvement/UTAPI-58/limit_max_size_of_snapshots' into tmp/octopus/w/8.1/improvement/UTAPI-58/limit_max_size_of_snapshots 2022-01-06 20:00:26 +00:00
Taylor McKinnon 41f9cc7c7d impr(UTAPI-58): Limit maximum size of snapshots 2022-01-06 11:53:00 -08:00
bert-e 29475f1b9a Merge branches 'w/8.1/improvement/UTAPI-55/warp10_request_error_logging' and 'q/1202/7.10/improvement/UTAPI-55/warp10_request_error_logging' into tmp/octopus/q/8.1 2021-12-06 21:47:28 +00:00
bert-e dec503fda3 Merge branch 'improvement/UTAPI-55/warp10_request_error_logging' into q/7.10 2021-12-06 21:47:28 +00:00
bert-e 255f428b84 Merge branch 'improvement/UTAPI-55/warp10_request_error_logging' into tmp/octopus/w/8.1/improvement/UTAPI-55/warp10_request_error_logging 2021-12-06 20:51:32 +00:00
Taylor McKinnon 2d7274c559 impr(UTAPI-55): Improve warp 10 request error logging 2021-12-06 12:49:28 -08:00
bert-e 202dc39eb5 Merge branch 'feature/UTAPI-56/expose_warp10_request_timeouts_in_config' into tmp/octopus/w/8.1/feature/UTAPI-56/expose_warp10_request_timeouts_in_config 2021-12-03 19:07:23 +00:00
Taylor McKinnon c96bc06a4b ft(UTAPI-56): Expose warp 10 request timeouts in config 2021-12-03 11:05:09 -08:00
bert-e 25bd285d35 Merge branch 'bugfix/UTAPI-54/fix_service_user_test' into tmp/octopus/w/8.1/bugfix/UTAPI-54/fix_service_user_test 2021-11-29 20:44:52 +00:00
Taylor McKinnon 19b1974e18 bf(UTAPI-54): create service user using prefix during test setup 2021-11-29 12:44:23 -08:00
Taylor McKinnon 5a4ba9f72a bf: add yarn.lock to image 2021-11-23 10:19:41 -08:00
bert-e 494381beec Merge branch 'bugfix/UTAPI-53/handle_missing_content_length' into tmp/octopus/w/8.1/bugfix/UTAPI-53/handle_missing_content_length 2021-11-23 17:58:18 +00:00
Taylor McKinnon 6c1a2c87fb bf(UTAPI-53): skip objects without a content-length during reindex 2021-11-23 09:56:52 -08:00
Taylor McKinnon be10ca2ba8 Merge remote-tracking branch 'origin/feature/UTAPI-50/bump_version_to_7.10.5' into w/8.1/feature/UTAPI-50/bump_version_to_7.10.5 2021-11-18 10:05:37 -08:00
Taylor McKinnon 5e68e14d02 ft(UTAPI-50): Bump version to 7.10.5 2021-11-18 09:58:58 -08:00
bert-e 1800407606 Merge branch 'bugfix/UTAPI-49/fix_config_file_schema_event_filter' into tmp/octopus/w/8.1/bugfix/UTAPI-49/fix_config_file_schema_event_filter 2021-11-17 22:39:27 +00:00
Taylor McKinnon 622026e0c6 bf(UTAPI-49): Fix event filter config schema 2021-11-17 14:36:49 -08:00
Taylor McKinnon 029fe17019 Merge remote-tracking branch 'origin/feature/UTAPI-48/bump_version_to_7.10.4' into w/8.1/feature/UTAPI-48/bump_version_to_7.10.4 2021-11-17 09:46:16 -08:00
Taylor McKinnon 9600a1ce59 ft(UTAPI-48): Bump version to 7.10.4 2021-11-17 09:34:11 -08:00
bert-e ada9e2bf55 Merge branches 'w/8.1/bugfix/UTAPI-46-redisv2-backoff' and 'q/1180/7.10/bugfix/UTAPI-46-redisv2-backoff' into tmp/octopus/q/8.1 2021-11-16 23:27:56 +00:00
bert-e d217850863 Merge branch 'bugfix/UTAPI-46-redisv2-backoff' into q/7.10 2021-11-16 23:27:56 +00:00
bert-e 260d2f83ef Merge branch 'bugfix/UTAPI-46-redisv2-backoff' into tmp/octopus/w/8.1/bugfix/UTAPI-46-redisv2-backoff 2021-11-16 22:55:55 +00:00
Rached Ben Mustapha 27a36b3c51 bugfix: pin python redis client for tests 2021-11-16 14:08:03 -08:00
Rached Ben Mustapha 34ed98b7fb bugfix: support redis retry params in v2
(cherry picked from commit 5908d15cd6bb7da551bd7392c17675e07bef3456)
2021-11-16 12:16:38 -08:00
bert-e bbb6764aa7 Merge branches 'w/8.1/feature/UTAPI-43/event_allow_deny_filter' and 'q/1174/7.10/feature/UTAPI-43/event_allow_deny_filter' into tmp/octopus/q/8.1 2021-11-16 17:50:25 +00:00
bert-e 1f672d343b Merge branch 'feature/UTAPI-43/event_allow_deny_filter' into q/7.10 2021-11-16 17:50:25 +00:00
bert-e e63f9c9009 Merge branch 'feature/UTAPI-43/event_allow_deny_filter' into tmp/octopus/w/8.1/feature/UTAPI-43/event_allow_deny_filter 2021-11-16 17:35:45 +00:00
Taylor McKinnon 6c53e19ce2 ft(UTAPI-43): Add allow/deny filter for events 2021-11-16 09:35:00 -08:00
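Editor's note: an allow/deny filter keeps or drops incoming events by operation name before ingestion. A sketch of the semantics with an assumed config shape and event field (not the actual UTAPI-43 schema):

```js
// Hypothetical semantics: an `allow` list keeps only listed events, a
// `deny` list drops listed events; at most one of the two is configured.
function buildEventFilter({ allow, deny } = {}) {
    if (allow) {
        const allowed = new Set(allow);
        return event => allowed.has(event.operationId);
    }
    if (deny) {
        const denied = new Set(deny);
        return event => !denied.has(event.operationId);
    }
    return () => true; // no filter configured: keep everything
}

const keep = buildEventFilter({ deny: ['getObject'] });
// keep({ operationId: 'putObject' }) === true
```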
Rached Ben Mustapha 5650b072ce Merge remote-tracking branch 'origin/bugfix/UTAPI-38-wait-for-redis-ready-main' into w/8.1/bugfix/UTAPI-38-wait-for-redis-ready-main 2021-11-08 18:25:43 +00:00
Rached Ben Mustapha 8358eb7166 chore: bump version 2021-11-08 17:55:05 +00:00
Rached Ben Mustapha 1ef954975e Merge remote-tracking branch 'origin/bugfix/UTAPI-38-wait-for-redis-ready-main' into w/8.1/bugfix/UTAPI-38-wait-for-redis-ready-main 2021-11-05 02:47:51 +00:00
Rached Ben Mustapha 5afbaca3df bugfix: try and recover from redis connection errors 2021-11-05 02:25:28 +00:00
Rached Ben Mustapha c7c4fcf8dc improvement: install backo 2021-11-05 02:25:28 +00:00
Rached Ben Mustapha d01f348867 improvement: upgrade ioredis 2021-11-05 02:25:25 +00:00
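Editor's note: the three commits above (upgrade ioredis, install backo, recover from connection errors) center on reconnection backoff. A minimal sketch using ioredis's retryStrategy and reconnectOnError hooks; the delay values and error predicate are illustrative:

```js
const Redis = require('ioredis');

// ioredis calls retryStrategy with the attempt count and waits the
// returned number of milliseconds before reconnecting -- an inline
// equivalent of a backo-style backoff.
const redis = new Redis({
    retryStrategy: times => Math.min(times * 200, 3000),
    reconnectOnError: err => err.message.startsWith('READONLY'),
});
```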
bert-e 1ae4abee19 Merge branch 'feature/UTAPI-41/release_7_10_2' into q/7.10 2021-11-01 22:13:17 +00:00
bert-e 989715af95 Merge branches 'w/8.1/feature/UTAPI-41/release_7_10_2' and 'q/1169/7.10/feature/UTAPI-41/release_7_10_2' into tmp/octopus/q/8.1 2021-11-01 22:13:17 +00:00
Taylor McKinnon 87887e57a0 Merge remote-tracking branch 'origin/feature/UTAPI-41/release_7_10_2' into w/8.1/feature/UTAPI-41/release_7_10_2 2021-11-01 12:39:30 -07:00
Taylor McKinnon 7f03b7759c v7.10.2 2021-11-01 12:34:19 -07:00
bert-e e36d376dae Merge branch 'bugfix/UTAPI-42/change_upstream_warp10_repo' into q/7.10 2021-11-01 19:31:00 +00:00
bert-e a8bccfd261 Merge branches 'w/8.1/bugfix/UTAPI-42/change_upstream_warp10_repo' and 'q/1167/7.10/bugfix/UTAPI-42/change_upstream_warp10_repo' into tmp/octopus/q/8.1 2021-11-01 19:31:00 +00:00
bert-e 89bd9751e4 Merge branch 'bugfix/UTAPI-42/change_upstream_warp10_repo' into tmp/octopus/w/8.1/bugfix/UTAPI-42/change_upstream_warp10_repo 2021-11-01 19:24:49 +00:00
Taylor McKinnon 7344a0801c bf(UTAPI-42): Update upstream warp 10 image repo 2021-11-01 12:22:45 -07:00
bert-e b1d55217a8 Merge branches 'w/8.1/bugfix/UTAPI-39/add_crr_metrics_for_v2' and 'q/1159/7.10/bugfix/UTAPI-39/add_crr_metrics_for_v2' into tmp/octopus/q/8.1 2021-11-01 19:17:43 +00:00
bert-e 67c6806e99 Merge branch 'bugfix/UTAPI-39/add_crr_metrics_for_v2' into q/7.10 2021-11-01 19:17:42 +00:00
bert-e 0bb15ae1c6 Merge branch 'bugfix/UTAPI-39/add_crr_metrics_for_v2' into tmp/octopus/w/8.1/bugfix/UTAPI-39/add_crr_metrics_for_v2 2021-11-01 19:10:24 +00:00
Taylor McKinnon 0001b23c32 bf(UTAPI-39): Add crr metric operations for Utapiv2 2021-11-01 12:09:38 -07:00
bert-e cb9e79be48 Merge branch 'w/7.10/feature/UTAPI-40/release_7_4_11' into tmp/octopus/w/8.1/feature/UTAPI-40/release_7_4_11 2021-10-29 23:04:25 +00:00
Taylor McKinnon 3390c910e3 Merge remote-tracking branch 'origin/feature/UTAPI-40/release_7_4_11' into w/7.10/feature/UTAPI-40/release_7_4_11 2021-10-29 16:03:58 -07:00
Taylor McKinnon fa4c353b67 ft(UTAPI-40): bump version to 7.4.11 2021-10-29 15:58:22 -07:00
bert-e c528f75d98 Merge branches 'w/8.1/bugfix/UTAPI-34-implementCRRActions' and 'q/1154/7.10/bugfix/UTAPI-34-implementCRRActions' into tmp/octopus/q/8.1 2021-10-29 18:40:12 +00:00
bert-e 38fb9fb139 Merge branches 'w/7.10/bugfix/UTAPI-34-implementCRRActions' and 'q/1154/7.4/bugfix/UTAPI-34-implementCRRActions' into tmp/octopus/q/7.10 2021-10-29 18:40:12 +00:00
bert-e 215376910c Merge branch 'bugfix/UTAPI-34-implementCRRActions' into q/7.4 2021-10-29 18:40:12 +00:00
bert-e 14779a24ec Merge branch 'w/7.10/bugfix/UTAPI-34-implementCRRActions' into tmp/octopus/w/8.1/bugfix/UTAPI-34-implementCRRActions 2021-10-28 23:08:57 +00:00
Taylor McKinnon d8aafb5b90 Merge remote-tracking branch 'origin/bugfix/UTAPI-34-implementCRRActions' into w/7.10/bugfix/UTAPI-34-implementCRRActions 2021-10-28 16:08:38 -07:00
Jonathan Gramain 67a8a9b94d bf(UTAPI-34): Add metric types for replication 2021-10-28 16:05:00 -07:00
bert-e d5947cd548 Merge branch 'improvement/UTAPI-36/bump_vault_cpu_req' into q/7.10 2021-10-22 16:04:50 +00:00
bert-e df79f65abf Merge branches 'w/8.1/improvement/UTAPI-36/bump_vault_cpu_req' and 'q/1152/7.10/improvement/UTAPI-36/bump_vault_cpu_req' into tmp/octopus/q/8.1 2021-10-22 16:04:50 +00:00
bert-e be1be375f3 Merge branch 'improvement/UTAPI-36/bump_vault_cpu_req' into tmp/octopus/w/8.1/improvement/UTAPI-36/bump_vault_cpu_req 2021-10-21 22:00:49 +00:00
Taylor McKinnon 5668d16c2e impr(UTAPI-36): Bump vault cpu request and limit 2021-10-21 14:59:06 -07:00
bert-e df29658a2f Merge branch 'bugfix/UTAPI-28/catch_all_listing_errors' into tmp/octopus/w/7.10/bugfix/UTAPI-28/catch_all_listing_errors 2021-10-20 23:41:11 +00:00
bert-e 3a5d379510 Merge branch 'w/7.10/bugfix/UTAPI-28/catch_all_listing_errors' into tmp/octopus/w/8.1/bugfix/UTAPI-28/catch_all_listing_errors 2021-10-20 23:41:11 +00:00
bert-e 3a80fb708e Merge branch 'w/7.10/bugfix/UTAPI-35/backport_fixes_for_74' into tmp/octopus/w/8.1/bugfix/UTAPI-35/backport_fixes_for_74 2021-10-19 20:46:32 +00:00
bert-e 5d173d3bc9 Merge branch 'bugfix/UTAPI-35/backport_fixes_for_74' into tmp/octopus/w/7.10/bugfix/UTAPI-35/backport_fixes_for_74 2021-10-19 20:46:32 +00:00
Taylor McKinnon a29fbf4bb1 bf(UTAPI-26): Catch all listing errors and reraise as InvalidListing 2021-10-19 13:42:38 -07:00
Taylor McKinnon 05ff10a343 bf(UTAPI-27): Fail bucketd request after repeated errors
(cherry picked from commit 8da7c90691)
2021-10-19 13:30:02 -07:00
Taylor McKinnon b2d725e2b8 bf(UTAPI-21): convert --workers flag to int
(cherry picked from commit d44b60ec0e)
2021-10-19 13:29:53 -07:00
scality-gelbart 4463a7172a Update s3_bucketd.py
(cherry picked from commit ba3dbb0100)
2021-10-19 13:29:45 -07:00
scality-gelbart 6d9b47cb79 Update s3_bucketd.py
(cherry picked from commit 588ccf7443)
2021-10-19 13:29:38 -07:00
Taylor McKinnon ec7d68075b bf(S3C-3505): Add support for --bucket flag to s3_reindex.py
(cherry picked from commit 1e4b7bd9f2)
2021-10-19 13:28:59 -07:00
Taylor McKinnon 945aa9665f Merge remote-tracking branch 'origin/feature/UTAPI-33/add_ensure_service_user' into w/8.1/feature/UTAPI-33/add_ensure_service_user 2021-10-15 13:20:12 -07:00
Taylor McKinnon 2d876c17cf ft(UTAPI-33): Add ensureServiceUser script 2021-10-15 13:12:42 -07:00
bert-e 3b2d4a18d4 Merge branch 'improvement/UTAPI-32/change_service_user_arnPrefix_to_full_arn' into tmp/octopus/w/8.1/improvement/UTAPI-32/change_service_user_arnPrefix_to_full_arn 2021-10-12 19:27:41 +00:00
Taylor McKinnon e8519ceebb impr(UTAPI-32): Change service user arnPrefix to full arn 2021-10-12 12:25:58 -07:00
Thomas Carmet fa3fb82e5c Merge branch 'w/7.10/feature/UTAPI-30-align-package-version' into w/8.1/feature/UTAPI-30-align-package-version 2021-10-07 10:36:17 -07:00
Thomas Carmet 5938c8be5c Merge branch 'feature/UTAPI-30-align-package-version' into w/7.10/feature/UTAPI-30-align-package-version 2021-10-07 10:35:08 -07:00
Thomas Carmet a979a40260 UTAPI-30 set package.json version accordingly 2021-10-07 10:33:31 -07:00
bert-e 0001d2218a Merge branch 'bugfix/UTAPI-29/fix_bucketd_tls_config' into tmp/octopus/w/8.1/bugfix/UTAPI-29/fix_bucketd_tls_config 2021-10-06 00:30:05 +00:00
Taylor McKinnon cf7b302414 bf(UTAPI-29): Fix bucketd tls config 2021-10-05 17:28:42 -07:00
bert-e b38000c771 Merge branch 'bugfix/UTAPI-27/max_retries_for_bucketd_requests' into tmp/octopus/w/8.1/bugfix/UTAPI-27/max_retries_for_bucketd_requests 2021-10-04 20:41:14 +00:00
Taylor McKinnon 8da7c90691 bf(UTAPI-27): Fail bucketd request after repeated errors 2021-10-04 13:40:39 -07:00
bert-e 34d38fb2b7 Merge branches 'w/8.1/feature/UTAPI-26/add_service_user' and 'q/1141/7.10/feature/UTAPI-26/add_service_user' into tmp/octopus/q/8.1 2021-10-01 17:43:00 +00:00
bert-e 6d6c455de4 Merge branch 'feature/UTAPI-26/add_service_user' into q/7.10 2021-10-01 17:43:00 +00:00
bert-e 6d99ac3dae Merge branch 'feature/UTAPI-26/add_service_user' into tmp/octopus/w/8.1/feature/UTAPI-26/add_service_user 2021-10-01 17:34:30 +00:00
bert-e 84a9485af6 Merge branch 'feature/UTAPI-24/limit_user_credentials_via_filtering' into q/7.10 2021-10-01 17:24:19 +00:00
bert-e 7755a3fa3d Merge branches 'w/8.1/feature/UTAPI-24/limit_user_credentials_via_filtering' and 'q/1138/7.10/feature/UTAPI-24/limit_user_credentials_via_filtering' into tmp/octopus/q/8.1 2021-10-01 17:24:19 +00:00
bert-e 9bca308db7 Merge branch 'feature/UTAPI-24/limit_user_credentials_via_filtering' into tmp/octopus/w/8.1/feature/UTAPI-24/limit_user_credentials_via_filtering 2021-10-01 17:18:46 +00:00
bert-e 3a2b345ada Merge branch 'feature/UTAPI-23/limit_account_keys_via_filtering' into q/7.10 2021-10-01 16:56:45 +00:00
bert-e 0159387ba9 Merge branch 'q/1137/7.10/feature/UTAPI-23/limit_account_keys_via_filtering' into tmp/normal/q/8.1 2021-10-01 16:56:45 +00:00
Taylor McKinnon 05017af754 Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/UTAPI-23/limit_account_keys_via_filtering 2021-10-01 09:48:47 -07:00
Taylor McKinnon 5bef96367c ft(UTAPI-26): Add authorization for service user 2021-10-01 09:08:04 -07:00
Taylor McKinnon 774aaef0dd ft(UTAPI-24): Limit user credentials 2021-10-01 09:07:12 -07:00
bert-e 6544f118c0 Merge branch 'feature/UTAPI-23/limit_account_keys_via_filtering' into tmp/octopus/w/8.1/feature/UTAPI-23/limit_account_keys_via_filtering 2021-10-01 16:04:37 +00:00
Taylor McKinnon 0e947f255b ft(UTAPI-23): Limit account level credentials 2021-10-01 09:04:12 -07:00
bert-e 42d92e68ac Merge branch 'bugfix/UTAPI-25_add-bucket-lifecycle-operations' into q/7.10 2021-09-29 22:22:37 +00:00
bert-e b970847ca3 Merge branches 'w/8.1/bugfix/UTAPI-25_add-bucket-lifecycle-operations' and 'q/1140/7.10/bugfix/UTAPI-25_add-bucket-lifecycle-operations' into tmp/octopus/q/8.1 2021-09-29 22:22:37 +00:00
bert-e f19f6c7ab6 Merge branch 'bugfix/UTAPI-25_add-bucket-lifecycle-operations' into tmp/octopus/w/8.1/bugfix/UTAPI-25_add-bucket-lifecycle-operations 2021-09-29 16:31:47 +00:00
Ilke ba85f3e2a7 bf(S3C-4872): Support bucket lifecycle operations 2021-09-29 09:18:14 -07:00
bert-e 54362134de Merge branches 'w/8.1/feature/UTAPI-7-pin-arsenal' and 'q/1122/7.10/feature/UTAPI-7-pin-arsenal' into tmp/octopus/q/8.1 2021-09-14 20:03:52 +00:00
bert-e f8e85ee7cc Merge branch 'w/7.10/feature/UTAPI-7-pin-arsenal' into tmp/octopus/q/7.10 2021-09-14 20:03:51 +00:00
bert-e 959a26cc62 Merge branch 'bugfix/UTAPI-21__convert_reindex_workers_flag_to_int' into tmp/octopus/w/8.1/bugfix/UTAPI-21__convert_reindex_workers_flag_to_int 2021-09-13 16:54:30 +00:00
Taylor McKinnon d44b60ec0e bf(UTAPI-21): convert --workers flag to int 2021-09-13 09:47:28 -07:00
bert-e ea3fff30b8 Merge branch 'bugfix/S3C-4784_redis-connection-build-up' into q/7.10 2021-09-02 17:22:21 +00:00
bert-e 96e2b2c731 Merge branches 'w/8.1/bugfix/S3C-4784_redis-connection-build-up' and 'q/1128/7.10/bugfix/S3C-4784_redis-connection-build-up' into tmp/octopus/q/8.1 2021-09-02 17:22:21 +00:00
bert-e 85182216e4 Merge branch 'bugfix/S3C-4784_redis-connection-build-up' into tmp/octopus/w/8.1/bugfix/S3C-4784_redis-connection-build-up 2021-09-02 17:15:50 +00:00
= bc8c791170 disconnect stuck connection on failover before retry 2021-09-01 19:49:22 -07:00
Thomas Carmet 9532aec058 Merge branch 'w/7.10/feature/UTAPI-7-pin-arsenal' into w/8.1/feature/UTAPI-7-pin-arsenal 2021-09-01 14:08:24 -07:00
Thomas Carmet 37fa1e184f Merge branch 'feature/UTAPI-7-pin-arsenal' into w/7.10/feature/UTAPI-7-pin-arsenal 2021-09-01 14:07:23 -07:00
Thomas Carmet 6ea61e4c49 UTAPI-7 pin arsenal version 2021-09-01 14:06:17 -07:00
bert-e b815471663 Merge branch 'bugfix/S3C-4784_redis-connection-buildup-stabilization' into tmp/octopus/w/7.10/bugfix/S3C-4784_redis-connection-buildup-stabilization 2021-08-27 16:27:36 +00:00
bert-e 214bf4189f Merge branch 'w/7.10/bugfix/S3C-4784_redis-connection-buildup-stabilization' into tmp/octopus/w/8.1/bugfix/S3C-4784_redis-connection-buildup-stabilization 2021-08-27 16:27:36 +00:00
= 44fc07ade9 disconnect stuck connection on failover before retry 2021-08-27 09:11:06 -07:00
bert-e 89c5ae0560 Merge branch 'bugfix/UTAPI-6_warp10_leak_fix_jmx' into tmp/octopus/w/7.10/bugfix/UTAPI-6_warp10_leak_fix_jmx 2021-08-16 17:22:02 +00:00
bert-e 2e4c2c66d5 Merge branch 'w/7.10/bugfix/UTAPI-6_warp10_leak_fix_jmx' into tmp/octopus/w/8.1/bugfix/UTAPI-6_warp10_leak_fix_jmx 2021-08-16 17:22:02 +00:00
Taylor McKinnon c5f24d619a bf(UTAPI-6): Update to fixed version and add jmx exporter 2021-08-16 10:21:32 -07:00
bert-e b2e4683c5d Merge branch 'w/7.10/feature/UTAPI-5-bump-werelogs' into tmp/octopus/w/8.1/feature/UTAPI-5-bump-werelogs 2021-08-12 17:25:13 +00:00
Thomas Carmet bae63a036c Merge remote-tracking branch 'origin/feature/UTAPI-5-bump-werelogs' into w/7.10/feature/UTAPI-5-bump-werelogs 2021-08-12 10:23:24 -07:00
Thomas Carmet 2fc2531b97 UTAPI-5 update werelogs to tagged version 2021-08-12 10:19:07 -07:00
bert-e f96bc66c5e Merge branch 'feature/UTAPI-1_prometheus_metrics' into q/7.10 2021-07-31 00:40:12 +00:00
bert-e 5487001cee Merge branches 'w/8.1/feature/UTAPI-1_prometheus_metrics' and 'q/1093/7.10/feature/UTAPI-1_prometheus_metrics' into tmp/octopus/q/8.1 2021-07-31 00:40:12 +00:00
bert-e 3779c8c144 Merge branch 'feature/UTAPI-1_prometheus_metrics' into tmp/octopus/w/8.1/feature/UTAPI-1_prometheus_metrics 2021-07-31 00:30:42 +00:00
= 12a900f436 Prometheus Exporters for Nodejs, Redis, Warp10 2021-07-30 17:25:28 -07:00
bert-e ff23d1d5cd Merge branch 'bugfix/S3C-4550_avoid_reindex_diff_flapping' into q/7.10 2021-07-01 22:18:18 +00:00
bert-e 7f3f6bb753 Merge branches 'w/8.1/bugfix/S3C-4550_avoid_reindex_diff_flapping' and 'q/1082/7.10/bugfix/S3C-4550_avoid_reindex_diff_flapping' into tmp/octopus/q/8.1 2021-07-01 22:18:18 +00:00
bert-e 8715b0d096 Merge branch 'feature/S3C-4439_bucket-encryption-api-operations-to-utapi-v2' into tmp/octopus/w/8.1/feature/S3C-4439_bucket-encryption-api-operations-to-utapi-v2 2021-06-29 02:23:39 +00:00
artem bakalov db69b03879 add Encryption api funcs to metrics 2021-06-28 18:51:52 -07:00
bert-e 7e38de823a Merge branch 'bugfix/S3C-4550_avoid_reindex_diff_flapping' into tmp/octopus/w/8.1/bugfix/S3C-4550_avoid_reindex_diff_flapping 2021-06-24 19:47:21 +00:00
Taylor McKinnon f5b15573fe bf(S3C-4550): Do not take previous reindex diffs into account when calculating its own 2021-06-24 12:46:25 -07:00
bert-e 1695a01f9e Merge branch 'feature/S3C-4240_specific_bucxket_reindex_flag' into tmp/octopus/w/8.1/feature/S3C-4240_specific_bucxket_reindex_flag 2021-06-24 18:28:13 +00:00
Taylor McKinnon 1f6e2642d0 ft(S3C-4240): Allow reindexing a specific bucket 2021-06-24 11:27:52 -07:00
bert-e 77042591b6 Merge branch 'bugfix/S3C-4429_null_sizeD_during_reindex' into tmp/octopus/w/8.1/bugfix/S3C-4429_null_sizeD_during_reindex 2021-05-26 19:28:25 +00:00
Taylor McKinnon 13351ddcd8 bf(S3C-4429): null sizeD calculated during reindex 2021-05-26 12:23:04 -07:00
bert-e a2d1c47451 Merge branches 'w/8.1/bugfix/S3C-3692_fix_accountid_support_in_list_metrics_js' and 'q/1068/7.10/bugfix/S3C-3692_fix_accountid_support_in_list_metrics_js' into tmp/octopus/q/8.1 2021-05-25 19:10:04 +00:00
bert-e 3b6e7cecd0 Merge branch 'bugfix/S3C-3692_fix_accountid_support_in_list_metrics_js' into q/7.10 2021-05-25 19:10:04 +00:00
bert-e 0adc018e10 Merge branch 'bugfix/S3C-3692_fix_accountid_support_in_list_metrics_js' into tmp/octopus/w/8.1/bugfix/S3C-3692_fix_accountid_support_in_list_metrics_js 2021-05-25 17:15:57 +00:00
Taylor McKinnon 1357840566 ft(S3C-3692): Add accountId -> canonicalId conversion for ListRecentMetrics 2021-05-25 10:15:13 -07:00
bert-e 3af6c176b3 Merge branch 'improvement/S3C-4388_adjust_reindex_log_levels' into tmp/octopus/w/8.1/improvement/S3C-4388_adjust_reindex_log_levels 2021-05-25 00:14:37 +00:00
Taylor McKinnon 8ac1d1b212 impr(S3C-4388): Adjust reindex task log levels 2021-05-24 17:13:05 -07:00
Taylor McKinnon edbae18a62 Merge remote-tracking branch 'origin/bugfix/S3C-4424_switch_protobuf_ext_to_git_lfs' into w/8.1/bugfix/S3C-4424_switch_protobuf_ext_to_git_lfs 2021-05-24 16:32:17 -07:00
Taylor McKinnon 89fc600b81 bf(S3C-4424): Switch protobuf extension to use git lfs 2021-05-24 15:25:17 -07:00
bert-e 16c3782ca4 Merge branch 'bugfix/S3C-4151_fix_user_support' into tmp/octopus/w/8.1/bugfix/S3C-4151_fix_user_support 2021-04-15 21:12:11 +00:00
Taylor McKinnon db4424eb59 bf(S3C-4151): Correctly pass api method to Vault request context 2021-04-15 14:09:47 -07:00
bert-e d5967bcee1 Merge branch 'w/7.10/bugfix/S3C-3996-backport-7.4' into tmp/octopus/w/8.1/bugfix/S3C-3996-backport-7.4 2021-04-01 17:08:52 +00:00
bert-e 91a16f9a19 Merge branch 'w/7.9/bugfix/S3C-3996-backport-7.4' into tmp/octopus/w/7.10/bugfix/S3C-3996-backport-7.4 2021-04-01 17:08:52 +00:00
bert-e b7dce7b85c Merge branch 'bugfix/S3C-3996-backport-7.4' into tmp/octopus/w/7.9/bugfix/S3C-3996-backport-7.4 2021-04-01 17:08:52 +00:00
Gregoire Doumergue acbf8880f6 S3C-3996: Reduce logging amount from s3_bucketd.py 2021-04-01 19:03:13 +02:00
bert-e e0d816a759 Merge branch 'bugfix/S3C-3996/reduce-reindex-logging' into tmp/octopus/w/8.1/bugfix/S3C-3996/reduce-reindex-logging 2021-04-01 07:21:29 +00:00
Gregoire Doumergue 67545ef783 S3C-3996: Reduce logging amount from s3_bucketd.py 2021-03-31 08:53:11 +02:00
Alexander Chan 3d0b92f319 bugfix: ZENKO-3300 fix incrby call 2021-03-18 15:18:38 -07:00
bert-e 9e85797380 Merge branches 'w/8.1/bugfix/S3C-4061-missing-content-length-workaround' and 'q/1030/7.10/bugfix/S3C-4061-missing-content-length-workaround' into tmp/octopus/q/8.1 2021-03-17 21:30:15 +00:00
bert-e 0591e5a0ef Merge branches 'w/7.9/bugfix/S3C-4061-missing-content-length-workaround' and 'q/1030/7.9.0/bugfix/S3C-4061-missing-content-length-workaround' into tmp/octopus/q/7.9 2021-03-17 21:30:14 +00:00
bert-e 75e09d1a82 Merge branches 'w/7.10/bugfix/S3C-4061-missing-content-length-workaround' and 'q/1030/7.9/bugfix/S3C-4061-missing-content-length-workaround' into tmp/octopus/q/7.10 2021-03-17 21:30:14 +00:00
bert-e 498044f2f6 Merge branch 'bugfix/S3C-4061-missing-content-length-workaround' into q/7.9.0 2021-03-17 21:30:14 +00:00
bert-e 2cce109bd6 Merge branch 'w/7.10/bugfix/S3C-4061-missing-content-length-workaround' into tmp/octopus/w/8.1/bugfix/S3C-4061-missing-content-length-workaround 2021-03-17 21:18:52 +00:00
bert-e 31edc21241 Merge branch 'w/7.9/bugfix/S3C-4061-missing-content-length-workaround' into tmp/octopus/w/7.10/bugfix/S3C-4061-missing-content-length-workaround 2021-03-17 21:18:52 +00:00
bert-e fd55afdfd8 Merge branch 'bugfix/S3C-4061-missing-content-length-workaround' into tmp/octopus/w/7.9/bugfix/S3C-4061-missing-content-length-workaround 2021-03-17 21:18:51 +00:00
bert-e 7eba6d84a7 Merge branch 'w/7.10/bugfix/S3C-4119-stale-bucket-workaround' into tmp/octopus/w/8.1/bugfix/S3C-4119-stale-bucket-workaround 2021-03-17 21:17:29 +00:00
bert-e 11b972b21c Merge branch 'bugfix/S3C-4119-stale-bucket-workaround' into tmp/octopus/w/7.9/bugfix/S3C-4119-stale-bucket-workaround 2021-03-17 21:17:29 +00:00
bert-e 41a8236fb6 Merge branch 'w/7.9/bugfix/S3C-4119-stale-bucket-workaround' into tmp/octopus/w/7.10/bugfix/S3C-4119-stale-bucket-workaround 2021-03-17 21:17:29 +00:00
scality-gelbart ba3dbb0100 Update s3_bucketd.py 2021-03-17 14:16:52 -07:00
scality-gelbart 588ccf7443 Update s3_bucketd.py 2021-03-17 14:16:17 -07:00
bert-e cecc13d63e Merge branch 'w/7.10/bugfix/S3C-4167_fix_typo_in_error_message' into tmp/octopus/w/8.1/bugfix/S3C-4167_fix_typo_in_error_message 2021-03-17 19:05:49 +00:00
bert-e 246ee6dcd0 Merge branch 'w/7.9/bugfix/S3C-4167_fix_typo_in_error_message' into tmp/octopus/w/7.10/bugfix/S3C-4167_fix_typo_in_error_message 2021-03-17 19:05:49 +00:00
bert-e 163d9bc9a1 Merge branch 'bugfix/S3C-4167_fix_typo_in_error_message' into tmp/octopus/w/7.9/bugfix/S3C-4167_fix_typo_in_error_message 2021-03-17 19:05:48 +00:00
Taylor McKinnon 0f489a88df bf(S3C-4167): Fix typo in error description 2021-03-17 12:04:47 -07:00
bert-e 361bc24c79 Merge branches 'w/8.1/bugfix/S3C-4145_fix_error_responses' and 'q/1023/7.10/bugfix/S3C-4145_fix_error_responses' into tmp/octopus/q/8.1 2021-03-16 03:43:59 +00:00
bert-e 2d61b470d7 Merge branches 'w/7.10/bugfix/S3C-4145_fix_error_responses' and 'q/1023/7.9/bugfix/S3C-4145_fix_error_responses' into tmp/octopus/q/7.10 2021-03-16 03:43:59 +00:00
bert-e daaa8d7391 Merge branches 'w/7.9/bugfix/S3C-4145_fix_error_responses' and 'q/1023/7.9.0/bugfix/S3C-4145_fix_error_responses' into tmp/octopus/q/7.9 2021-03-16 03:43:59 +00:00
bert-e 8f6d8ea7e6 Merge branch 'bugfix/S3C-4145_fix_error_responses' into q/7.9.0 2021-03-16 03:43:59 +00:00
bert-e 6e66608d0e Merge branches 'w/8.1/feature/S3C-4100_manual_adjustment_task' and 'q/998/7.10/feature/S3C-4100_manual_adjustment_task' into tmp/octopus/q/8.1 2021-03-16 03:43:45 +00:00
bert-e 67ec88259c Merge branches 'w/7.10/feature/S3C-4100_manual_adjustment_task' and 'q/998/7.9/feature/S3C-4100_manual_adjustment_task' into tmp/octopus/q/7.10 2021-03-16 03:43:45 +00:00
bert-e 645510460f Merge branches 'w/7.9/feature/S3C-4100_manual_adjustment_task' and 'q/998/7.9.0/feature/S3C-4100_manual_adjustment_task' into tmp/octopus/q/7.9 2021-03-16 03:43:44 +00:00
bert-e d456965c9b Merge branch 'feature/S3C-4100_manual_adjustment_task' into q/7.9.0 2021-03-16 03:43:44 +00:00
bert-e 00a3475bf6 Merge branch 'w/7.10/bugfix/S3C-4145_fix_error_responses' into tmp/octopus/w/8.1/bugfix/S3C-4145_fix_error_responses 2021-03-16 01:53:05 +00:00
bert-e 679cad1397 Merge branch 'w/7.9/bugfix/S3C-4145_fix_error_responses' into tmp/octopus/w/7.10/bugfix/S3C-4145_fix_error_responses 2021-03-16 01:53:05 +00:00
bert-e 0efe45930c Merge branch 'bugfix/S3C-4145_fix_error_responses' into tmp/octopus/w/7.9/bugfix/S3C-4145_fix_error_responses 2021-03-16 01:53:05 +00:00
Taylor McKinnon 30be7afac4 bf(S3C-4145): Fix error response correctly 2021-03-15 18:52:36 -07:00
bert-e 3fb8078677 Merge branches 'development/8.1' and 'w/7.10/feature/S3C-4100_manual_adjustment_task' into tmp/octopus/w/8.1/feature/S3C-4100_manual_adjustment_task 2021-03-16 01:33:13 +00:00
bert-e f6654a266f Merge branches 'development/7.10' and 'w/7.9/feature/S3C-4100_manual_adjustment_task' into tmp/octopus/w/7.10/feature/S3C-4100_manual_adjustment_task 2021-03-16 01:33:13 +00:00
bert-e ff57316fc7 Merge branches 'development/7.9' and 'feature/S3C-4100_manual_adjustment_task' into tmp/octopus/w/7.9/feature/S3C-4100_manual_adjustment_task 2021-03-16 01:33:12 +00:00
Taylor McKinnon 42eba88653 remove unneeded assert 2021-03-15 18:33:03 -07:00
bert-e 9f6c56d682 Merge branches 'w/8.1/bugfix/S3C-4139_allow_only_start_timestamp_in_req' and 'q/1013/7.10/bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/q/8.1 2021-03-16 00:14:59 +00:00
bert-e c4e978b8ba Merge branches 'w/7.10/bugfix/S3C-4139_allow_only_start_timestamp_in_req' and 'q/1013/7.9/bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/q/7.10 2021-03-16 00:14:59 +00:00
bert-e 560c2e011b Merge branch 'bugfix/S3C-4139_allow_only_start_timestamp_in_req' into q/7.9.0 2021-03-16 00:14:58 +00:00
bert-e 0619a0316c Merge branches 'w/7.9/bugfix/S3C-4139_allow_only_start_timestamp_in_req' and 'q/1013/7.9.0/bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/q/7.9 2021-03-16 00:14:58 +00:00
bert-e 28618ac3d8 Merge branches 'w/8.1/bugfix/S3C-4137_add_opId_translation_to_ingest_route' and 'q/1002/7.10/bugfix/S3C-4137_add_opId_translation_to_ingest_route' into tmp/octopus/q/8.1 2021-03-16 00:14:27 +00:00
bert-e a183877a6a Merge branches 'w/7.10/bugfix/S3C-4137_add_opId_translation_to_ingest_route' and 'q/1002/7.9/bugfix/S3C-4137_add_opId_translation_to_ingest_route' into tmp/octopus/q/7.10 2021-03-16 00:14:26 +00:00
bert-e e995b6fc45 Merge branch 'bugfix/S3C-4137_add_opId_translation_to_ingest_route' into q/7.9.0 2021-03-16 00:14:26 +00:00
bert-e 90cda62c3e Merge branches 'w/7.9/bugfix/S3C-4137_add_opId_translation_to_ingest_route' and 'q/1002/7.9.0/bugfix/S3C-4137_add_opId_translation_to_ingest_route' into tmp/octopus/q/7.9 2021-03-16 00:14:26 +00:00
bert-e d467389474 Merge branch 'w/7.10/bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/w/8.1/bugfix/S3C-4139_allow_only_start_timestamp_in_req 2021-03-15 19:03:04 +00:00
bert-e fcee5e30ed Merge branch 'w/7.9/bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/w/7.10/bugfix/S3C-4139_allow_only_start_timestamp_in_req 2021-03-15 19:03:04 +00:00
bert-e 078fc12930 Merge branch 'bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/w/7.9/bugfix/S3C-4139_allow_only_start_timestamp_in_req 2021-03-15 19:03:04 +00:00
Taylor McKinnon 360c5b33d7 linting 2021-03-15 12:02:49 -07:00
bert-e 167c5e36fe Merge branch 'w/7.10/bugfix/S3C-4145_fix_error_responses' into tmp/octopus/w/8.1/bugfix/S3C-4145_fix_error_responses 2021-03-15 18:54:40 +00:00
bert-e cbf1ba99d8 Merge branch 'w/7.9/bugfix/S3C-4145_fix_error_responses' into tmp/octopus/w/7.10/bugfix/S3C-4145_fix_error_responses 2021-03-15 18:54:40 +00:00
bert-e 87d082fb71 Merge branch 'bugfix/S3C-4145_fix_error_responses' into tmp/octopus/w/7.9/bugfix/S3C-4145_fix_error_responses 2021-03-15 18:54:39 +00:00
Taylor McKinnon 35d1daba78 bf(S3C-4145): Fix error response 2021-03-15 11:54:15 -07:00
bert-e 8bd5e56ee9 Merge branch 'w/7.10/bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/w/8.1/bugfix/S3C-4139_allow_only_start_timestamp_in_req 2021-03-15 18:51:35 +00:00
bert-e 18445deff0 Merge branch 'w/7.9/bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/w/7.10/bugfix/S3C-4139_allow_only_start_timestamp_in_req 2021-03-15 18:51:35 +00:00
bert-e ed117c1090 Merge branch 'bugfix/S3C-4139_allow_only_start_timestamp_in_req' into tmp/octopus/w/7.9/bugfix/S3C-4139_allow_only_start_timestamp_in_req 2021-03-15 18:51:35 +00:00
Taylor McKinnon 88babe3060 bf(S3C-4137): Allow only start timestamp in listMetrics request 2021-03-15 11:51:02 -07:00
bert-e bd82e9ec8c Merge branch 'w/7.10/bugfix/S3C-4137_add_opId_translation_to_ingest_route' into tmp/octopus/w/8.1/bugfix/S3C-4137_add_opId_translation_to_ingest_route 2021-03-15 17:12:57 +00:00
bert-e 97dabd52dc Merge branch 'w/7.9/bugfix/S3C-4137_add_opId_translation_to_ingest_route' into tmp/octopus/w/7.10/bugfix/S3C-4137_add_opId_translation_to_ingest_route 2021-03-15 17:12:57 +00:00
bert-e 53c45f5318 Merge branch 'bugfix/S3C-4137_add_opId_translation_to_ingest_route' into tmp/octopus/w/7.9/bugfix/S3C-4137_add_opId_translation_to_ingest_route 2021-03-15 17:12:57 +00:00
Taylor McKinnon 10f081e689 bf(S3C-4137): Add operationId translation to ingestion route for v1 compat 2021-03-15 10:12:30 -07:00
bert-e 037b0f5d17 Merge branch 'w/7.10/feature/S3C-4100_manual_adjustment_task' into tmp/octopus/w/8.1/feature/S3C-4100_manual_adjustment_task 2021-03-13 00:55:12 +00:00
bert-e a71d76e85b Merge branch 'w/7.9/feature/S3C-4100_manual_adjustment_task' into tmp/octopus/w/7.10/feature/S3C-4100_manual_adjustment_task 2021-03-13 00:55:12 +00:00
bert-e 76e1a7e0c8 Merge branch 'feature/S3C-4100_manual_adjustment_task' into tmp/octopus/w/7.9/feature/S3C-4100_manual_adjustment_task 2021-03-13 00:55:11 +00:00
Taylor McKinnon 6cf59e2744 ft(S3C-4100): Add task for manual metric adjustment 2021-03-12 16:53:35 -08:00
bert-e a1002dc126 Merge branches 'w/8.1/bugfix/S3C-4085_handle_unauthorized' and 'q/985/7.10/bugfix/S3C-4085_handle_unauthorized' into tmp/octopus/q/8.1 2021-03-09 18:47:12 +00:00
bert-e 2b15684c6c Merge branches 'w/7.10/bugfix/S3C-4085_handle_unauthorized' and 'q/985/7.9/bugfix/S3C-4085_handle_unauthorized' into tmp/octopus/q/7.10 2021-03-09 18:47:12 +00:00
bert-e df40c01b20 Merge branches 'w/7.9/bugfix/S3C-4085_handle_unauthorized' and 'q/985/7.9.0/bugfix/S3C-4085_handle_unauthorized' into tmp/octopus/q/7.9 2021-03-09 18:47:11 +00:00
bert-e 11868de367 Merge branch 'bugfix/S3C-4085_handle_unauthorized' into q/7.9.0 2021-03-09 18:47:11 +00:00
bert-e 6c02f3e109 Merge branches 'w/8.1/bugfix/S3C-4022-bump-warp10' and 'q/989/7.10/bugfix/S3C-4022-bump-warp10' into tmp/octopus/q/8.1 2021-03-09 16:21:55 +00:00
bert-e 06055fa26e Merge branches 'w/7.10/bugfix/S3C-4022-bump-warp10' and 'q/989/7.9/bugfix/S3C-4022-bump-warp10' into tmp/octopus/q/7.10 2021-03-09 16:21:54 +00:00
bert-e 6d26034612 Merge branches 'w/7.9/bugfix/S3C-4022-bump-warp10' and 'q/989/7.9.0/bugfix/S3C-4022-bump-warp10' into tmp/octopus/q/7.9 2021-03-09 16:21:54 +00:00
bert-e 6fe22616d4 Merge branch 'bugfix/S3C-4022-bump-warp10' into q/7.9.0 2021-03-09 16:21:54 +00:00
bert-e 594b34472f Merge branch 'w/7.10/bugfix/S3C-4085_handle_unauthorized' into tmp/octopus/w/8.1/bugfix/S3C-4085_handle_unauthorized 2021-03-09 01:51:19 +00:00
bert-e 98aac303b0 Merge branch 'w/7.9/bugfix/S3C-4085_handle_unauthorized' into tmp/octopus/w/7.10/bugfix/S3C-4085_handle_unauthorized 2021-03-09 01:51:19 +00:00
bert-e 848e74c746 Merge branch 'bugfix/S3C-4085_handle_unauthorized' into tmp/octopus/w/7.9/bugfix/S3C-4085_handle_unauthorized 2021-03-09 01:51:19 +00:00
Taylor McKinnon f7652d58f4 bf(S3C-4085): Don't try to translate resources if auth has failed 2021-03-08 17:50:47 -08:00
bert-e dbfac82feb Merge branch 'w/7.10/bugfix/S3C-4022-bump-warp10' into tmp/octopus/w/8.1/bugfix/S3C-4022-bump-warp10 2021-03-09 01:12:39 +00:00
bert-e 4556bfa4e1 Merge branch 'w/7.9/bugfix/S3C-4022-bump-warp10' into tmp/octopus/w/7.10/bugfix/S3C-4022-bump-warp10 2021-03-09 01:12:39 +00:00
bert-e b1c003ca5e Merge branch 'bugfix/S3C-4022-bump-warp10' into tmp/octopus/w/7.9/bugfix/S3C-4022-bump-warp10 2021-03-09 01:12:39 +00:00
Rahul Padigela 9e1aaf482e bugfix: S3C-4022 bump warp10 2021-03-08 17:12:15 -08:00
bert-e a03af1f05f Merge branches 'w/8.1/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' and 'q/978/7.10/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' into tmp/octopus/q/8.1 2021-03-05 19:43:38 +00:00
bert-e 6c22bf8fd9 Merge branch 'bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' into q/7.9.0 2021-03-05 19:43:37 +00:00
bert-e 8861b798e4 Merge branches 'w/7.10/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' and 'q/978/7.9/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' into tmp/octopus/q/7.10 2021-03-05 19:43:37 +00:00
bert-e 97129fc407 Merge branches 'w/7.9/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' and 'q/978/7.9.0/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' into tmp/octopus/q/7.9 2021-03-05 19:43:37 +00:00
bert-e 387f0f9a9b Merge branch 'w/7.10/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' into tmp/octopus/w/8.1/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation 2021-03-05 00:37:08 +00:00
bert-e 0af492ce15 Merge branch 'w/7.9/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' into tmp/octopus/w/7.10/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation 2021-03-05 00:37:08 +00:00
bert-e 62b326f735 Merge branch 'bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation' into tmp/octopus/w/7.9/bugfix/S3C-4067_handle_multiple_gts_during_checkpoint_creation 2021-03-05 00:37:08 +00:00
Taylor McKinnon f0ea18b697 bf(S3C-4067): Handle multiple GTS during checkpoint creation 2021-03-04 16:35:59 -08:00
bert-e ec250c4df2 Merge branches 'w/8.1/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' and 'q/968/7.10/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' into tmp/octopus/q/8.1 2021-03-05 00:10:12 +00:00
bert-e 2fb0767235 Merge branch 'bugfix/S3C-4049_call_delete_in_slices_rather_than_once' into q/7.9.0 2021-03-05 00:10:11 +00:00
bert-e 1d3e51f2ac Merge branches 'w/7.10/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' and 'q/968/7.9/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' into tmp/octopus/q/7.10 2021-03-05 00:10:11 +00:00
bert-e ea271f77cb Merge branches 'w/7.9/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' and 'q/968/7.9.0/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' into tmp/octopus/q/7.9 2021-03-05 00:10:11 +00:00
bert-e 5a28fc992e Merge branch 'w/7.10/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' into tmp/octopus/w/8.1/bugfix/S3C-4049_call_delete_in_slices_rather_than_once 2021-03-04 19:12:19 +00:00
bert-e f2ee7345bc Merge branch 'w/7.9/bugfix/S3C-4049_call_delete_in_slices_rather_than_once' into tmp/octopus/w/7.10/bugfix/S3C-4049_call_delete_in_slices_rather_than_once 2021-03-04 19:12:19 +00:00
bert-e 3852b996ee Merge branch 'bugfix/S3C-4049_call_delete_in_slices_rather_than_once' into tmp/octopus/w/7.9/bugfix/S3C-4049_call_delete_in_slices_rather_than_once 2021-03-04 19:12:19 +00:00
Taylor McKinnon d90899e4b4 bf(S3C-4049): Call delete in slices rather than once 2021-03-04 11:11:47 -08:00
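The slicing idea in the commit above is straightforward to sketch. The snippet below is a hypothetical illustration only; the `deleteInSlices` helper and the ioredis-style `client.del` call are assumptions, not the commit's actual code:

```js
// Hedged sketch: delete keys in bounded slices instead of one huge call,
// so the backend is never asked to process the full key set at once.
async function deleteInSlices(client, keys, sliceSize = 1000) {
    for (let i = 0; i < keys.length; i += sliceSize) {
        const slice = keys.slice(i, i + sliceSize);
        await client.del(...slice); // one bounded call per slice
    }
}
```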
bert-e b5a5f70d1c Merge branch 'w/7.10/bugfix/S3C-4057_handle_timestamp_conflicts' into tmp/octopus/w/8.1/bugfix/S3C-4057_handle_timestamp_conflicts 2021-03-03 19:21:01 +00:00
bert-e 5ef784023a Merge branch 'w/7.9/bugfix/S3C-4057_handle_timestamp_conflicts' into tmp/octopus/w/7.10/bugfix/S3C-4057_handle_timestamp_conflicts 2021-03-03 19:21:00 +00:00
bert-e 4cf58762a9 Merge branch 'bugfix/S3C-4057_handle_timestamp_conflicts' into tmp/octopus/w/7.9/bugfix/S3C-4057_handle_timestamp_conflicts 2021-03-03 19:21:00 +00:00
Taylor McKinnon 4c8a08903d bf(S3C-4057): Handle timestamp conflicts during ingestion 2021-03-03 11:20:25 -08:00
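One way to picture the conflict handling named above (the function name and the one-unit nudge are illustrative assumptions, not the commit's code):

```js
// Hedged sketch: an event whose timestamp collides with (or precedes)
// the last ingested one is nudged forward so both events are kept
// instead of one overwriting the other.
function resolveTimestamp(lastTimestamp, eventTimestamp) {
    return eventTimestamp <= lastTimestamp ? lastTimestamp + 1 : eventTimestamp;
}
```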
bert-e ee814772a7 Merge branch 'w/7.10/improvement/S3C-4034_simplify_soft_limit' into tmp/octopus/w/8.1/improvement/S3C-4034_simplify_soft_limit 2021-02-23 23:21:36 +00:00
bert-e 1c8403552d Merge branch 'w/7.9/improvement/S3C-4034_simplify_soft_limit' into tmp/octopus/w/7.10/improvement/S3C-4034_simplify_soft_limit 2021-02-23 23:21:36 +00:00
bert-e c0d6f9e686 Merge branch 'improvement/S3C-4034_simplify_soft_limit' into tmp/octopus/w/7.9/improvement/S3C-4034_simplify_soft_limit 2021-02-23 23:21:35 +00:00
Taylor McKinnon 72a96ee24e impr(S3C-4034): Simplify soft limit 2021-02-23 15:20:30 -08:00
bert-e 4c016f2838 Merge branch 'w/7.10/bugfix/S3C-4035-increasePodLimits' into tmp/octopus/w/8.1/bugfix/S3C-4035-increasePodLimits 2021-02-23 18:45:10 +00:00
bert-e 1bfb460a72 Merge branch 'bugfix/S3C-4035-increasePodLimits' into tmp/octopus/w/7.10/bugfix/S3C-4035-increasePodLimits 2021-02-23 18:45:10 +00:00
Jonathan Gramain 99574e7305 bugfix: S3C-4035 increase CI worker mem request
Increase the memory request of CI worker pods from 1G to 3G (limit is
still 3G)
2021-02-22 17:43:27 -08:00
bert-e 875b1fed30 Merge branch 'w/7.10/bugfix/S3C-3620-avoidCrashOnRedisError' into tmp/octopus/w/8.1/bugfix/S3C-3620-avoidCrashOnRedisError 2021-02-23 00:18:05 +00:00
bert-e de7f3cc14a Merge branch 'w/7.9/bugfix/S3C-3620-avoidCrashOnRedisError' into tmp/octopus/w/7.10/bugfix/S3C-3620-avoidCrashOnRedisError 2021-02-23 00:18:04 +00:00
Jonathan Gramain e009ad00ed bugfix: S3C-3620 don't raise exception on ioredis client error
When ioredis emits an error (e.g. connection issue), instead of
unconditionally raising it in the RedisClient wrapper, only raise it
if there is at least one listener for the 'error' event.

Some users of RedisClient (the v2 cache and reindex tasks) do not set a
listener on error events, resulting in assertions being raised and a
process crash.
2021-02-22 16:17:15 -08:00
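A minimal sketch of the listener-count guard described above, assuming an EventEmitter-based wrapper (class and property names are illustrative, not the actual RedisClient source):

```js
const EventEmitter = require('events');

class RedisClientWrapper extends EventEmitter {
    constructor(redisClient) {
        super();
        this._redis = redisClient;
        this._redis.on('error', err => {
            // Re-emit only if someone is listening: an 'error' event
            // with no listener makes Node.js throw and crash the process.
            if (this.listenerCount('error') > 0) {
                this.emit('error', err);
            }
        });
    }
}
```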
Jonathan Gramain 7639b3f90a bugfix: S3C-3620 ioredis client error test
Add a unit test that fails when the ioredis client emits an error that
is not caught
2021-02-22 16:17:15 -08:00
bert-e bc58f13b8d Merge branches 'w/8.1/feature/S3C-4033-bump-version' and 'q/938/7.10/feature/S3C-4033-bump-version' into tmp/octopus/q/8.1 2021-02-23 00:02:08 +00:00
bert-e c0419505de Merge branch 'feature/S3C-4033-bump-version' into q/7.10 2021-02-23 00:02:08 +00:00
Thomas Carmet cae286b4ac Merge remote-tracking branch 'origin/feature/S3C-4033-bump-version' into w/8.1/feature/S3C-4033-bump-version 2021-02-22 15:52:39 -08:00
Thomas Carmet b193423801 S3C-4033 bump version on package.json file 2021-02-22 15:51:26 -08:00
bert-e d862258cfd Merge branches 'w/8.1/bugfix/S3C-4023_add_missing_property' and 'q/918/7.10/bugfix/S3C-4023_add_missing_property' into tmp/octopus/q/8.1 2021-02-22 21:39:22 +00:00
bert-e 892cf53f20 Merge branch 'bugfix/S3C-4023_add_missing_property' into q/7.9.0 2021-02-22 21:39:21 +00:00
bert-e 0fa9b0081c Merge branch 'w/7.10/bugfix/S3C-4030_fix_flaky_testGetStorage' into tmp/octopus/w/8.1/bugfix/S3C-4030_fix_flaky_testGetStorage 2021-02-22 19:29:00 +00:00
Taylor McKinnon a0c3595a72 bf(S3C-4030): Fix flaky test 2021-02-22 11:27:44 -08:00
bert-e b257883e4b Merge branch 'w/7.10/bugfix/S3C-4023_add_missing_property' into tmp/octopus/w/8.1/bugfix/S3C-4023_add_missing_property 2021-02-22 17:53:59 +00:00
Taylor McKinnon 13ba6d65e7 bf(S3C-4023): Add missing property in constructor 2021-02-22 09:46:48 -08:00
bert-e 459464a97f Merge branch 'w/7.10/bugfix/S3C-3970_reduce_scanning_of_event_during_query' into tmp/octopus/w/8.1/bugfix/S3C-3970_reduce_scanning_of_event_during_query 2021-02-19 06:44:01 +00:00
Taylor McKinnon b1904e50a0 bf(S3C-3970): Reduce scanning of unrelated events during metric calculation 2021-02-18 22:42:59 -08:00
bert-e 357f8fe1e4 Merge branch 'w/7.10/improvement/S3C-3971_reduce_event_footprint' into tmp/octopus/w/8.1/improvement/S3C-3971_reduce_event_footprint 2021-02-11 00:17:20 +00:00
Taylor McKinnon 1af6532d83 impr(S3C-3971): Allow filtering of pushed event fields 2021-02-10 16:16:36 -08:00
bert-e 2dee952d57 Merge branch 'w/7.9/bugfix/S3C-3940_prevent_negative_metrics' into tmp/octopus/w/8.1/bugfix/S3C-3940_prevent_negative_metrics 2021-02-09 23:02:18 +00:00
Taylor McKinnon 884669b694 bf(S3C-3940): Prevent reporting of negative metrics 2021-02-09 15:01:45 -08:00
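The guard itself is small enough to sketch; this is a hypothetical illustration and the metric names are assumptions:

```js
// Hedged sketch: clamp each computed metric at zero before reporting,
// since replayed or out-of-order events can drive counters negative.
function clampMetrics(metrics) {
    return Object.keys(metrics).reduce((clamped, key) => {
        clamped[key] = Math.max(0, metrics[key]);
        return clamped;
    }, {});
}

// clampMetrics({ storageUtilized: -42, numberOfObjects: 3 })
// => { storageUtilized: 0, numberOfObjects: 3 }
```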
bert-e 71dac22a13 Merge branch 'feature/ZENKO-3110-add-release-stage' into q/8.1 2021-02-06 00:07:12 +00:00
Thomas Carmet 740951dc4b Switching to yarn instead of npm to run utapi 2021-02-05 15:12:20 -08:00
Thomas Carmet 5ef95cbc1e Upgrading nodejs to 10.22 2021-02-05 15:08:00 -08:00
Thomas Carmet 72463f72df ZENKO-3110 setting up release stage for utapi 2021-02-05 14:41:42 -08:00
bert-e bbde7b7644 Merge branch 'bugfix/S3C-3937_handle_phd_in_reindex' into q/7.9 2021-02-05 22:03:02 +00:00
bert-e e424a00bc6 Merge branches 'w/8.1/bugfix/S3C-3937_handle_phd_in_reindex' and 'q/898/7.9/bugfix/S3C-3937_handle_phd_in_reindex' into tmp/octopus/q/8.1 2021-02-05 22:03:02 +00:00
bert-e 0afb78e7ae Merge branch 'bugfix/S3C-3937_handle_phd_in_reindex' into tmp/octopus/w/8.1/bugfix/S3C-3937_handle_phd_in_reindex 2021-02-05 21:01:35 +00:00
Taylor McKinnon f70f7e1a74 bf(S3C-3937): Handle PHD objects returned by bucketd 2021-02-05 12:59:48 -08:00
bert-e df5e11eb9d Merge branch 'bugfix/S3C-3830-fetch-warp10-results' into tmp/octopus/w/8.1/bugfix/S3C-3830-fetch-warp10-results 2021-02-02 23:28:05 +00:00
Taylor McKinnon 947dcad5fb update method for seed generation, remove unused initial token gen 2021-02-02 15:27:53 -08:00
bert-e 97406a81a2 Merge branch 'bugfix/S3C-3830-fetch-warp10-results' into tmp/octopus/w/8.1/bugfix/S3C-3830-fetch-warp10-results 2021-02-02 20:33:41 +00:00
Rahul Padigela f033a58c1b bugfix: S3C-3830 bump warp10 2021-02-02 12:32:00 -08:00
bert-e 684c3389e9 Merge branch 'improvement/S3C-3800_rework_failover' into tmp/octopus/w/8.1/improvement/S3C-3800_rework_failover 2021-01-30 00:27:10 +00:00
Taylor McKinnon 0d2b21038e impr(S3C-3800): Refactor warp 10 failover 2021-01-29 16:26:12 -08:00
bert-e 9b1f55ec76 Merge branch 'feature/S3C-3609_hard_disk_limit' into tmp/octopus/w/8.1/feature/S3C-3609_hard_disk_limit 2021-01-29 20:49:10 +00:00
Taylor McKinnon a74f0dbb9b ft(S3C-3609): Add hard limit 2021-01-29 12:48:42 -08:00
bert-e 56a368d9a6 Merge branch 'feature/S3C-3707_soft_disk_limit' into tmp/octopus/w/8.1/feature/S3C-3707_soft_disk_limit 2021-01-28 22:14:07 +00:00
Taylor McKinnon 25ef0c9d0d ft(S3C-3707): Add disk usage soft limit task 2021-01-28 14:13:00 -08:00
bert-e 32e14b5099 Merge branch 'feature/S3C-3707_fix_tests' into tmp/octopus/w/8.1/feature/S3C-3707_fix_tests 2021-01-27 22:28:15 +00:00
Taylor McKinnon ab523ff579 ft(S3C-3707): Fix functional tests 2021-01-27 14:27:45 -08:00
bert-e 2349cf3791 Merge branches 'w/8.1/feature/S3C-3812_warp10_deletion' and 'q/874/7.9/feature/S3C-3812_warp10_deletion' into tmp/octopus/q/8.1 2021-01-15 23:19:15 +00:00
bert-e a674fab416 Merge branch 'feature/S3C-3812_warp10_deletion' into q/7.9 2021-01-15 23:19:15 +00:00
bert-e 3ca3e7fd32 Merge branch 'feature/S3C-3812_warp10_deletion' into tmp/octopus/w/8.1/feature/S3C-3812_warp10_deletion 2021-01-15 23:15:51 +00:00
Taylor McKinnon 8317d7422e ft(S3C-3812): Add support for deletion in warp 10 client 2021-01-15 15:15:22 -08:00
bert-e bb559f08c8 Merge branch 'bugfix/S3C-3764_upgrade_@senx/warp10' into tmp/octopus/w/8.1/bugfix/S3C-3764_upgrade_@senx/warp10 2021-01-13 23:38:57 +00:00
Taylor McKinnon ffd58af6aa bf(S3C-3764): Update @senx/warp10 2021-01-13 15:38:23 -08:00
bert-e e6eb18cc4c Merge branch 'feature/S3C-3721_monitor_disk_usage_task' into q/7.9 2021-01-12 18:36:40 +00:00
bert-e 0e83b9db55 Merge branches 'w/8.1/feature/S3C-3721_monitor_disk_usage_task' and 'q/871/7.9/feature/S3C-3721_monitor_disk_usage_task' into tmp/octopus/q/8.1 2021-01-12 18:36:40 +00:00
bert-e 58ff50b734 Merge branch 'bugfix/S3C-3725_translate_listBucketMulitpartUploads_in_migrate' into q/7.9 2021-01-12 17:38:45 +00:00
bert-e 3ff4fc9cc1 Merge branches 'w/8.1/bugfix/S3C-3725_translate_listBucketMulitpartUploads_in_migrate' and 'q/869/7.9/bugfix/S3C-3725_translate_listBucketMulitpartUploads_in_migrate' into tmp/octopus/q/8.1 2021-01-12 17:38:45 +00:00
bert-e 22cead971d Merge branch 'feature/S3C-3721_monitor_disk_usage_task' into tmp/octopus/w/8.1/feature/S3C-3721_monitor_disk_usage_task 2021-01-12 01:12:02 +00:00
Taylor McKinnon f7972f1ea1 ft(S3C-3721): Add generic disk usage monitor task 2021-01-11 17:11:30 -08:00
bert-e 98f528cf62 Merge branch 'bugfix/S3C-3725_translate_listBucketMulitpartUploads_in_migrate' into tmp/octopus/w/8.1/bugfix/S3C-3725_translate_listBucketMulitpartUploads_in_migrate 2021-01-11 19:41:28 +00:00
bert-e daf0b2c8d9 Merge branch 'feature/S3C-3524_update_warp10' into tmp/octopus/w/8.1/feature/S3C-3524_update_warp10 2021-01-07 22:13:22 +00:00
Taylor McKinnon 1d8183cf23 ft(S3C-3524): upgrade warp10 to 2.7.2 2021-01-07 14:07:29 -08:00
Taylor McKinnon a8491a3241 Merge remote-tracking branch 'origin/feature/S3C-3771_add_client_tls_support' into w/8.1/feature/S3C-3771_add_client_tls_support 2021-01-05 09:54:52 -08:00
Taylor McKinnon 9182e60d0e ft(S3C-3771): Add support for Utapiv2 client to communicate over TLS 2021-01-05 09:52:45 -08:00
bert-e c8a36e45bb Merge branch 'bugfix/S3C-3767_fix_internal_tls_config' into tmp/octopus/w/8.1/bugfix/S3C-3767_fix_internal_tls_config 2020-12-30 23:09:19 +00:00
Taylor McKinnon c5c4c79afb bf(S3C-3767): Fix internal TLS config for Vault 2020-12-30 15:07:35 -08:00
bert-e 97784eaa70 Merge branch 'bugfix/S3C-3763_batch_ingestion_of_shards' into tmp/octopus/w/8.1/bugfix/S3C-3763_batch_ingestion_of_shards 2020-12-28 19:22:40 +00:00
Taylor McKinnon 0fb3b79e71 bf(S3C-3763): Batch shard ingestion 2020-12-28 11:18:54 -08:00
Taylor McKinnon b2dfe765c8 bf(S3C-3725): translate listBucketMultipartUploads during migration 2020-12-16 13:02:31 -08:00
bert-e cd88e9f4bf Merge branches 'w/8.1/bugfix/S3C-3505_support_bucket_option_in_v1_reindex' and 'q/853/7.9/bugfix/S3C-3505_support_bucket_option_in_v1_reindex' into tmp/octopus/q/8.1 2020-12-15 19:06:43 +00:00
bert-e 558840c55c Merge branch 'bugfix/S3C-3505_support_bucket_option_in_v1_reindex' into q/7.9 2020-12-15 19:06:43 +00:00
bert-e c784f09bc0 Merge branch 'bugfix/S3C-3505_support_bucket_option_in_v1_reindex' into tmp/octopus/w/8.1/bugfix/S3C-3505_support_bucket_option_in_v1_reindex 2020-12-15 18:51:54 +00:00
Taylor McKinnon 1e4b7bd9f2 bf(S3C-3505): Add support for --bucket flag to s3_reindex.py 2020-12-15 10:49:26 -08:00
bert-e d61f9997c8 Merge branch 'bugfix/S3C-3680_fix_mpu_edgecase_in_migration' into tmp/octopus/w/8.1/bugfix/S3C-3680_fix_mpu_edgecase_in_migration 2020-12-11 21:58:07 +00:00
Taylor McKinnon e61655baea bf(S3C-3680): Handle pending MPU edgecase in migration 2020-12-11 13:57:29 -08:00
bert-e d911aadd15 Merge branch 'bugfix/S3C-3696_bump_default_java_max_heap' into q/7.9 2020-12-11 21:41:28 +00:00
bert-e d9555e0038 Merge branches 'w/8.1/bugfix/S3C-3696_bump_default_java_max_heap' and 'q/849/7.9/bugfix/S3C-3696_bump_default_java_max_heap' into tmp/octopus/q/8.1 2020-12-11 21:41:28 +00:00
bert-e e5a814aa13 Merge branch 'bugfix/S3C-3696_bump_default_java_max_heap' into tmp/octopus/w/8.1/bugfix/S3C-3696_bump_default_java_max_heap 2020-12-11 20:15:43 +00:00
Taylor McKinnon adf9ee325f bf(S3C-3696): Bump default java max heap to 4GiB 2020-12-11 12:15:08 -08:00
bert-e acf5bc273c Merge branch 'bugfix/S3C-3689_fix_incorrect_Date.now' into q/7.9 2020-12-11 20:08:14 +00:00
bert-e 1ed1b901c2 Merge branches 'w/8.1/bugfix/S3C-3689_fix_incorrect_Date.now' and 'q/845/7.9/bugfix/S3C-3689_fix_incorrect_Date.now' into tmp/octopus/q/8.1 2020-12-11 20:08:14 +00:00
bert-e 8ebb10c051 Merge branch 'bugfix/S3C-3679_cleanup_closed_redis_client' into q/7.9 2020-12-11 19:53:54 +00:00
bert-e 04a8021fe5 Merge branches 'w/8.1/bugfix/S3C-3679_cleanup_closed_redis_client' and 'q/841/7.9/bugfix/S3C-3679_cleanup_closed_redis_client' into tmp/octopus/q/8.1 2020-12-11 19:53:54 +00:00
bert-e d80bd66387 Merge branch 'bugfix/S3C-3679_cleanup_closed_redis_client' into tmp/octopus/w/8.1/bugfix/S3C-3679_cleanup_closed_redis_client 2020-12-11 19:49:20 +00:00
Taylor McKinnon 1caa33cf9e remove usage of -ci variant of warp 10 2020-12-09 14:21:41 -08:00
bert-e 4678fdae05 Merge branch 'bugfix/S3C-3689_fix_incorrect_Date.now' into tmp/octopus/w/8.1/bugfix/S3C-3689_fix_incorrect_Date.now 2020-12-08 19:46:35 +00:00
Taylor McKinnon 304bca04c7 bf(S3C-3689): Fix InterpolatedClock 2020-12-08 11:46:01 -08:00
Taylor McKinnon c05d2dbaf6 bf(S3C-3679): cleanup redis client on disconnect 2020-12-06 10:31:08 -08:00
bert-e 0814b59fda Merge branch 'bugfix/S3C-3553_improve_redis_reconnection_logic' into tmp/octopus/w/7.9/bugfix/S3C-3553_improve_redis_reconnection_logic 2020-11-18 23:53:16 +00:00
bert-e 3f7ea3e121 Merge branch 'w/7.9/bugfix/S3C-3553_improve_redis_reconnection_logic' into tmp/octopus/w/8.1/bugfix/S3C-3553_improve_redis_reconnection_logic 2020-11-18 23:53:16 +00:00
Taylor McKinnon d17e5545b3 bf(S3C-3553): Prevent leaking timeout event handlers 2020-11-18 11:46:30 -08:00
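The leak pattern this fixes can be sketched as follows (the helper is hypothetical; the point is that the timer and the listener clean each other up):

```js
// Hedged sketch: whichever fires first removes the other side, so
// repeated calls cannot accumulate stale timeout handlers.
function onceWithTimeout(emitter, event, timeoutMs, callback) {
    const onEvent = (...args) => {
        clearTimeout(timer);
        callback(null, ...args);
    };
    const timer = setTimeout(() => {
        emitter.removeListener(event, onEvent); // the leak being prevented
        callback(new Error(`timed out waiting for '${event}'`));
    }, timeoutMs);
    emitter.once(event, onEvent);
}
```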
bert-e 3d9d949b05 Merge branch 'bugfix/S3C-3516_fix_authz_for_account_lvl_metrics' into q/7.9 2020-11-12 18:16:20 +00:00
bert-e f136d7d994 Merge branches 'w/8.1/bugfix/S3C-3516_fix_authz_for_account_lvl_metrics' and 'q/823/7.9/bugfix/S3C-3516_fix_authz_for_account_lvl_metrics' into tmp/octopus/q/8.1 2020-11-12 18:16:20 +00:00
bert-e d36af35db5 Merge branch 'improvement/S3C-3520_drop_hex_encoding_for_warp10_data' into q/7.9 2020-11-11 23:50:04 +00:00
bert-e a1e6c4d11a Merge branches 'w/8.1/improvement/S3C-3520_drop_hex_encoding_for_warp10_data' and 'q/828/7.9/improvement/S3C-3520_drop_hex_encoding_for_warp10_data' into tmp/octopus/q/8.1 2020-11-11 23:50:04 +00:00
bert-e 97014cf67b Merge branch 'bugfix/S3C-3516_fix_authz_for_account_lvl_metrics' into tmp/octopus/w/8.1/bugfix/S3C-3516_fix_authz_for_account_lvl_metrics 2020-11-11 23:49:42 +00:00
Taylor McKinnon 61cc5de8b5 bf(S3C-3516): Rework authz and accountId conversion 2020-11-11 15:49:12 -08:00
bert-e c8ac4cf688 Merge branch 'improvement/S3C-3520_drop_hex_encoding_for_warp10_data' into tmp/octopus/w/8.1/improvement/S3C-3520_drop_hex_encoding_for_warp10_data 2020-11-11 22:08:42 +00:00
Taylor McKinnon eb785bf3b3 impr(S3C-3520): Remove intermediate hex encoding for warp10 datapoints 2020-11-11 14:03:12 -08:00
bert-e 4554828a52 Merge branch 'bugfix/S3C-3514_add_missing_logline' into tmp/octopus/w/8.1/bugfix/S3C-3514_add_missing_logline 2020-11-09 23:46:53 +00:00
Taylor McKinnon 64c2f7307a bf(S3C-3514): Add missing call to responseLoggerMiddleware in errorMiddleware 2020-11-09 15:46:21 -08:00
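A sketch of the Express-style shape this implies; both middleware signatures are assumptions based only on the commit message:

```js
// Hedged sketch: the error middleware terminates the request itself, so
// it must invoke the response logger explicitly, otherwise failed
// requests would never appear in the logs.
function errorMiddleware(err, req, res, next) {
    responseLoggerMiddleware(req, res, () =>
        res.status(err.statusCode || 500).json({ error: err.message }));
}
```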
bert-e 4caa7f5641 Merge branch 'w/7.9/feature/S3C-3484_extend_warp10_image' into tmp/octopus/w/8.1/feature/S3C-3484_extend_warp10_image 2020-10-28 22:06:32 +00:00
bert-e 92ffbcc3d7 Merge branch 'w/7.9/feature/S3C-3484_extend_warp10_image' into tmp/octopus/w/8.1/feature/S3C-3484_extend_warp10_image 2020-10-28 19:10:21 +00:00
bert-e 1197733b17 Merge branch 'w/7.9/bugfix/S3C-3485_fix_listMetrics_handler' into tmp/octopus/w/8.1/bugfix/S3C-3485_fix_listMetrics_handler 2020-10-27 23:39:12 +00:00
bert-e 78bc6290b2 Merge branch 'w/7.9/bugfix/S3C-3483_adjust_default_schedules' into tmp/octopus/w/8.1/bugfix/S3C-3483_adjust_default_schedules 2020-10-27 20:21:53 +00:00
bert-e 124744b562 Merge branches 'w/8.1/bugfix/S3C-3426_fix_user_creds_support' and 'q/785/7.9/bugfix/S3C-3426_fix_user_creds_support' into tmp/octopus/q/8.1 2020-10-23 05:47:28 +00:00
bert-e 4a806da678 Merge branches 'w/8.1/bugfix/S3C-3446_convert_account_to_canonical_id' and 'q/794/7.9/bugfix/S3C-3446_convert_account_to_canonical_id' into tmp/octopus/q/8.1 2020-10-23 05:33:40 +00:00
bert-e 3f4f34976c Merge branch 'w/7.9/bugfix/S3C-3446_convert_account_to_canonical_id' into tmp/octopus/w/8.1/bugfix/S3C-3446_convert_account_to_canonical_id 2020-10-23 00:37:32 +00:00
bert-e 74b8c91244 Merge branches 'w/8.1/feature/S3C-3010_add_migrate_task' and 'q/769/7.9/feature/S3C-3010_add_migrate_task' into tmp/octopus/q/8.1 2020-10-23 00:23:34 +00:00
bert-e df4e96132c Merge branch 'w/7.9/feature/S3C-3010_add_migrate_task' into tmp/octopus/w/8.1/feature/S3C-3010_add_migrate_task 2020-10-21 23:45:37 +00:00
bert-e ffe3ece284 Merge branch 'w/7.9/bugfix/S3C-3426_fix_user_creds_support' into tmp/octopus/w/8.1/bugfix/S3C-3426_fix_user_creds_support 2020-10-21 23:42:51 +00:00
bert-e d6d53eed8a Merge branch 'w/7.9/bugfix/S3C-3447_switch_to_node_schedule' into tmp/octopus/w/8.1/bugfix/S3C-3447_switch_to_node_schedule 2020-10-21 21:15:29 +00:00
bert-e e3a6844fc5 Merge branch 'w/7.9/bugfix/S3C-3438_add_missing_reindex_schema' into tmp/octopus/w/8.1/bugfix/S3C-3438_add_missing_reindex_schema 2020-10-19 22:30:06 +00:00
bert-e 58c73db7c3 Merge branch 'w/7.9/bugfix/S3C-2576-update-vaultclient' into tmp/octopus/w/8.1/bugfix/S3C-2576-update-vaultclient 2020-10-13 14:48:06 +00:00
bert-e 9a7ea1e564 Merge branch 'w/7.9/bugfix/S3C-3322_bump_vaultclient' into tmp/octopus/w/8.1/bugfix/S3C-3322_bump_vaultclient 2020-10-12 20:56:10 +00:00
bert-e 3a6e0a4c40 Merge branch 'w/7.9/bugfix/S3C-3424_remove_creds_from_client' into tmp/octopus/w/8.1/bugfix/S3C-3424_remove_creds_from_client 2020-10-08 23:17:10 +00:00
bert-e 62498fa330 Merge branches 'w/8.1/feature/S3C-3423_add_client_ip_limiting_middleware' and 'q/741/7.9/feature/S3C-3423_add_client_ip_limiting_middleware' into tmp/octopus/q/8.1 2020-10-08 20:54:35 +00:00
bert-e e9a252f3c4 Merge branch 'w/7.9/feature/S3C-3423_add_client_ip_limiting_middleware' into tmp/octopus/w/8.1/feature/S3C-3423_add_client_ip_limiting_middleware 2020-10-08 20:50:22 +00:00
bert-e 1b1d1ce35c Merge branch 'w/7.9/feature/S3C-3418_change_on_wire_ingestion_format_json' into tmp/octopus/w/8.1/feature/S3C-3418_change_on_wire_ingestion_format_json 2020-10-07 22:32:37 +00:00
bert-e 0b308adf07 Merge branches 'w/8.1/bugfix/S3C-3307_handle_deleted_buckets_in_reindex_port' and 'q/726/7.9/bugfix/S3C-3307_handle_deleted_buckets_in_reindex_port' into tmp/octopus/q/8.1 2020-10-06 22:55:41 +00:00
bert-e 73735f0f27 Merge branch 'w/7.9/bugfix/S3C-3307_handle_deleted_buckets_in_reindex_port' into tmp/octopus/w/8.1/bugfix/S3C-3307_handle_deleted_buckets_in_reindex_port 2020-10-06 21:19:03 +00:00
bert-e 7b1ed984e8 Merge branches 'w/8.1/bugfix/S3C-3307_handle_deleted_buckets_in_reindex' and 'q/695/7.9/bugfix/S3C-3307_handle_deleted_buckets_in_reindex' into tmp/octopus/q/8.1 2020-10-06 19:08:10 +00:00
bert-e 0adb775a1c Merge branch 'w/7.9/bugfix/S3C-3308_fix_mpuShadowBucket_counters_for_v2' into tmp/octopus/w/8.1/bugfix/S3C-3308_fix_mpuShadowBucket_counters_for_v2 2020-10-05 17:42:07 +00:00
bert-e a64b25c26e Merge branch 'w/7.9/bugfix/S3C-3307_handle_deleted_buckets_in_reindex' into tmp/octopus/w/8.1/bugfix/S3C-3307_handle_deleted_buckets_in_reindex 2020-10-05 17:36:57 +00:00
bert-e 392636df76 Merge branch 'w/7.8/bugfix/S3C-3376_remove_requirement_for_access_key' into tmp/octopus/w/8.1/bugfix/S3C-3376_remove_requirement_for_access_key 2020-10-01 18:14:21 +00:00
bert-e 32920a5a97 Merge branches 'w/8.1/feature/S3C-3382_add_configuration_for_ingestion_speed' and 'q/698/7.8/feature/S3C-3382_add_configuration_for_ingestion_speed' into tmp/octopus/q/8.1 2020-10-01 17:25:48 +00:00
bert-e b875c72451 Merge branches 'w/8.1/bugfix/S3C-3361_bump_v2_toggle_timeout' and 'q/702/7.8/bugfix/S3C-3361_bump_v2_toggle_timeout' into tmp/octopus/q/8.1 2020-09-30 00:11:03 +00:00
bert-e abbbad9f5f Merge branch 'bugfix/S3C-3361_bump_v2_toggle_timeout' into tmp/octopus/w/8.1/bugfix/S3C-3361_bump_v2_toggle_timeout 2020-09-29 21:43:20 +00:00
bert-e 4028b265f3 Merge branch 'feature/S3C-3382_add_configuration_for_ingestion_speed' into tmp/octopus/w/8.1/feature/S3C-3382_add_configuration_for_ingestion_speed 2020-09-29 21:33:15 +00:00
bert-e 6051cada33 Merge branch 'w/7.8/bugfix/S3C-3308_fix_mpuShadowBucket_counters' into tmp/octopus/w/8.1/bugfix/S3C-3308_fix_mpuShadowBucket_counters 2020-09-28 20:05:41 +00:00
bert-e dd1ef6860e Merge branch 'bugfix/S3C-3363_add_missing_logger_for_replay' into tmp/octopus/w/8.1/bugfix/S3C-3363_add_missing_logger_for_replay 2020-09-24 20:54:47 +00:00
bert-e 33c1af303d Merge branches 'w/8.1/bugfix/S3C-3362_fix_reindex_default_schedule' and 'q/683/7.8/bugfix/S3C-3362_fix_reindex_default_schedule' into tmp/octopus/q/8.1 2020-09-24 02:53:34 +00:00
bert-e 3d79444672 Merge branches 'w/8.1/feature/S3C-3324_redis_backed_routes' and 'q/668/7.8/feature/S3C-3324_redis_backed_routes' into tmp/octopus/q/8.1 2020-09-24 02:53:08 +00:00
bert-e 1bd45ffd68 Merge branch 'bugfix/S3C-3362_fix_reindex_default_schedule' into tmp/octopus/w/8.1/bugfix/S3C-3362_fix_reindex_default_schedule 2020-09-24 01:10:25 +00:00
bert-e f19018f9e7 Merge branch 'feature/S3C-3324_redis_backed_routes' into tmp/octopus/w/8.1/feature/S3C-3324_redis_backed_routes 2020-09-24 00:48:00 +00:00
bert-e 87658b4351 Merge branches 'w/8.1/bugfix/S3C-1997_fix_ioredis_failover' and 'q/671/7.8/bugfix/S3C-1997_fix_ioredis_failover' into tmp/octopus/q/8.1 2020-09-24 00:07:29 +00:00
bert-e 91eed09651 Merge branch 'bugfix/S3C-1997_fix_ioredis_failover' into tmp/octopus/w/8.1/bugfix/S3C-1997_fix_ioredis_failover 2020-09-24 00:03:21 +00:00
bert-e 728d54501b Merge branches 'w/8.1/bugfix/S3C-3358_fix_metrics_resp_for_single_resource' and 'q/674/7.8/bugfix/S3C-3358_fix_metrics_resp_for_single_resource' into tmp/octopus/q/8.1 2020-09-23 18:20:27 +00:00
bert-e 911cfb3c36 Merge branch 'bugfix/S3C-3358_fix_metrics_resp_for_single_resource' into tmp/octopus/w/8.1/bugfix/S3C-3358_fix_metrics_resp_for_single_resource 2020-09-23 18:06:30 +00:00
bert-e 5d7ed8520f Merge branches 'w/8.1/feature/S3C-3351_cache_node_deps_backport' and 'q/659/7.8/feature/S3C-3351_cache_node_deps_backport' into tmp/octopus/q/8.1 2020-09-23 17:23:14 +00:00
Taylor McKinnon dd118e4a44 Merge remote-tracking branch 'origin/feature/S3C-3286_add_reindex_task' into w/8.1/feature/S3C-3286_add_reindex_task 2020-09-17 11:57:37 -07:00
bert-e 3076eaf115 Merge branch 'w/7.8/feature/S3C-3351_cache_node_deps_backport' into tmp/octopus/w/8.1/feature/S3C-3351_cache_node_deps_backport 2020-09-17 18:47:55 +00:00
bert-e fcd74b5707 Merge branches 'w/8.1/feature/S3C-3351_cache_node_deps' and 'q/656/7.8/feature/S3C-3351_cache_node_deps' into tmp/octopus/q/8.1 2020-09-17 17:20:16 +00:00
bert-e ad52015f73 Merge branch 'feature/S3C-3351_cache_node_deps' into tmp/octopus/w/8.1/feature/S3C-3351_cache_node_deps 2020-09-17 17:11:39 +00:00
bert-e 60d0dc794d Merge branches 'w/8.1/feature/S3C-3324_add_counter_backend_to_cache' and 'q/651/7.8/feature/S3C-3324_add_counter_backend_to_cache' into tmp/octopus/q/8.1 2020-09-16 17:21:13 +00:00
bert-e 14db2c93ce Merge branch 'feature/S3C-3324_add_counter_backend_to_cache' into tmp/octopus/w/8.1/feature/S3C-3324_add_counter_backend_to_cache 2020-09-11 22:28:44 +00:00
bert-e 4d4906dc94 Merge branch 'improvement/S3C-3325-support-bucket-notification-apis-utapi' into tmp/octopus/w/8.1/improvement/S3C-3325-support-bucket-notification-apis-utapi 2020-09-09 18:59:27 +00:00
bert-e 7a9ce9bf2c Merge branches 'w/8.1/feature/S3C-3269_add_tls_support' and 'q/642/7.8/feature/S3C-3269_add_tls_support' into tmp/octopus/q/8.1 2020-09-04 22:24:54 +00:00
bert-e d203a94367 Merge branch 'feature/S3C-3269_add_tls_support' into tmp/octopus/w/8.1/feature/S3C-3269_add_tls_support 2020-09-04 21:52:16 +00:00
Taylor McKinnon bea0a4607a Merge remote-tracking branch 'origin/bugfix/S3C-3087_bump_version' into w/8.1/bugfix/S3C-3087_bump_version 2020-08-24 12:05:39 -07:00
bert-e 047aa4f8eb Merge branch 'feature/S3C-3265_warp10_failover' into tmp/octopus/w/8.1/feature/S3C-3265_warp10_failover 2020-08-15 02:37:51 +00:00
bert-e 4aae68208c Merge branch 'bugfix/S3C-3264_fix_sentinel_parsing' into tmp/octopus/w/8.1/bugfix/S3C-3264_fix_sentinel_parsing 2020-08-14 18:46:21 +00:00
bert-e 0b7781d3b8 Merge branches 'w/8.1/bugfix/S3C-3255_ensure_ingest_timestamp_precision' and 'q/624/7.8/bugfix/S3C-3255_ensure_ingest_timestamp_precision' into tmp/octopus/q/8.1 2020-08-13 21:46:34 +00:00
bert-e 2a2f818745 Merge branch 'bugfix/S3C-3255_ensure_ingest_timestamp_precision' into tmp/octopus/w/8.1/bugfix/S3C-3255_ensure_ingest_timestamp_precision 2020-08-13 20:57:20 +00:00
bert-e 8064180984 Merge branch 'feature/S3C-3003_add_repair_process' into tmp/octopus/w/8.1/feature/S3C-3003_add_repair_process 2020-08-13 19:55:43 +00:00
bert-e 6e096c9d39 Merge branches 'w/8.1/bugfix/S3C-3242_rework_redis_config' and 'q/614/7.8/bugfix/S3C-3242_rework_redis_config' into tmp/octopus/q/8.1 2020-08-11 17:30:55 +00:00
bert-e 86285d1e45 Merge branches 'w/8.1/bugfix/S3C-3240_fix_tasks_exports' and 'q/611/7.8/bugfix/S3C-3240_fix_tasks_exports' into tmp/octopus/q/8.1 2020-08-10 23:43:30 +00:00
bert-e 8ea9a5dc0a Merge branches 'w/8.1/bugfix/S3C-3243_fix_warp10_token_config' and 'q/609/7.8/bugfix/S3C-3243_fix_warp10_token_config' into tmp/octopus/q/8.1 2020-08-10 23:29:27 +00:00
bert-e 390f1bb3c1 Merge branch 'w/7.8/bugfix/S3C-3240_fix_tasks_exports' into tmp/octopus/w/8.1/bugfix/S3C-3240_fix_tasks_exports 2020-08-10 23:23:15 +00:00
bert-e 3b2fc1b045 Merge branch 'w/7.8/bugfix/S3C-3242_rework_redis_config' into tmp/octopus/w/8.1/bugfix/S3C-3242_rework_redis_config 2020-08-10 23:18:17 +00:00
bert-e 58f20049f3 Merge branch 'bugfix/S3C-3243_fix_warp10_token_config' into tmp/octopus/w/8.1/bugfix/S3C-3243_fix_warp10_token_config 2020-08-10 23:13:09 +00:00
bert-e db05aaf2a3 Merge branch 'bugfix/S3C-3241_remove_warp10_workaround' into tmp/octopus/w/8.1/bugfix/S3C-3241_remove_warp10_workaround 2020-08-10 23:11:48 +00:00
bert-e 16c43c202b Merge branch 'feature/S3C-3235_fix_getMetricsAt_with_no_data_81' into q/8.1 2020-08-07 19:08:16 +00:00
bert-e 90beac2fa7 Merge branch 'w/7.8/feature/S3C-3230_add_authv4_support' into tmp/octopus/w/8.1/feature/S3C-3230_add_authv4_support 2020-08-05 23:03:09 +00:00
bert-e 14446a10c2 Merge branch 'w/7.8/bugfix/S3C-3235_fix_getMetricsAt_with_no_data' into tmp/octopus/w/8.1/bugfix/S3C-3235_fix_getMetricsAt_with_no_data 2020-08-04 20:04:49 +00:00
bert-e 8e1417ad6b Merge branch 'w/7.8/feature/S3C-3020_add_functional_test_for_snapshot_task' into tmp/octopus/w/8.1/feature/S3C-3020_add_functional_test_for_snapshot_task 2020-08-03 21:49:18 +00:00
bert-e ae29a7d346 Merge branch 'w/7.8/feature/S3C-3020_add_functional_test_for_checkpoint_task' into tmp/octopus/w/8.1/feature/S3C-3020_add_functional_test_for_checkpoint_task 2020-08-03 21:42:48 +00:00
bert-e 646f921ded Merge branch 'w/7.8/feature/S3C-3020_add_functional_test_ingest_task' into tmp/octopus/w/8.1/feature/S3C-3020_add_functional_test_ingest_task 2020-08-03 19:39:54 +00:00
bert-e 31ff2aa63b Merge branch 'w/7.8/feature/S3C-3007_Add_listMetrics_handler' into tmp/octopus/w/8.1/feature/S3C-3007_Add_listMetrics_handler 2020-08-03 19:30:40 +00:00
bert-e 37f6b4ddc5 Merge branch 'w/7.8/feature/S3C-3132-utapi-v2-push-metric' into tmp/octopus/w/8.1/feature/S3C-3132-utapi-v2-push-metric 2020-07-31 20:37:59 +00:00
bert-e d043d8dcae Merge branch 'w/7.8/feature/S3C-3006_add_metric_calculation_macro' into tmp/octopus/w/8.1/feature/S3C-3006_add_metric_calculation_macro 2020-07-29 21:52:13 +00:00
bert-e 6dbc500fa9 Merge branch 'w/7.8/feature/S3C-3196-update-node' into tmp/octopus/w/8.1/feature/S3C-3196-update-node 2020-07-27 21:55:34 +00:00
bert-e 8e1550d61a Merge branch 'w/7.8/feature/S3C-3196-update-node' into tmp/octopus/w/8.1/feature/S3C-3196-update-node 2020-07-27 07:40:11 +00:00
bert-e 4a811d7e86 Merge branch 'w/7.8/feature/S3C-3196-update-node' into tmp/octopus/w/8.1/feature/S3C-3196-update-node 2020-07-25 03:28:19 +00:00
bert-e 1225f45805 Merge branch 'w/7.8/feature/S3C-3020_add_lag_flag_to_task' into tmp/octopus/w/8.1/feature/S3C-3020_add_lag_flag_to_task 2020-07-20 21:40:30 +00:00
bert-e 81e5c9e98c Merge branch 'w/7.8/feature/S3C-3020_add_snapshot_creation_task' into tmp/octopus/w/8.1/feature/S3C-3020_add_snapshot_creation_task 2020-07-20 20:21:06 +00:00
bert-e cdb9ef06d1 Merge branch 'w/7.8/feature/S3C-3002_add_checkpoint_creation_task' into tmp/octopus/w/8.1/feature/S3C-3002_add_checkpoint_creation_task 2020-07-20 19:21:38 +00:00
bert-e 186807e798 Merge branch 'w/7.8/feature/S3C-3001_add_redis_to_warp10_task' into tmp/octopus/w/8.1/feature/S3C-3001_add_redis_to_warp10_task 2020-07-16 17:21:16 +00:00
bert-e 25ffbe3bbc Merge branch 'w/7.8/feature/S3C-3001_Add_shard_tracking_to_redis_backend' into tmp/octopus/w/8.1/feature/S3C-3001_Add_shard_tracking_to_redis_backend 2020-07-13 19:06:58 +00:00
bert-e 77f8fc4b11 Merge branch 'w/7.8/feature/S3C-3001_usec_resolution_for_shards' into tmp/octopus/w/8.1/feature/S3C-3001_usec_resolution_for_shards 2020-07-13 00:25:33 +00:00
bert-e a02fd4830c Merge branch 'w/7.8/feature/S3C-3005_add_ingest_api' into tmp/octopus/w/8.1/feature/S3C-3005_add_ingest_api 2020-07-07 17:54:27 +00:00
bert-e aa314c5ed9 Merge branch 'w/7.8/feature/S3C-3008_Add_cloudserver_client' into tmp/octopus/w/8.1/feature/S3C-3008_Add_cloudserver_client 2020-07-01 18:58:57 +00:00
bert-e 51858fe41a Merge branch 'w/7.8/feature/S3C-3004_Add_http_server' into tmp/octopus/w/8.1/feature/S3C-3004_Add_http_server 2020-06-29 19:45:46 +00:00
Taylor McKinnon 70a79537fe Merge remote-tracking branch 'origin/w/7.8/feature/S3C-3004_Add_http_server' into w/8.1/feature/S3C-3004_Add_http_server 2020-06-28 21:10:12 -07:00
bert-e b335675a36 Merge branch 'w/7.8/feature/S3C-3004_Add_process_absraction' into tmp/octopus/w/8.1/feature/S3C-3004_Add_process_absraction 2020-06-19 19:36:52 +00:00
bert-e 707620acf7 Merge branch 'w/7.8/feature/S3C-3004_Add_config_loading' into tmp/octopus/w/8.1/feature/S3C-3004_Add_config_loading 2020-06-18 23:47:50 +00:00
bert-e 30306f3dce Merge branch 'w/7.8/feature/S3C-3004_Add_stub_openapi' into tmp/octopus/w/8.1/feature/S3C-3004_Add_stub_openapi 2020-06-18 22:26:29 +00:00
bert-e d82623014d Merge branch 'w/7.8/feature/S3C-3000_Add_warp10_client' into tmp/octopus/w/8.1/feature/S3C-3000_Add_warp10_client 2020-06-18 20:31:03 +00:00
bert-e 1c5c011699 Merge branch 'w/7.8/feature/S3C-2999_add_redis_cache_client' into tmp/octopus/w/8.1/feature/S3C-2999_add_redis_cache_client 2020-06-18 20:26:15 +00:00
bert-e 282a55c724 Merge branch 'w/7.8/feature/S3C-2960-object-lock-metrics' into tmp/octopus/w/8.1/feature/S3C-2960-object-lock-metrics 2020-06-16 17:33:22 +00:00
Taylor McKinnon 90c8f49222 Merge remote-tracking branch 'origin/w/7.8/feature/S3C-3041_Add_v2_toggle' into w/8.1/feature/S3C-3041_Add_v2_toggle 2020-06-10 17:40:37 -07:00
Taylor McKinnon 2d0c104cc2 Merge remote-tracking branch 'origin/w/7.8/bugfix/S3C-3043_bump_scality_guidelines' into w/8.1/bugfix/S3C-3043_bump_scality_guidelines 2020-06-10 15:10:04 -07:00
bert-e 72bda734bf Merge branch 'w/7.8/bugfix/S3C-3023_bump_warp10_version' into tmp/octopus/w/8.1/bugfix/S3C-3023_bump_warp10_version 2020-06-02 23:09:36 +00:00
bert-e a2e8fe51b4 Merge branch 'feature/S3C-2878_Add_warp10_Dockerfile' into tmp/octopus/w/8.1/feature/S3C-2878_Add_warp10_Dockerfile 2020-05-26 19:49:42 +00:00
bert-e b494f1e85c Merge branch 'w/7.7/bugfix/S3C-2408/mpu-overwrite' into tmp/octopus/w/8.1/bugfix/S3C-2408/mpu-overwrite 2020-04-22 02:20:17 +00:00
bert-e af1a01b692 Merge branch 'w/8.0/improvement/S3C-2808-clean-upstream' into tmp/octopus/w/8.1/improvement/S3C-2808-clean-upstream 2020-04-21 23:59:47 +00:00
Rahul Padigela 1f0f7d91ff Merge remote-tracking branch 'origin/w/7.7/improvement/S3C-2808-clean-upstream' into w/8.0/improvement/S3C-2808-clean-upstream 2020-04-21 16:59:04 -07:00
Taylor McKinnon 5c6386e33d Merge remote-tracking branch 'origin/w/8.0/bugfix/S3C-2603_Update_utapi_reindex' into w/8.1/bugfix/S3C-2603_Update_utapi_reindex 2020-04-06 12:34:17 -07:00
Taylor McKinnon 0bf2a533f5 Merge remote-tracking branch 'origin/w/7.7/bugfix/S3C-2603_Update_utapi_reindex' into w/8.0/bugfix/S3C-2603_Update_utapi_reindex 2020-04-06 12:33:13 -07:00
bert-e c59323952d Merge branch 'w/8.0/bugfix/S3C-2604-listMultipleBucketMetrics' into tmp/octopus/w/8.1/bugfix/S3C-2604-listMultipleBucketMetrics 2020-02-26 09:30:18 +00:00
bert-e b05d8c5528 Merge branch 'w/7.7/bugfix/S3C-2604-listMultipleBucketMetrics' into tmp/octopus/w/8.0/bugfix/S3C-2604-listMultipleBucketMetrics 2020-02-26 09:30:17 +00:00
bert-e a5430ba8a8 Merge branch 'w/8.0/bugfix/S3C-2604-list-multiple-bucket-metrics' into tmp/octopus/w/8.1/bugfix/S3C-2604-list-multiple-bucket-metrics 2020-02-25 19:24:23 +00:00
bert-e cc0087c3ba Merge branch 'w/7.7/bugfix/S3C-2604-list-multiple-bucket-metrics' into tmp/octopus/w/8.0/bugfix/S3C-2604-list-multiple-bucket-metrics 2020-02-25 19:24:23 +00:00
bert-e 6a45a13ab4 Merge branch 'w/8.0/bugfix/S3C-2475/utapi_response_correction' into tmp/octopus/w/8.1/bugfix/S3C-2475/utapi_response_correction 2020-02-05 19:34:07 +00:00
bert-e 3dd760835f Merge branch 'w/7.7/bugfix/S3C-2475/utapi_response_correction' into tmp/octopus/w/8.0/bugfix/S3C-2475/utapi_response_correction 2020-02-05 19:34:07 +00:00
Flavien Lebarbé 2720bdb096 Merge branch 'development/8.0' into development/8.1 2019-11-11 19:15:36 +01:00
Flavien Lebarbé 54390f82ba Merge branch 'development/7.6' into development/8.0 2019-11-11 19:15:09 +01:00
Katherine Laue 5ccb8d03be Update yarn.lock 2019-09-11 15:41:50 -07:00
Katherine Laue 9fdad30ca0 improvement/S3C-2365 update vaultclient dep 2019-09-11 15:41:27 -07:00
Katherine Laue 3180aa2d02 update yarn.lock 2019-09-11 15:40:50 -07:00
Katherine Laue 60e4ed7880 remove yarn.lock 2019-09-11 15:40:50 -07:00
Katherine Laue d77d15c7dd update yarn.lock 2019-09-11 15:40:50 -07:00
Katherine Laue dc34912298 improvement/S3C-2364 install yarn frozen lockfile 2019-09-11 15:40:50 -07:00
Katherine Laue 4a845e80cd improvement/S3C-2364 migrate package manager to yarn 2019-09-11 15:40:50 -07:00
bbuchanan9 b56405f031 improvement: S3C-234 Operation counters config 2019-09-11 15:40:50 -07:00
bbuchanan9 4d6fd39693 bugfix: S3C-2342 BucketD listing functional tests 2019-09-11 15:38:11 -07:00
bbuchanan9 b3a3383289 bugfix: S3C-2317 Add uuid module as a dependency 2019-09-11 15:38:11 -07:00
bbuchanan9 196acf9fc8 bugfix: S3C-2342 Add bucket listing pagination 2019-09-11 15:38:11 -07:00
bbuchanan9 347ac8faf1 bugfix: S3C-2315 Support versioning with reindex 2019-09-11 15:38:11 -07:00
bbuchanan9 a62c22f06d improvement: S3C-2337 Parallelize tests 2019-09-11 15:38:11 -07:00
bbuchanan9 d65b9a65ee bugfix: S3C-2317 Append UUID to sorted set members 2019-09-11 15:38:11 -07:00
bert-e 9533009100 Merge branch 'w/7.5/improvement/S3C-2337-parallelize-tests' into tmp/octopus/w/8.0/improvement/S3C-2337-parallelize-tests 2019-07-19 17:12:55 +00:00
bert-e d336997813 Merge branch 'w/7.5/bugfix/S3C-2317/use-uuid' into tmp/octopus/w/8.0/bugfix/S3C-2317/use-uuid 2019-07-19 01:22:33 +00:00
Katherine Laue 166d2c06cf Merge remote-tracking branch 'origin/w/7.5/improvement/S3C-2332-update-vaultclient' into w/8.0/improvement/S3C-2332-update-vaultclient 2019-07-16 13:52:39 -07:00
bbuchanan9 9042956610 improvement: S3C-2314 Update Scality dependencies 2019-07-15 13:54:34 -07:00
bert-e 4f754e26f9 Merge branch 'improvement/S3C-2314/update-scality-dependencies' into tmp/octopus/w/8.0/improvement/S3C-2314/update-scality-dependencies 2019-07-15 20:29:07 +00:00
bbuchanan9 dfb7a83b2a Merge remote-tracking branch 'origin/w/7.5/bugfix/S3C-2322/incorrect-expire-TTL-config-field' into w/8.0/bugfix/S3C-2322/incorrect-expire-TTL-config-field 2019-07-12 17:22:14 -07:00
Katherine Laue e8ac66ff09 Merge remote-tracking branch 'origin/w/7.5/improvement/S3C-2290-upgrade-nodejs' into w/8.0/improvement/S3C-2290-upgrade-nodejs 2019-07-08 14:39:56 -07:00
bert-e 1919808c09 Merge branch 'w/7.5/feature/S3C-2273/maintenance-testing-for-utapi-reindexer' into tmp/octopus/w/8.0/feature/S3C-2273/maintenance-testing-for-utapi-reindexer 2019-06-25 20:24:37 +00:00
bert-e 46f62388cd Merge branch 'w/7.5/feature/S3C-2260-maintenance-testing-for-utapi-reindexer' into tmp/octopus/w/8.0/feature/S3C-2260-maintenance-testing-for-utapi-reindexer 2019-06-19 17:58:13 +00:00
bert-e 894f37750f Merge branch 'w/7.5/bugfix/S3C-2019-reindex-script-redis-authentication' into tmp/octopus/w/8.0/bugfix/S3C-2019-reindex-script-redis-authentication 2019-06-07 04:28:53 +00:00
bert-e a990c743af Merge branch 'w/7.5/bugfix/S3C-2019-redis-sentinel-password' into tmp/octopus/w/8.0/bugfix/S3C-2019-redis-sentinel-password 2019-06-06 05:34:54 +00:00
bert-e 3a3083c379 Merge branch 'w/7.5/bugfix/S3C-2076/update-default-reindex-schedule' into tmp/octopus/w/8.0/bugfix/S3C-2076/update-default-reindex-schedule 2019-06-05 18:48:28 +00:00
bert-e 39b4b8b623 Merge branch 'w/7.5/bugfix/S3C-2076/add-utapi-reindex' into tmp/octopus/w/8.0/bugfix/S3C-2076/add-utapi-reindex 2019-06-05 04:45:30 +00:00
bert-e c5165a0338 Merge branches 'w/8.0/improvement/S3C-2034-bump-ioredis-version' and 'q/249/7.5/improvement/S3C-2034-bump-ioredis-version' into tmp/octopus/q/8.0 2019-05-21 17:38:37 +00:00
bert-e ef56d39193 Merge branch 'w/7.5/improvement/S3C-2034-bump-ioredis-version' into tmp/octopus/w/8.0/improvement/S3C-2034-bump-ioredis-version 2019-05-20 21:49:40 +00:00
bert-e da7144389d Merge branch 'w/7.5/bugfix/S3C-2195/upload-copy-part-metrics' into tmp/octopus/w/8.0/bugfix/S3C-2195/upload-copy-part-metrics 2019-05-20 20:44:58 +00:00
bert-e d2020f8190 Merge branches 'w/8.0/bugfix/S3C-2155/allow-range-into-the-future' and 'q/242/7.5/bugfix/S3C-2155/allow-range-into-the-future' into tmp/octopus/q/8.0 2019-05-10 00:03:41 +00:00
bert-e 27ef9dfa33 Merge branch 'w/7.5/bugfix/S3C-2155/allow-range-into-the-future' into tmp/octopus/w/8.0/bugfix/S3C-2155/allow-range-into-the-future 2019-05-09 23:17:31 +00:00
bert-e fae26f0933 Merge branches 'w/8.0/bugfix/S3C-2105/push-backbeat-metrics' and 'q/236/7.5/bugfix/S3C-2105/push-backbeat-metrics' into tmp/octopus/q/8.0 2019-05-09 23:09:10 +00:00
bert-e 270591bf23 Merge branch 'w/7.5/bugfix/S3C-1506/start-end-reducer-values' into tmp/octopus/w/8.0/bugfix/S3C-1506/start-end-reducer-values 2019-05-09 17:01:16 +00:00
bert-e 12fa8b567c Merge branch 'w/7.5/bugfix/S3C-2105/push-backbeat-metrics' into tmp/octopus/w/8.0/bugfix/S3C-2105/push-backbeat-metrics 2019-05-08 23:04:06 +00:00
bbuchanan9 fac88a209f Merge remote-tracking branch 'origin/w/7.5/bugfix/S3C-1506/add-long-range-request-ft-tests' into w/8.0/bugfix/S3C-1506/add-long-range-request-ft-tests 2019-05-02 15:34:43 -07:00
bert-e ef2c350724 Merge branch 'w/7.5/bugfix/S3C-2155/time-range-validation' into tmp/octopus/w/8.0/bugfix/S3C-2155/time-range-validation 2019-05-01 23:15:42 +00:00
bert-e 46bb81e9f8 Merge branch 'w/7.5/bugfix/S3C-2155/time-range-validation' into tmp/octopus/w/8.0/bugfix/S3C-2155/time-range-validation 2019-05-01 23:08:25 +00:00
bert-e 829369d37b Merge branch 'w/7.5/bugfix/S3C-1506/prevent-heap-memory-issue' into tmp/octopus/w/8.0/bugfix/S3C-1506/prevent-heap-memory-issue 2019-04-30 20:07:06 +00:00
bert-e b5def9cb54 Merge branch 'w/7.5/improvement/S3C-2140/do-not-track-dump.rbd' into tmp/octopus/w/8.0/improvement/S3C-2140/do-not-track-dump.rbd 2019-04-26 20:13:05 +00:00
bert-e 2b514a618e Merge branch 'w/7.5/feature/S3C-2133/add-eve-support' into tmp/octopus/w/8.0/feature/S3C-2133/add-eve-support 2019-04-26 17:25:50 +00:00
bbuchanan9 4f119ea917 documentation: S3C-2070 Update README
* Remove outdated CI badge
* Update links
* Update component name
* Fix typos
* Redefine CLI input fields
2019-04-04 14:58:35 -07:00
anurag4dsb 608fddb4bd
Merge remote-tracking branch 'origin/feature/S3C-1561-getStorageUsedForAccountQuotas' into w/8.0/feature/S3C-1561-getStorageUsedForAccountQuotas 2019-01-24 11:25:17 -08:00
Rahul Padigela f2f1d0c742 improvement: reply with Arsenal errors to the client
Without replying with Arsenal-style errors, the lib breaks the contract and
causes an exception on the cloudserver
2018-08-31 16:27:33 -07:00
Dora Korpar 6d0c8dd1c0 bf: ZENKO 676 - only location metrics 2018-07-06 13:17:05 -07:00
bert-e cd3324df87 Merge branch 'bugfix/dependencies' into tmp/octopus/w/8.0/bugfix/dependencies 2018-06-29 14:07:22 +00:00
David Pineau 4664ee3cca Merge remote-tracking branch 'origin/development/7.4' into development/8.0 2018-06-28 19:45:42 +02:00
David Pineau a00aa6f05f Merge remote-tracking branch 'origin/development/7.4' into development/8.0 2018-06-28 14:58:45 +02:00
bert-e 4b646285d2 Merge branch 'feature/ZENKO-142-location-quota-metric' into q/8.0 2018-06-27 17:27:55 +00:00
bert-e e77bcc8e72 Merge branch 'feature/S3C-1212-expire-metrics' into tmp/octopus/w/8.0/feature/S3C-1212-expire-metrics 2018-06-26 22:10:37 +00:00
Rahul Padigela e3511ee7ef Merge remote-tracking branch 'origin/development/7.4' into improvement/port-7.4 2018-06-26 14:55:42 -07:00
Dora Korpar fc634ee028 ft: ZENKO 142 Location quota metrics 2018-06-26 14:44:35 -07:00
Rahul Padigela 4c776b3eb5
Merge pull request #177 from scality/ft/ZENKO-465-utapi-docker-image
ft: ZENKO 465 Utapi docker image
2018-06-07 14:10:00 -07:00
Dora Korpar 33024215e3 ft: ZENKO 465 Utapi docker image 2018-06-07 10:37:20 -07:00
Dora Korpar 4965d96f5c
Merge pull request #175 from scality/ft/ZENKO-386-utapi-service-accounts
ft: ZENKO 386 zenko utapi integration
2018-06-04 14:10:32 -07:00
Dora Korpar 0bfd8a66fb ft: ZENKO 386 zenko utapi integration 2018-05-31 11:59:40 -07:00
Rahul Padigela a8a8ad42ff chore: update version and dependencies 2018-05-31 11:23:56 -07:00
Rahul Padigela 8e11d15893
Merge pull request #174 from scality/fwdport/7.4-beta-master
Fwdport/7.4 beta master
2018-04-23 00:15:16 -07:00
Rahul Padigela bf1cbe4bf4 Merge remote-tracking branch 'origin/rel/7.4-beta' into fwdport/7.4-beta-master 2018-04-23 00:12:38 -07:00
Rahul Padigela a4ab00ad92
Merge pull request #173 from scality/fwdport/7.4-7.4-beta
Fwdport/7.4 7.4 beta
2018-04-19 11:04:48 -07:00
Rahul Padigela 6c4e7aedce Merge remote-tracking branch 'origin/rel/7.4' into fwdport/7.4-7.4-beta 2018-04-19 11:01:53 -07:00
Stefano Maffulli b27c57bcfc
Merge pull request #172 from scality/FT/addIssueTemplate
FT: ZNC-26: add issue template
2018-04-12 15:45:21 -07:00
LaureVergeron 1fda068967 FT: ZNC-26: add issue template 2018-04-11 11:11:17 +02:00
Rahul Padigela 18bf5bb00e
Merge pull request #170 from scality/fwdport/7.4-beta-master
Fwdport/7.4 beta master
2018-03-27 15:33:52 -07:00
Alexander Chan 6de529b8b4 fix dependencies 2018-03-20 08:15:38 -07:00
Alexander Chan ec3efcb9af Merge remote-tracking branch 'origin/rel/7.4-beta' into fwdport/7.4-beta-master 2018-03-19 16:05:52 -07:00
Rahul Padigela d77f8cc46c ft: update version 2018-03-14 13:30:18 -07:00
Rahul Padigela 7487555957
Merge pull request #141 from scality/ft/add-example-python-request-script
FT: Add python request example
2018-03-05 11:13:03 -08:00
Bennett Buchanan 7fbddc071b FT: Add python request example 2018-03-05 11:10:56 -08:00
ironman-machine 6d708d54d0 merge #160 2018-02-09 14:14:25 +00:00
Rayene Ben Rayana 6ab610b27f ft: add eve ci support 2018-02-01 16:48:06 -08:00
157 changed files with 7036 additions and 8639 deletions

@@ -1,8 +1,18 @@
 {
     "extends": "scality",
     "env": {
         "es6": true
     },
+    "parserOptions": {
+        "ecmaVersion": 9
+    },
     "rules": {
         "no-underscore-dangle": "off",
-        "implicit-arrow-linebreak" : "off"
+        "implicit-arrow-linebreak" : "off",
+        "import/extensions": 0,
+        "prefer-spread": 0,
+        "no-param-reassign": 0,
+        "array-callback-return": 0
+    },
+    "settings": {
+        "import/resolver": {

.github/ISSUE_TEMPLATE.md (vendored, new file, +87 lines)

@@ -0,0 +1,87 @@
# General support information
GitHub Issues are **reserved** for actionable bug reports (including
documentation inaccuracies), and feature requests.
**All questions** (regarding configuration, usecases, performance, community,
events, setup and usage recommendations, among other things) should be asked on
the **[Zenko Forum](http://forum.zenko.io/)**.
> Questions opened as GitHub issues will systematically be closed, and moved to
> the [Zenko Forum](http://forum.zenko.io/).
--------------------------------------------------------------------------------
## Avoiding duplicates
When reporting a new issue/requesting a feature, make sure that we do not have
any duplicates already open:
- search the issue list for this repository (use the search bar, select
"Issues" on the left pane after searching);
- if there is a duplicate, please do not open your issue, and add a comment
to the existing issue instead.
--------------------------------------------------------------------------------
## Bug report information
(delete this section (everything between the lines) if you're not reporting a
bug but requesting a feature)
### Description
Briefly describe the problem you are having in a few paragraphs.
### Steps to reproduce the issue
Please provide steps to reproduce, including full log output
### Actual result
Describe the results you received
### Expected result
Describe the results you expected
### Additional information
- Node.js version,
- Docker version,
- npm version,
- distribution/OS,
- optional: anything else you deem helpful to us.
--------------------------------------------------------------------------------
## Feature Request
(delete this section (everything between the lines) if you're not requesting
a feature but reporting a bug)
### Proposal
Describe the feature
### Current behavior
What currently happens
### Desired behavior
What you would like to happen
### Usecase
Please provide usecases for changing the current behavior
### Additional information
- Is this request for your company? Y/N
- If Y: Company name:
- Are you using any Scality Enterprise Edition products (RING, Zenko EE)? Y/N
- Are you willing to contribute this feature yourself?
- Position/Title:
- How did you hear about us?
--------------------------------------------------------------------------------

.github/docker/redis-replica/Dockerfile (vendored, new file, +14 lines)

@@ -0,0 +1,14 @@
# Creating this image for the CI as GitHub Actions
# is unable to overwrite the entrypoint
ARG REDIS_IMAGE="redis:latest"
FROM ${REDIS_IMAGE}
ENV REDIS_LISTEN_PORT 6380
ENV REDIS_MASTER_HOST redis
ENV REDIS_MASTER_PORT_NUMBER 6379
ENTRYPOINT redis-server \
--port ${REDIS_LISTEN_PORT} \
--slaveof ${REDIS_MASTER_HOST} ${REDIS_MASTER_PORT_NUMBER}

.github/docker/vault/Dockerfile (vendored, new file, +7 lines)

@@ -0,0 +1,7 @@
FROM ghcr.io/scality/vault:c2607856
ENV VAULT_DB_BACKEND LEVELDB
RUN chmod 400 tests/utils/keyfile
ENTRYPOINT yarn start

@@ -17,6 +17,6 @@ if [ -z "$SETUP_CMD" ]; then
 SETUP_CMD="start"
 fi
-UTAPI_INTERVAL_TEST_MODE=$1 npm $SETUP_CMD 2>&1 | tee -a "/artifacts/setup_$2.log" &
+UTAPI_INTERVAL_TEST_MODE=$1 npm $SETUP_CMD 2>&1 | tee -a "setup_$2.log" &
 bash tests/utils/wait_for_local_port.bash $PORT 40
-UTAPI_INTERVAL_TEST_MODE=$1 npm run $2 | tee -a "/artifacts/test_$2.log"
+UTAPI_INTERVAL_TEST_MODE=$1 npm run $2 | tee -a "test_$2.log"

.github/workflows/build-ci.yaml (vendored, new file, +65 lines)

@@ -0,0 +1,65 @@
name: build-ci-images
on:
  workflow_call:
jobs:
  warp10-ci:
    uses: scality/workflows/.github/workflows/docker-build.yaml@v2
    secrets:
      REGISTRY_LOGIN: ${{ github.repository_owner }}
      REGISTRY_PASSWORD: ${{ github.token }}
    with:
      name: warp10-ci
      context: .
      file: images/warp10/Dockerfile
      lfs: true
  redis-ci:
    uses: scality/workflows/.github/workflows/docker-build.yaml@v2
    secrets:
      REGISTRY_LOGIN: ${{ github.repository_owner }}
      REGISTRY_PASSWORD: ${{ github.token }}
    with:
      name: redis-ci
      context: .
      file: images/redis/Dockerfile
  redis-replica-ci:
    uses: scality/workflows/.github/workflows/docker-build.yaml@v2
    needs:
      - redis-ci
    secrets:
      REGISTRY_LOGIN: ${{ github.repository_owner }}
      REGISTRY_PASSWORD: ${{ github.token }}
    with:
      name: redis-replica-ci
      context: .github/docker/redis-replica
      build-args: |
        REDIS_IMAGE=ghcr.io/${{ github.repository }}/redis-ci:${{ github.sha }}
  vault-ci:
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          lfs: true
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to GitHub Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ github.token }}
      - name: Build and push vault Image
        uses: docker/build-push-action@v5
        with:
          push: true
          context: .github/docker/vault
          tags: ghcr.io/${{ github.repository }}/vault-ci:${{ github.sha }}
          cache-from: type=gha,scope=vault
          cache-to: type=gha,mode=max,scope=vault

.github/workflows/build-dev.yaml (vendored, new file, +16 lines)

@@ -0,0 +1,16 @@
name: build-dev-image
on:
  push:
    branches-ignore:
      - 'development/**'
jobs:
  build-dev:
    uses: scality/workflows/.github/workflows/docker-build.yaml@v2
    secrets:
      REGISTRY_LOGIN: ${{ github.repository_owner }}
      REGISTRY_PASSWORD: ${{ github.token }}
    with:
      namespace: ${{ github.repository_owner }}
      name: ${{ github.event.repository.name }}

.github/workflows/release-warp10.yaml (vendored, new file, +39 lines)

@@ -0,0 +1,39 @@
name: release-warp10
on:
  workflow_dispatch:
    inputs:
      tag:
        type: string
        description: 'Tag to be released'
        required: true
      create-github-release:
        type: boolean
        description: Create a tag and matching Github release.
        required: false
        default: true
jobs:
  build:
    uses: scality/workflows/.github/workflows/docker-build.yaml@v2
    secrets: inherit
    with:
      name: warp10
      context: .
      file: images/warp10/Dockerfile
      tag: ${{ github.event.inputs.tag }}
      lfs: true
  release:
    if: ${{ inputs.create-github-release }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: softprops/action-gh-release@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          name: Release utapi/warp10:${{ github.event.inputs.tag }}-warp10
          tag_name: ${{ github.event.inputs.tag }}-warp10
          generate_release_notes: false
          target_commitish: ${{ github.sha }}

.github/workflows/release.yaml (vendored, new file, +45 lines)

@@ -0,0 +1,45 @@
name: release
on:
  workflow_dispatch:
    inputs:
      dockerfile:
        description: Dockerfile to build image from
        type: choice
        options:
          - images/nodesvc-base/Dockerfile
          - Dockerfile
        required: true
      tag:
        type: string
        description: 'Tag to be released'
        required: true
      create-github-release:
        type: boolean
        description: Create a tag and matching Github release.
        required: false
        default: false
jobs:
  build:
    uses: scality/workflows/.github/workflows/docker-build.yaml@v2
    with:
      namespace: ${{ github.repository_owner }}
      name: ${{ github.event.repository.name }}
      context: .
      file: ${{ github.event.inputs.dockerfile }}
      tag: ${{ github.event.inputs.tag }}
  release:
    if: ${{ inputs.create-github-release }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: softprops/action-gh-release@v2
        env:
          GITHUB_TOKEN: ${{ github.token }}
        with:
          name: Release ${{ github.event.inputs.tag }}
          tag_name: ${{ github.event.inputs.tag }}
          generate_release_notes: true
          target_commitish: ${{ github.sha }}

.github/workflows/tests.yaml (vendored, new file, +361 lines)

@@ -0,0 +1,361 @@
---
name: tests
on:
  push:
    branches-ignore:
      - 'development/**'
  workflow_dispatch:
    inputs:
      debug:
        description: Debug (enable the ability to SSH to runners)
        type: boolean
        required: false
        default: 'false'
      connection-timeout-m:
        type: number
        required: false
        description: Timeout for ssh connection to worker (minutes)
        default: 30
jobs:
  build-ci:
    uses: ./.github/workflows/build-ci.yaml
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          lfs: true
      - uses: actions/setup-node@v4
        with:
          node-version: '16.13.2'
          cache: yarn
      - name: install dependencies
        run: yarn install --frozen-lockfile --network-concurrency 1
      - name: run static analysis tools on markdown
        run: yarn run lint_md
      - name: run static analysis tools on code
        run: yarn run lint
  tests-v1:
    needs:
      - build-ci
    runs-on: ubuntu-latest
    env:
      REINDEX_PYTHON_INTERPRETER: python3
    name: ${{ matrix.test.name }}
    strategy:
      fail-fast: false
      matrix:
        test:
          - name: run unit tests
            command: yarn test
            env:
              UTAPI_METRICS_ENABLED: 'true'
          - name: run v1 client tests
            command: bash ./.github/scripts/run_ft_tests.bash false ft_test:client
            env: {}
          - name: run v1 server tests
            command: bash ./.github/scripts/run_ft_tests.bash false ft_test:server
            env: {}
          - name: run v1 cron tests
            command: bash ./.github/scripts/run_ft_tests.bash false ft_test:cron
            env: {}
          - name: run v1 interval tests
            command: bash ./.github/scripts/run_ft_tests.bash true ft_test:interval
            env: {}
    services:
      redis:
        image: ghcr.io/${{ github.repository }}/redis-ci:${{ github.sha }}
        ports:
          - 6379:6379
          - 9121:9121
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis-replica:
        image: ghcr.io/${{ github.repository }}/redis-replica-ci:${{ github.sha }}
        ports:
          - 6380:6380
        options: >-
          --health-cmd "redis-cli -p 6380 ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis-sentinel:
        image: bitnami/redis-sentinel:7.2.4
        env:
          REDIS_MASTER_SET: scality-s3
          REDIS_SENTINEL_PORT_NUMBER: '16379'
          REDIS_SENTINEL_QUORUM: '1'
        ports:
          - 16379:16379
        options: >-
          --health-cmd "redis-cli -p 16379 ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      warp10:
        image: ghcr.io/${{ github.repository }}/warp10-ci:${{ github.sha }}
        env:
          standalone.port: '4802'
          warpscript.maxops: '10000000'
          ENABLE_SENSISION: 't'
        options: >-
          --health-cmd "curl localhost:4802/api/v0/check"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
          --health-start-period 60s
        ports:
          - 4802:4802
          - 8082:8082
          - 9718:9718
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          lfs: true
      - uses: actions/setup-node@v4
        with:
          node-version: '16.13.2'
          cache: yarn
      - uses: actions/setup-python@v5
        with:
          python-version: '3.9'
          cache: pip
      - name: Install python deps
        run: pip install -r requirements.txt
      - name: install dependencies
        run: yarn install --frozen-lockfile --network-concurrency 1
      - name: ${{ matrix.test.name }}
        run: ${{ matrix.test.command }}
        env: ${{ matrix.test.env }}
  tests-v2-with-vault:
    needs:
      - build-ci
    runs-on: ubuntu-latest
    env:
      REINDEX_PYTHON_INTERPRETER: python3
    services:
      redis:
        image: ghcr.io/${{ github.repository }}/redis-ci:${{ github.sha }}
        ports:
          - 6379:6379
          - 9121:9121
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis-replica:
        image: ghcr.io/${{ github.repository }}/redis-replica-ci:${{ github.sha }}
        ports:
          - 6380:6380
        options: >-
          --health-cmd "redis-cli -p 6380 ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis-sentinel:
        image: bitnami/redis-sentinel:7.2.4
        env:
          REDIS_MASTER_SET: scality-s3
          REDIS_SENTINEL_PORT_NUMBER: '16379'
          REDIS_SENTINEL_QUORUM: '1'
        ports:
          - 16379:16379
        options: >-
          --health-cmd "redis-cli -p 16379 ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      warp10:
        image: ghcr.io/${{ github.repository }}/warp10-ci:${{ github.sha }}
        env:
          standalone.port: '4802'
          warpscript.maxops: '10000000'
          ENABLE_SENSISION: 't'
        ports:
          - 4802:4802
          - 8082:8082
          - 9718:9718
        options: >-
          --health-cmd "curl localhost:4802/api/v0/check"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
          --health-start-period 60s
      vault:
        image: ghcr.io/${{ github.repository }}/vault-ci:${{ github.sha }}
        ports:
          - 8500:8500
          - 8600:8600
          - 8700:8700
          - 8800:8800
        options: >-
          --health-cmd "curl http://localhost:8500/_/healthcheck"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          lfs: true
      - uses: actions/setup-node@v4
        with:
          node-version: '16.13.2'
          cache: yarn
      - uses: actions/setup-python@v5
        with:
          python-version: '3.9'
          cache: pip
      - name: Install python deps
        run: pip install -r requirements.txt
      - name: install dependencies
        run: yarn install --frozen-lockfile --network-concurrency 1
      - name: Wait for warp10 for 60 seconds
        run: sleep 60
      - name: run v2 functional tests
        run: bash ./.github/scripts/run_ft_tests.bash true ft_test:v2
        env:
          UTAPI_CACHE_BACKEND: redis
          UTAPI_SERVICE_USER_ENABLED: 'true'
          UTAPI_LOG_LEVEL: trace
          SETUP_CMD: "run start_v2:server"
      - name: 'Debug: SSH to runner'
        uses: scality/actions/action-ssh-to-runner@1.7.0
        timeout-minutes: ${{ fromJSON(github.event.inputs.connection-timeout-m) }}
        continue-on-error: true
        with:
          tmate-server-host: ${{ secrets.TMATE_SERVER_HOST }}
          tmate-server-port: ${{ secrets.TMATE_SERVER_PORT }}
          tmate-server-rsa-fingerprint: ${{ secrets.TMATE_SERVER_RSA_FINGERPRINT }}
          tmate-server-ed25519-fingerprint: ${{ secrets.TMATE_SERVER_ED25519_FINGERPRINT }}
        if: ${{ ( github.event.inputs.debug == true || github.event.inputs.debug == 'true' ) }}
  tests-v2-without-sensision:
    needs:
      - build-ci
    runs-on: ubuntu-latest
    env:
      REINDEX_PYTHON_INTERPRETER: python3
    name: ${{ matrix.test.name }}
    strategy:
      fail-fast: false
      matrix:
        test:
          - name: run v2 soft limit test
            command: bash ./.github/scripts/run_ft_tests.bash true ft_test:softLimit
            env:
              UTAPI_CACHE_BACKEND: redis
              UTAPI_LOG_LEVEL: trace
              SETUP_CMD: "run start_v2:server"
          - name: run v2 hard limit test
            command: bash ./.github/scripts/run_ft_tests.bash true ft_test:hardLimit
            env:
              UTAPI_CACHE_BACKEND: redis
UTAPI_LOG_LEVEL: trace
SETUP_CMD: "run start_v2:server"
services:
redis:
image: ghcr.io/${{ github.repository }}/redis-ci:${{ github.sha }}
ports:
- 6379:6379
- 9121:9121
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis-replica:
image: ghcr.io/${{ github.repository }}/redis-replica-ci:${{ github.sha }}
ports:
- 6380:6380
options: >-
--health-cmd "redis-cli -p 6380 ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis-sentinel:
image: bitnami/redis-sentinel:7.2.4
env:
REDIS_MASTER_SET: scality-s3
REDIS_SENTINEL_PORT_NUMBER: '16379'
REDIS_SENTINEL_QUORUM: '1'
ports:
- 16379:16379
options: >-
--health-cmd "redis-cli -p 16379 ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
warp10:
image: ghcr.io/${{ github.repository }}/warp10-ci:${{ github.sha }}
env:
standalone.port: '4802'
warpscript.maxops: '10000000'
ports:
- 4802:4802
- 8082:8082
- 9718:9718
options: >-
--health-cmd "curl localhost:4802/api/v0/check"
--health-interval 10s
--health-timeout 5s
--health-retries 10
--health-start-period 60s
vault:
image: ghcr.io/${{ github.repository }}/vault-ci:${{ github.sha }}
ports:
- 8500:8500
- 8600:8600
- 8700:8700
- 8800:8800
options: >-
--health-cmd "curl http://localhost:8500/_/healthcheck"
--health-interval 10s
--health-timeout 5s
--health-retries 10
steps:
- name: Checkout
uses: actions/checkout@v4
with:
lfs: true
- uses: actions/setup-node@v4
with:
node-version: '16.13.2'
cache: yarn
- uses: actions/setup-python@v5
with:
python-version: '3.9'
cache: pip
- name: Install python deps
run: pip install -r requirements.txt
- name: install dependencies
run: yarn install --frozen-lockfile --network-concurrency 1
- name: Wait for warp10 for 60 seconds
run: sleep 60
- name: ${{ matrix.test.name }}
run: ${{ matrix.test.command }}
env: ${{ matrix.test.env }}
- name: 'Debug: SSH to runner'
uses: scality/actions/action-ssh-to-runner@1.7.0
timeout-minutes: ${{ fromJSON(github.event.inputs.connection-timeout-m) }}
continue-on-error: true
with:
tmate-server-host: ${{ secrets.TMATE_SERVER_HOST }}
tmate-server-port: ${{ secrets.TMATE_SERVER_PORT }}
tmate-server-rsa-fingerprint: ${{ secrets.TMATE_SERVER_RSA_FINGERPRINT }}
tmate-server-ed25519-fingerprint: ${{ secrets.TMATE_SERVER_ED25519_FINGERPRINT }}
if: ${{ ( github.event.inputs.debug == true || github.event.inputs.debug == 'true' ) }}

Dockerfile (new file)

@ -0,0 +1,31 @@
FROM node:16.13.2-buster-slim
WORKDIR /usr/src/app
COPY package.json yarn.lock /usr/src/app/
RUN apt-get update \
&& apt-get install -y \
curl \
gnupg2
RUN curl -sS http://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb http://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update \
&& apt-get install -y jq git python3 build-essential yarn --no-install-recommends \
&& yarn cache clean \
&& yarn install --frozen-lockfile --production --ignore-optional --network-concurrency=1 \
&& apt-get autoremove --purge -y python3 git build-essential \
&& rm -rf /var/lib/apt/lists/* \
&& yarn cache clean \
&& rm -rf ~/.node-gyp \
&& rm -rf /tmp/yarn-*
# Keep the .git directory in order to properly report version
COPY . /usr/src/app
ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]
CMD [ "yarn", "start" ]
EXPOSE 8100


@ -3,9 +3,8 @@
![Utapi logo](res/utapi-logo.png)
[![Circle CI][badgepub]](https://circleci.com/gh/scality/utapi)
[![Scality CI][badgepriv]](http://ci.ironmann.io/gh/scality/utapi)
Service Utilization API for tracking resource usage and metrics reporting
Service Utilization API for tracking resource usage and metrics reporting.
## Design
@ -88,13 +87,13 @@ Server is running.
1. Create an IAM user
```
aws iam --endpoint-url <endpoint> create-user --user-name utapiuser
aws iam --endpoint-url <endpoint> create-user --user-name <user-name>
```
2. Create access key for the user
```
aws iam --endpoint-url <endpoint> create-access-key --user-name utapiuser
aws iam --endpoint-url <endpoint> create-access-key --user-name <user-name>
```
3. Define a managed IAM policy
@ -203,12 +202,11 @@ Server is running.
5. Attach user to the managed policy
```
aws --endpoint-url <endpoint> iam attach-user-policy --user-name utapiuser
--policy-arn <policy arn>
aws --endpoint-url <endpoint> iam attach-user-policy --user-name
<user-name> --policy-arn <policy arn>
```
Now the user `utapiuser` has access to ListMetrics request in Utapi on all
buckets.
Now the user has access to ListMetrics request in Utapi on all buckets.
### Signing request with Auth V4
@ -224,16 +222,18 @@ following urls for reference.
You may also view examples making a request with Auth V4 using various languages
and AWS SDKs [here](/examples).
Alternatively, you can use a nifty command line tool available in Scality's S3.
Alternatively, you can use a nifty command line tool available in Scality's
CloudServer.
You can git clone S3 repo from here https://github.com/scality/S3.git and follow
the instructions in README to install the dependencies.
You can git clone the CloudServer repo from here
https://github.com/scality/cloudserver and follow the instructions in the README
to install the dependencies.
If you have S3 running inside a docker container you can docker exec into the S3
container as
If you have CloudServer running inside a docker container you can docker exec
into the CloudServer container as
```
docker exec -it <container id> bash
docker exec -it <container-id> bash
```
and then run the command
@ -271,7 +271,7 @@ Usage: list_metrics [options]
-v, --verbose
```
A typical call to list metrics for a bucket `demo` to Utapi in a https enabled
An example call to list metrics for a bucket `demo` to Utapi in a https enabled
deployment would be
```
@ -283,7 +283,7 @@ Both start and end times are time expressed as UNIX epoch timestamps **expressed
in milliseconds**.
Keep in mind, since Utapi metrics are normalized to the nearest 15 min.
interval, so start time and end time need to be in specific format as follows.
interval, start time and end time need to be in the specific format as follows.
#### Start time
@ -297,7 +297,7 @@ Date: Tue Oct 11 2016 17:35:25 GMT-0700 (PDT)
Unix timestamp (milliseconds): 1476232525320
Here's a typical JS method to get start timestamp
Here's an example JS method to get a start timestamp
```javascript
function getStartTimestamp(t) {
@ -317,7 +317,7 @@ seconds and milliseconds set to 59 and 999 respectively. So valid end timestamps
would look something like `09:14:59:999`, `09:29:59:999`, `09:44:59:999` and
`09:59:59:999`.
Here's a typical JS method to get end timestamp
Here's an example JS method to get an end timestamp
```javascript
function getEndTimestamp(t) {
@ -342,4 +342,3 @@ In order to contribute, please follow the
https://github.com/scality/Guidelines/blob/master/CONTRIBUTING.md).
[badgepub]: http://circleci.com/gh/scality/utapi.svg?style=svg
[badgepriv]: http://ci.ironmann.io/gh/scality/utapi.svg?style=svg

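The hunks above keep only the opening line of each helper; the README's function bodies fall outside the diff context. As a minimal sketch (not necessarily the upstream implementation), the described 15-minute normalization can be written as:

```javascript
// Sketch reconstructing the elided README helpers from the stated rules:
// start snaps down to the quarter hour; end is 1 ms before the next quarter.
function getStartTimestamp(t) {
    const time = new Date(t);
    const minutes = time.getMinutes();
    return time.setMinutes(minutes - (minutes % 15), 0, 0); // e.g. 09:00:00.000
}

function getEndTimestamp(t) {
    const time = new Date(t);
    const minutes = time.getMinutes();
    return time.setMinutes(minutes - (minutes % 15) + 15, 0, -1); // e.g. 09:14:59.999
}
```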

@ -1,12 +1,13 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'CreateCheckpoint',
});
const task = new tasks.CreateCheckpoint();
const task = new tasks.CreateCheckpoint({ warp10: [warp10Clients[0]] });
task.setup()
.then(() => logger.info('Starting checkpoint creation'))


@ -1,11 +1,12 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'CreateSnapshot',
});
const task = new tasks.CreateSnapshot();
const task = new tasks.CreateSnapshot({ warp10: [warp10Clients[0]] });
task.setup()
.then(() => logger.info('Starting snapshot creation'))

bin/diskUsage.js (new file)

@ -0,0 +1,15 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'MonitorDiskUsage',
});
const task = new tasks.MonitorDiskUsage({ warp10: [warp10Clients[0]] });
task.setup()
.then(() => logger.info('Starting disk usage monitor'))
.then(() => task.start())
.then(() => logger.info('Disk usage monitor started'));

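The runners above (and the ones that follow) share one construction pattern: require the shared Warp 10 client list and inject it into the task. A condensed sketch of that shape, with `TaskName` as a placeholder for CreateCheckpoint, CreateSnapshot, MonitorDiskUsage, and the rest:

```javascript
// Generic shape of the bin/ runners; TaskName is a placeholder, not a real task.
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');

const logger = new LoggerContext({ task: 'TaskName' });

// Single-writer tasks receive only the first (master) Warp 10 client;
// IngestShard below receives the full list instead.
const task = new tasks.TaskName({ warp10: [warp10Clients[0]] });

task.setup()
    .then(() => logger.info('Starting TaskName'))
    .then(() => task.start())
    .then(() => logger.info('TaskName started'));
```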

@ -1,14 +0,0 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const logger = new LoggerContext({
task: 'Downsample',
});
const task = new tasks.DownsampleTask();
task.setup()
.then(() => logger.info('Starting Downsample daemon'))
.then(() => task.start())
.then(() => logger.info('Downsample started'));

bin/ensureServiceUser (new executable file)

@ -0,0 +1,276 @@
#! /usr/bin/env node
// TODO
// - deduplicate with Vault's seed script at https://github.com/scality/Vault/pull/1627
// - add permission boundaries to user when https://scality.atlassian.net/browse/VAULT-4 is implemented
const { errors } = require('arsenal');
const program = require('commander');
const werelogs = require('werelogs');
const async = require('async');
const { IAM } = require('aws-sdk');
const { version } = require('../package.json');
const systemPrefix = '/scality-internal/';
function generateUserPolicyDocument() {
return {
Version: '2012-10-17',
Statement: {
Effect: 'Allow',
Action: 'utapi:ListMetrics',
Resource: 'arn:scality:utapi:::*/*',
},
};
}
function createIAMClient(opts) {
return new IAM({
endpoint: opts.iamEndpoint,
});
}
function needsCreation(v) {
if (Array.isArray(v)) {
return !v.length;
}
return !v;
}
class BaseHandler {
constructor(serviceName, iamClient, log) {
this.serviceName = serviceName;
this.iamClient = iamClient;
this.log = log;
}
applyWaterfall(values, done) {
this.log.debug('applyWaterfall', { values, type: this.resourceType });
const v = values[this.resourceType];
if (needsCreation(v)) {
this.log.debug('creating', { v, type: this.resourceType });
return this.create(values)
.then(res =>
done(null, Object.assign(values, {
[this.resourceType]: res,
})))
.catch(done);
}
this.log.debug('conflicts check', { v, type: this.resourceType });
if (this.conflicts(v)) {
return done(errors.EntityAlreadyExists.customizeDescription(
`${this.resourceType} ${this.serviceName} already exists and conflicts with the expected value.`));
}
this.log.debug('nothing to do', { v, type: this.resourceType });
return done(null, values);
}
}
class UserHandler extends BaseHandler {
get resourceType() {
return 'user';
}
collect() {
return this.iamClient.getUser({
UserName: this.serviceName,
})
.promise()
.then(res => res.User);
}
create(allResources) {
return this.iamClient.createUser({
UserName: this.serviceName,
Path: systemPrefix,
})
.promise()
.then(res => res.User);
}
conflicts(u) {
return u.Path !== systemPrefix;
}
}
class PolicyHandler extends BaseHandler {
get resourceType() {
return 'policy';
}
collect() {
return this.iamClient.listPolicies({
MaxItems: 100,
OnlyAttached: false,
Scope: 'All',
})
.promise()
.then(res => res.Policies.find(p => p.PolicyName === this.serviceName));
}
create(allResources) {
const doc = generateUserPolicyDocument();
return this.iamClient.createPolicy({
PolicyName: this.serviceName,
PolicyDocument: JSON.stringify(doc),
Path: systemPrefix,
})
.promise()
.then(res => res.Policy);
}
conflicts(p) {
return p.Path !== systemPrefix;
}
}
class PolicyAttachmentHandler extends BaseHandler {
get resourceType() {
return 'policyAttachment';
}
collect() {
return this.iamClient.listAttachedUserPolicies({
UserName: this.serviceName,
MaxItems: 100,
})
.promise()
.then(res => res.AttachedPolicies)
}
create(allResources) {
return this.iamClient.attachUserPolicy({
PolicyArn: allResources.policy.Arn,
UserName: this.serviceName,
})
.promise();
}
conflicts(p) {
return false;
}
}
class AccessKeyHandler extends BaseHandler {
get resourceType() {
return 'accessKey';
}
collect() {
return this.iamClient.listAccessKeys({
UserName: this.serviceName,
MaxItems: 100,
})
.promise()
.then(res => res.AccessKeyMetadata)
}
create(allResources) {
return this.iamClient.createAccessKey({
UserName: this.serviceName,
})
.promise()
.then(res => res.AccessKey);
}
conflicts(a) {
return false;
}
}
function collectResource(v, done) {
v.collect()
.then(res => done(null, res))
.catch(err => {
if (err.code === 'NoSuchEntity') {
return done(null, null);
}
done(err);
});
}
function collectResourcesFromHandlers(handlers, cb) {
const tasks = handlers.reduce((acc, v) => ({
[v.resourceType]: done => collectResource(v, done),
...acc,
}), {});
async.parallel(tasks, cb);
}
function buildServiceUserHandlers(serviceName, client, log) {
return [
UserHandler,
PolicyHandler,
PolicyAttachmentHandler,
AccessKeyHandler,
].map(h => new h(serviceName, client, log));
}
function apply(client, serviceName, log, cb) {
const handlers = buildServiceUserHandlers(serviceName, client, log);
async.waterfall([
done => collectResourcesFromHandlers(handlers, done),
...handlers.map(h => h.applyWaterfall.bind(h)),
(values, done) => done(null, values.accessKey),
], cb);
}
function wrapAction(actionFunc, serviceName, options) {
werelogs.configure({
level: options.logLevel,
dump: options.logDumpLevel,
});
const log = new werelogs.Logger(process.argv[1]).newRequestLogger();
const client = createIAMClient(options);
actionFunc(client, serviceName, log, (err, data) => {
if (err) {
log.error('failed', {
data,
error: err,
});
if (err.EntityAlreadyExists) {
log.error(`run "${process.argv[1]} purge ${serviceName}" to fix.`);
}
process.exit(1);
}
log.info('success', { data });
process.exit();
});
}
program.version(version);
[
{
name: 'apply <service-name>',
actionFunc: apply,
},
].forEach(cmd => {
program
.command(cmd.name)
.option('--iam-endpoint <url>', 'IAM endpoint', 'http://localhost:8600')
.option('--log-level <level>', 'log level', 'info')
.option('--log-dump-level <level>', 'log level that triggers a dump of the debug buffer', 'error')
.action(wrapAction.bind(null, cmd.actionFunc));
});
const validCommands = program.commands.map(n => n._name);
// Is the command given invalid or are there too few arguments passed
if (!validCommands.includes(process.argv[2])) {
program.outputHelp();
process.stdout.write('\n');
process.exit(1);
} else {
program.parse(process.argv);
}

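Each handler above follows the same idempotent apply step: collect the existing resource, create it only when missing, and fail when an existing resource conflicts with the expected `/scality-internal/` path. A self-contained restatement of that decision logic:

```javascript
// Condensed restatement of BaseHandler.applyWaterfall from the script above.
function needsCreation(v) {
    return Array.isArray(v) ? v.length === 0 : !v;
}

function applyResource(handler, values, done) {
    const existing = values[handler.resourceType];
    if (needsCreation(existing)) {
        // Resource is absent: create it and merge the result into `values`.
        return handler.create(values)
            .then(res => done(null, { ...values, [handler.resourceType]: res }))
            .catch(done);
    }
    if (handler.conflicts(existing)) {
        // Resource exists but does not match expectations (e.g. wrong Path).
        return done(new Error(
            `${handler.resourceType} already exists and conflicts with the expected value`));
    }
    return done(null, values); // already in the desired state
}
```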

@ -1,12 +1,13 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'IngestShard',
});
const task = new tasks.IngestShard();
const task = new tasks.IngestShard({ warp10: warp10Clients });
task.setup()
.then(() => logger.info('Starting shard ingestion'))

bin/manualAdjust.js (new file)

@ -0,0 +1,15 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'ManualAdjust',
});
const task = new tasks.ManualAdjust({ warp10: warp10Clients });
task.setup()
.then(() => logger.info('Starting manual adjustment'))
.then(() => task.start())
.then(() => logger.info('Manual adjustment started'));


@ -1,11 +1,12 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'Migrate',
});
const task = new tasks.MigrateTask();
const task = new tasks.MigrateTask({ warp10: [warp10Clients[0]] });
task.setup()
.then(() => logger.info('Starting utapi v1 => v2 migration'))


@ -1,12 +1,13 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'Reindex',
});
const task = new tasks.ReindexTask();
const task = new tasks.ReindexTask({ warp10: [warp10Clients[0]] });
task.setup()
.then(() => logger.info('Starting Reindex daemon'))


@ -1,12 +1,13 @@
const { tasks } = require('..');
const { LoggerContext } = require('../libV2/utils');
const { clients: warp10Clients } = require('../libV2/warp10');
const logger = new LoggerContext({
task: 'Repair',
});
const task = new tasks.RepairTask();
const task = new tasks.RepairTask({ warp10: [warp10Clients[0]] });
task.setup()
.then(() => logger.info('Starting Repair daemon'))

docker-compose.yaml (new file)

@ -0,0 +1,75 @@
version: '3.8'
x-models:
warp10: &warp10
build:
context: .
dockerfile: ./images/warp10/Dockerfile
volumes: [ $PWD/warpscript:/usr/local/share/warpscript ]
warp10_env: &warp10_env
ENABLE_WARPSTUDIO: 'true'
ENABLE_SENSISION: 'true'
warpscript.repository.refresh: 1000
warpscript.maxops: 1000000000
warpscript.maxops.hard: 1000000000
warpscript.maxfetch: 1000000000
warpscript.maxfetch.hard: 1000000000
warpscript.extension.debug: io.warp10.script.ext.debug.DebugWarpScriptExtension
warpscript.maxrecursion: 1000
warpscript.repository.directory: /usr/local/share/warpscript
warpscript.extension.logEvent: io.warp10.script.ext.logging.LoggingWarpScriptExtension
redis: &redis
build:
context: .
dockerfile: ./images/redis/Dockerfile
services:
redis-0:
image: redis:7.2.4
command: redis-server --port 6379 --slave-announce-ip "${EXTERNAL_HOST}"
ports:
- 6379:6379
environment:
- HOST_IP="${EXTERNAL_HOST}"
redis-1:
image: redis:7.2.4
command: redis-server --port 6380 --slaveof "${EXTERNAL_HOST}" 6379 --slave-announce-ip "${EXTERNAL_HOST}"
ports:
- 6380:6380
environment:
- HOST_IP="${EXTERNAL_HOST}"
redis-sentinel-0:
image: redis:7.2.4
command: |-
bash -c 'cat > /tmp/sentinel.conf <<EOF
port 16379
logfile ""
dir /tmp
sentinel announce-ip ${EXTERNAL_HOST}
sentinel announce-port 16379
sentinel monitor scality-s3 "${EXTERNAL_HOST}" 6379 1
EOF
redis-sentinel /tmp/sentinel.conf'
environment:
- HOST_IP="${EXTERNAL_HOST}"
ports:
- 16379:16379
warp10:
<< : *warp10
environment:
<< : *warp10_env
ports:
- 4802:4802
- 8081:8081
- 9718:9718
volumes:
- /tmp/warp10:/data
- '${PWD}/warpscript:/usr/local/share/warpscript'

docker-entrypoint.sh (new executable file)

@ -0,0 +1,47 @@
#!/bin/bash
# set -e stops the execution of a script if a command or pipeline has an error
set -e
# modifying config.json
JQ_FILTERS_CONFIG="."
if [[ "$LOG_LEVEL" ]]; then
if [[ "$LOG_LEVEL" == "info" || "$LOG_LEVEL" == "debug" || "$LOG_LEVEL" == "trace" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .log.logLevel=\"$LOG_LEVEL\""
echo "Log level has been modified to $LOG_LEVEL"
else
echo "The log level you provided is incorrect (info/debug/trace)"
fi
fi
if [[ "$WORKERS" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .workers=\"$WORKERS\""
fi
if [[ "$REDIS_HOST" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .redis.host=\"$REDIS_HOST\""
fi
if [[ "$REDIS_PORT" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .redis.port=\"$REDIS_PORT\""
fi
if [[ "$VAULTD_HOST" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .vaultd.host=\"$VAULTD_HOST\""
fi
if [[ "$VAULTD_PORT" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .vaultd.port=\"$VAULTD_PORT\""
fi
if [[ "$HEALTHCHECKS_ALLOWFROM" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .healthChecks.allowFrom=[\"$HEALTHCHECKS_ALLOWFROM\"]"
fi
if [[ $JQ_FILTERS_CONFIG != "." ]]; then
jq "$JQ_FILTERS_CONFIG" config.json > config.json.tmp
mv config.json.tmp config.json
fi
exec "$@"

docs/RELEASE.md (new file)

@ -0,0 +1,42 @@
# Utapi Release Plan
## Docker Image Generation
Docker images are hosted on [ghcr.io](https://github.com/orgs/scality/packages).
Utapi has one namespace there:
* Namespace: ghcr.io/scality/utapi
On every CI build, the CI pushes images tagged with the
developer branch's short SHA-1 commit hash. This allows those
images to be used by developers, CI builds, the build chain,
and so on.
Tagged versions of utapi will be stored in the production namespace.
## How to Pull Docker Images
```sh
docker pull ghcr.io/scality/utapi:<commit hash>
docker pull ghcr.io/scality/utapi:<tag>
```
## Release Process
To release a production image:
* Name the tag for the repository and Docker image.
* Use the `yarn version` command with the same tag to update `package.json`.
* Create a PR and merge the `package.json` change.
* Tag the repository using the same tag.
* [Force a build] using:
* A given branch that ideally matches the tag.
* The `release` stage.
* An extra property named `tag` whose value is the actual tag.
[Force a build]:
https://eve.devsca.com/github/scality/utapi/#/builders/bootstrap/force/force


@ -1,141 +0,0 @@
---
version: 0.2
branches:
default:
stage: pre-merge
models:
- Git: &clone
name: Pull repo
repourl: '%(prop:git_reference)s'
shallow: True
retryFetch: True
haltOnFailure: True
- Workspace: &workspace
type: kube_pod
path: eve/workers/pod.yml
images:
aggressor:
context: '.'
dockerfile: eve/workers/unit_and_feature_tests/Dockerfile
warp10:
context: '.'
dockerfile: 'images/warp10/Dockerfile'
vault: eve/workers/mocks/vault
- Upload: &upload_artifacts
source: /artifacts
urls:
- "*"
stages:
pre-merge:
worker:
type: local
steps:
- MasterShellCommand:
name: Replace upstream image with `-ci` variant
command: "sed -i '/^FROM/ s/$/-ci/' %(prop:master_builddir)s/build/images/warp10/Dockerfile"
- TriggerStages:
name: trigger all the tests
stage_names:
- linting-coverage
- run-unit-tests
- run-client-tests
- run-server-tests
- run-cron-tests
- run-interval-tests
- run-v2-functional-tests
linting-coverage:
worker: *workspace
steps:
- Git: *clone
- ShellCommand:
name: run static analysis tools on markdown
command: yarn run lint_md
- ShellCommand:
name: run static analysis tools on code
command: yarn run lint
run-unit-tests:
worker: *workspace
steps:
- Git: *clone
- ShellCommand:
name: run unit tests
command: yarn test
run-client-tests:
worker: *workspace
steps:
- Git: *clone
- ShellCommand:
name: run client tests
command: bash ./eve/workers/unit_and_feature_tests/run_ft_tests.bash false ft_test:client
logfiles:
utapi:
filename: "/artifacts/setup_ft_test:client.log"
follow: true
run-server-tests:
worker: *workspace
steps:
- Git: *clone
- ShellCommand:
name: run server tests
command: bash ./eve/workers/unit_and_feature_tests/run_ft_tests.bash false ft_test:server
logfiles:
utapi:
filename: "/artifacts/setup_ft_test:server.log"
follow: true
run-cron-tests:
worker: *workspace
steps:
- Git: *clone
- ShellCommand:
name: run cron tests
command: bash ./eve/workers/unit_and_feature_tests/run_ft_tests.bash false ft_test:cron
logfiles:
utapi:
filename: "/artifacts/setup_ft_test:cron.log"
follow: true
run-interval-tests:
worker: *workspace
steps:
- Git: *clone
- ShellCommand:
name: run interval tests
command: bash ./eve/workers/unit_and_feature_tests/run_ft_tests.bash true ft_test:interval
logfiles:
utapi:
filename: "/artifacts/setup_ft_test:interval.log"
follow: true
run-v2-functional-tests:
worker:
<< : *workspace
vars:
vault: enabled
steps:
- Git: *clone
- ShellCommand:
name: Wait for Warp 10
command: |
bash -c "
set -ex
bash tests/utils/wait_for_local_port.bash 4802 60"
logfiles:
warp10:
filename: "/artifacts/warp10.log"
follow: true
- ShellCommand:
name: run v2 functional tests
command: SETUP_CMD="run start_v2:server" bash ./eve/workers/unit_and_feature_tests/run_ft_tests.bash true ft_test:v2
env:
UTAPI_CACHE_BACKEND: redis
UTAPI_LOG_LEVEL: trace
logfiles:
warp10:
filename: "/artifacts/warp10.log"
follow: true
utapi:
filename: "/artifacts/setup_ft_test:v2.log"
follow: true
- Upload: *upload_artifacts


@ -1,7 +0,0 @@
FROM node:alpine
ADD ./vault.js /usr/share/src/
WORKDIR /usr/share/src/
CMD node vault.js


@ -1,32 +0,0 @@
const http = require('http');
const port = process.env.VAULT_PORT || 8500;
class Vault {
constructor() {
this._server = null;
}
static _onRequest(req, res) {
res.writeHead(200);
return res.end();
}
start() {
this._server = http.createServer(Vault._onRequest).listen(port);
}
end() {
this._server.close();
}
}
const vault = new Vault();
['SIGINT', 'SIGQUIT', 'SIGTERM'].forEach(eventName => {
process.on(eventName, () => process.exit(0));
});
// eslint-disable-next-line no-console
console.log('Starting Vault Mock...');
vault.start();


@ -1,67 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: "utapi-test-pod"
spec:
activeDeadlineSeconds: 3600
restartPolicy: Never
terminationGracePeriodSeconds: 10
containers:
- name: aggressor
image: "{{ images.aggressor }}"
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 500m
memory: 1Gi
limits:
cpu: "2"
memory: 3Gi
volumeMounts:
- mountPath: /var/run/docker.sock
name: docker-socket
- name: artifacts
readOnly: false
mountPath: /artifacts
- name: warp10
image: "{{ images.warp10 }}"
command:
- sh
- -ce
- /init | tee -a /artifacts/warp10.log
env:
- name: standalone.port
value: '4802'
- name: warp.token.file
value: /opt/warp10/etc/ci.tokens
- name: warpscript.maxops
value: '10000000'
resources:
requests:
cpu: 500m
memory: 1Gi
limits:
cpu: 1750m
memory: 3Gi
volumeMounts:
- name: artifacts
readOnly: false
mountPath: /artifacts
{% if vars.vault is defined and vars.vault == 'enabled' %}
- name: vault
image: "{{ images.vault }}"
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
cpu: 50m
memory: 128Mi
{% endif %}
volumes:
- name: docker-socket
hostPath:
path: /var/run/docker.sock
type: Socket
- name: artifacts
emptyDir: {}


@ -1,48 +0,0 @@
FROM buildpack-deps:jessie-curl
#
# Install apt packages needed by utapi and buildbot_worker
#
ENV LANG C.UTF-8
ENV NODE_VERSION 10.22.0
ENV PATH=$PATH:/utapi/node_modules/.bin
ENV NODE_PATH=/utapi/node_modules
COPY eve/workers/unit_and_feature_tests/utapi_packages.list eve/workers/unit_and_feature_tests/buildbot_worker_packages.list /tmp/
WORKDIR /utapi
RUN wget https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz \
&& tar -xf node-v${NODE_VERSION}-linux-x64.tar.gz --directory /usr/local --strip-components 1 \
&& curl -sS http://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb http://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
&& apt-get update -qq \
&& cat /tmp/*packages.list | xargs apt-get install -y \
&& pip install pip==9.0.1 \
&& rm -rf /var/lib/apt/lists/* \
&& rm -f /tmp/*packages.list \
&& rm -f /etc/supervisor/conf.d/*.conf \
&& rm -f node-v${NODE_VERSION}-linux-x64.tar.gz
#
# Install yarn dependencies
#
COPY package.json yarn.lock /utapi/
RUN yarn cache clean \
&& yarn install --frozen-lockfile \
&& yarn cache clean
#
# Run buildbot-worker on startup through supervisor
#
ARG BUILDBOT_VERSION
RUN pip install buildbot-worker==$BUILDBOT_VERSION
RUN pip3 install requests
RUN pip3 install redis
ADD eve/workers/unit_and_feature_tests/supervisor/buildbot_worker.conf /etc/supervisor/conf.d/
ADD eve/workers/unit_and_feature_tests/redis/sentinel.conf /etc/sentinel.conf
CMD ["supervisord", "-n"]


@ -1,11 +0,0 @@
ca-certificates
git
libffi-dev
libssl-dev
python2.7
python2.7-dev
python-pip
sudo
supervisor
lsof
netcat


@ -1,35 +0,0 @@
# Example sentinel.conf
# The port that this sentinel instance will run on
port 16379
# Specify the log file name. Also the empty string can be used to force
# Sentinel to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# dir <working-directory>
# Every long running process should have a well-defined working directory.
# For Redis Sentinel, chdir'ing to /tmp at startup is the simplest way
# to keep the process from interfering with administrative tasks such as
# unmounting filesystems.
dir /tmp
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
#
# Tells Sentinel to monitor this master, and to consider it in O_DOWN
# (Objectively Down) state only if at least <quorum> sentinels agree.
#
# Note that whatever is the ODOWN quorum, a Sentinel will require to
# be elected by the majority of the known Sentinels in order to
# start a failover, so no failover can be performed in minority.
#
# Replicas are auto-discovered, so you don't need to specify replicas in
# any way. Sentinel itself will rewrite this configuration file adding
# the replicas using additional configuration options.
# Also note that the configuration file is rewritten when a
# replica is promoted to master.
#
# Note: master name should not include special characters or spaces.
# The valid charset is A-z 0-9 and the three characters ".-_".
sentinel monitor scality-s3 127.0.0.1 6379 1


@ -1,14 +0,0 @@
[program:buildbot_worker]
command=/bin/sh -c 'buildbot-worker create-worker . "%(ENV_BUILDMASTER)s:%(ENV_BUILDMASTER_PORT)s" "%(ENV_WORKERNAME)s" "%(ENV_WORKERPASS)s" && buildbot-worker start --nodaemon'
autostart=true
autorestart=false
[program:redis_server]
command=/usr/bin/redis-server
autostart=true
autorestart=false
[program:redis_sentinel]
command=/usr/bin/redis-server /etc/sentinel.conf --sentinel
autostart=true
autorestart=false


@ -1,5 +0,0 @@
build-essential
redis-server
python3
python3-pip
yarn


@ -0,0 +1,90 @@
import calendar, datetime, hashlib, hmac, json
import requests # pip install requests
access_key = '9EQTVVVCLSSG6QBMNKO5'
secret_key = 'T5mK/skkkwJ/mTjXZnHyZ5UzgGIN=k9nl4dyTmDH'
method = 'POST'
service = 's3'
host = 'localhost:8100'
region = 'us-east-1'
canonical_uri = '/buckets'
canonical_querystring = 'Action=ListMetrics&Version=20160815'
content_type = 'application/x-amz-json-1.0'
algorithm = 'AWS4-HMAC-SHA256'
t = datetime.datetime.utcnow()
amz_date = t.strftime('%Y%m%dT%H%M%SZ')
date_stamp = t.strftime('%Y%m%d')
# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
def getSignatureKey(key, date_stamp, regionName, serviceName):
kDate = sign(('AWS4' + key).encode('utf-8'), date_stamp)
kRegion = sign(kDate, regionName)
kService = sign(kRegion, serviceName)
kSigning = sign(kService, 'aws4_request')
return kSigning
def get_start_time(t):
start = t.replace(minute=t.minute - t.minute % 15, second=0, microsecond=0)
return calendar.timegm(start.utctimetuple()) * 1000;
def get_end_time(t):
end = t.replace(minute=t.minute - t.minute % 15, second=0, microsecond=0)
return calendar.timegm(end.utctimetuple()) * 1000 - 1;
start_time = get_start_time(datetime.datetime(2016, 1, 1, 0, 0, 0, 0))
end_time = get_end_time(datetime.datetime(2016, 2, 1, 0, 0, 0, 0))
# Request parameters for listing Utapi bucket metrics--passed in a JSON block.
bucketListing = {
'buckets': [ 'utapi-test' ],
'timeRange': [ start_time, end_time ],
}
request_parameters = json.dumps(bucketListing)
payload_hash = hashlib.sha256(request_parameters.encode('utf-8')).hexdigest()
canonical_headers = \
'content-type:{0}\nhost:{1}\nx-amz-content-sha256:{2}\nx-amz-date:{3}\n' \
.format(content_type, host, payload_hash, amz_date)
signed_headers = 'content-type;host;x-amz-content-sha256;x-amz-date'
canonical_request = '{0}\n{1}\n{2}\n{3}\n{4}\n{5}' \
.format(method, canonical_uri, canonical_querystring, canonical_headers,
signed_headers, payload_hash)
credential_scope = '{0}/{1}/{2}/aws4_request' \
.format(date_stamp, region, service)
string_to_sign = '{0}\n{1}\n{2}\n{3}' \
.format(algorithm, amz_date, credential_scope,
hashlib.sha256(canonical_request.encode('utf-8')).hexdigest())
signing_key = getSignatureKey(secret_key, date_stamp, region, service)
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'),
hashlib.sha256).hexdigest()
authorization_header = \
'{0} Credential={1}/{2}, SignedHeaders={3}, Signature={4}' \
.format(algorithm, access_key, credential_scope, signed_headers, signature)
# The 'host' header is added automatically by the Python 'requests' library.
headers = {
'Content-Type': content_type,
'X-Amz-Content-Sha256': payload_hash,
'X-Amz-Date': amz_date,
'Authorization': authorization_header
}
endpoint = 'http://' + host + canonical_uri + '?' + canonical_querystring;
r = requests.post(endpoint, data=request_parameters, headers=headers)
print (r.text)


@ -0,0 +1,20 @@
FROM ghcr.io/scality/federation/nodesvc-base:7.10.5.0
ENV UTAPI_CONFIG_FILE=${CONF_DIR}/config.json
WORKDIR ${HOME_DIR}/utapi
COPY ./package.json ./yarn.lock ${HOME_DIR}/utapi/
# Remove when gitcache is sorted out
RUN rm /root/.gitconfig
RUN yarn install --production --frozen-lockfile --network-concurrency 1
COPY . ${HOME_DIR}/utapi
RUN chown -R ${USER} ${HOME_DIR}/utapi
USER ${USER}
CMD bash -c "source ${CONF_DIR}/env && export && supervisord -c ${CONF_DIR}/${SUPERVISORD_CONF}"

images/redis/Dockerfile (new file)

@ -0,0 +1,17 @@
FROM redis:alpine
ENV S6_VERSION 2.0.0.1
ENV EXPORTER_VERSION 1.24.0
ENV S6_BEHAVIOUR_IF_STAGE2_FAILS 2
RUN wget https://github.com/just-containers/s6-overlay/releases/download/v${S6_VERSION}/s6-overlay-amd64.tar.gz -O /tmp/s6-overlay-amd64.tar.gz \
&& tar xzf /tmp/s6-overlay-amd64.tar.gz -C / \
&& rm -rf /tmp/s6-overlay-amd64.tar.gz
RUN wget https://github.com/oliver006/redis_exporter/releases/download/v${EXPORTER_VERSION}/redis_exporter-v${EXPORTER_VERSION}.linux-amd64.tar.gz -O redis_exporter.tar.gz \
&& tar xzf redis_exporter.tar.gz -C / \
&& cd .. \
&& mv /redis_exporter-v${EXPORTER_VERSION}.linux-amd64/redis_exporter /usr/local/bin/redis_exporter
ADD ./images/redis/s6 /etc
CMD /init


@ -0,0 +1,4 @@
#!/usr/bin/with-contenv sh
echo "starting redis exporter"
exec redis_exporter


@ -0,0 +1,4 @@
#!/usr/bin/with-contenv sh
echo "starting redis"
exec redis-server


@ -0,0 +1,2 @@
standalone.host = 0.0.0.0
standalone.port = 4802


@ -1,29 +1,56 @@
FROM warp10io/warp10:2.6.0
FROM golang:1.14-alpine as builder
ENV WARP10_EXPORTER_VERSION 2.7.5
RUN apk add zip unzip build-base \
&& wget -q -O exporter.zip https://github.com/centreon/warp10-sensision-exporter/archive/refs/heads/master.zip \
&& unzip exporter.zip \
&& cd warp10-sensision-exporter-master \
&& go mod download \
&& cd tools \
&& go run generate_sensision_metrics.go ${WARP10_EXPORTER_VERSION} \
&& cp sensision.go ../collector/ \
&& cd .. \
&& go build -a -o /usr/local/go/warp10_sensision_exporter
FROM ghcr.io/scality/utapi/warp10:2.8.1-95-g73e7de80
# Override baked in version
# Remove when updating to a numbered release
ENV WARP10_VERSION 2.8.1-95-g73e7de80
ENV S6_VERSION 2.0.0.1
ENV S6_BEHAVIOUR_IF_STAGE2_FAILS 2
ENV WARP10_CONF_TEMPLATES ${WARP10_HOME}/conf.templates/standalone
ENV SENSISION_DATA_DIR /data/sensision
ENV SENSISION_PORT 8082
# Modify Warp 10 default config
ENV standalone.host 0.0.0.0
ENV standalone.port 4802
ENV standalone.home /opt/warp10
ENV warpscript.repository.directory /usr/local/share/warpscript
ENV warp.token.file /static.tokens
ENV warpscript.extension.protobuf io.warp10.ext.protobuf.ProtobufWarpScriptExtension
ENV warpscript.extension.macrovalueencoder 'io.warp10.continuum.ingress.MacroValueEncoder$Extension'
ENV warpscript.extension.concurrent 'io.warp10.script.ext.concurrent.ConcurrentWarpScriptExtension'
# ENV warpscript.extension.debug io.warp10.script.ext.debug.DebugWarpScriptExtension
RUN wget https://github.com/just-containers/s6-overlay/releases/download/v${S6_VERSION}/s6-overlay-amd64.tar.gz -O /tmp/s6-overlay-amd64.tar.gz \
&& tar xzf /tmp/s6-overlay-amd64.tar.gz -C / \
&& rm -rf /tmp/s6-overlay-amd64.tar.gz
# Install jmx exporter
ADD https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/jmx_prometheus_javaagent-0.16.1.jar /opt/jmx_prom_agent.jar
ADD ./images/warp10/jmx_prom.yaml /opt/jmx_prom.yaml
# Install protobuf extension
ADD https://dl.bintray.com/senx/maven/io/warp10/warp10-ext-protobuf/1.1.0-uberjar/warp10-ext-protobuf-1.1.0-uberjar.jar /opt/warp10/lib/
ADD ./images/warp10/warp10-ext-protobuf-1.2.2-uberjar.jar /opt/warp10/lib/
# Install Sensision exporter
COPY --from=builder /usr/local/go/warp10_sensision_exporter /usr/local/bin/warp10_sensision_exporter
ADD ./images/warp10/s6 /etc
ADD ./warpscript /usr/local/share/warpscript
ADD ./images/warp10/static.tokens /
ADD ./images/warp10/90-default-host-port.conf $WARP10_CONF_TEMPLATES/90-default-host-port.conf
CMD /init


@ -0,0 +1,2 @@
---
startDelaySeconds: 0


@ -2,10 +2,13 @@
set -eu
WARP10_JAR=${WARP10_HOME}/bin/warp10-${WARP10_VERSION}.jar
WARP10_CONFIG_DIR="$WARP10_DATA_DIR/conf"
WARP10_SECRETS="$WARP10_CONFIG_DIR/00-secrets.conf"
if [ ! -f "$WARP10_SECRETS" ]; then
cp "$WARP10_CONF_TEMPLATES/00-secrets.conf.template" "$WARP10_SECRETS"
python "${WARP10_HOME}/etc/generate_crypto_key.py" "$WARP10_SECRETS"
/usr/bin/java -cp ${WARP10_JAR} -Dfile.encoding=UTF-8 io.warp10.GenerateCryptoKey ${WARP10_SECRETS}
echo "warp10.manager.secret = scality" >> $WARP10_SECRETS
fi


@ -1,28 +0,0 @@
#!/usr/bin/with-contenv sh
set -eu
JAVA="/usr/bin/java"
WARP10_JAR=${WARP10_HOME}/bin/warp10-${WARP10_VERSION}.jar
WARP10_CP="${WARP10_HOME}/etc:${WARP10_JAR}:${WARP10_HOME}/lib/*"
WARP10_CONFIG_DIR="$WARP10_DATA_DIR/conf"
INITIAL_TOKENS="$WARP10_CONFIG_DIR/initial.token"
if [ ! -f "$INITIAL_TOKENS" ]; then
CONFIG_FILES="$(find ${WARP10_CONFIG_DIR} -not -path "*/\.*" -name "*.conf" | sort | tr '\n' ' ' 2> /dev/null)"
# Look for a set token secret and use it for generation
secret=`${JAVA} -cp ${WARP10_CP} io.warp10.WarpConfig ${CONFIG_FILES} . 'token.secret' | sed -n 's/^@CONF@ //p' | sed -n 's/^token.secret[^=]*=//p'`
if [[ "${secret}" != "null" ]]; then
sed -i.bak -e "s|^{{secret}}|'"${secret}"'|" ${WARP10_HOME}/templates/warp10-tokengen.mc2
else
sed -i.bak -e "s|^{{secret}}||" ${WARP10_HOME}/templates/warp10-tokengen.mc2
fi
rm ${WARP10_HOME}/templates/warp10-tokengen.mc2.bak
# Generate read/write tokens valid for a period of 100 years. We use 'io.warp10.bootstrap' as application name.
${JAVA} -cp ${WARP10_JAR} io.warp10.worf.TokenGen ${CONFIG_FILES} ${WARP10_HOME}/templates/warp10-tokengen.mc2 $INITIAL_TOKENS
sed -i.bak 's/^.\{1\}//;$ s/.$//' $INITIAL_TOKENS # Remove first and last character
rm "${INITIAL_TOKENS}.bak"
fi


@ -0,0 +1,12 @@
#!/usr/bin/with-contenv sh
EXPORTER_CMD="warp10_sensision_exporter --warp10.url=http://localhost:${SENSISION_PORT}/metrics"
if [ -f "/usr/local/bin/warp10_sensision_exporter" -a -n "$ENABLE_SENSISION" ]; then
echo "Starting Sensision exporter with $EXPORTER_CMD ..."
exec $EXPORTER_CMD
else
echo "Sensision is disabled. Not starting exporter."
# wait indefinitely
exec tail -f /dev/null
fi


@ -3,9 +3,8 @@
JAVA="/usr/bin/java"
JAVA_OPTS=""
VERSION=1.0.21
SENSISION_CONFIG=${SENSISION_DATA_DIR}/conf/sensision.conf
SENSISION_JAR=${SENSISION_HOME}/bin/sensision-${VERSION}.jar
SENSISION_JAR=${SENSISION_HOME}/bin/sensision-${SENSISION_VERSION}.jar
SENSISION_CP=${SENSISION_HOME}/etc:${SENSISION_JAR}
SENSISION_CLASS=io.warp10.sensision.Main
export MALLOC_ARENA_MAX=1
@ -14,13 +13,13 @@ if [ -z "$SENSISION_HEAP" ]; then
SENSISION_HEAP=64m
fi
SENSISION_CMD="${JAVA} ${JAVA_OPTS} -Xmx${SENSISION_HEAP} -Dsensision.server.port=0 ${SENSISION_OPTS} -Dsensision.config=${SENSISION_CONFIG} -cp ${SENSISION_CP} ${SENSISION_CLASS}"
SENSISION_CMD="${JAVA} ${JAVA_OPTS} -Xmx${SENSISION_HEAP} -Dsensision.server.port=${SENSISION_PORT} ${SENSISION_OPTS} -Dsensision.config=${SENSISION_CONFIG} -cp ${SENSISION_CP} ${SENSISION_CLASS}"
if [ -n "$ENABLE_SENSISION" ]; then
echo "Starting Sensision with $SENSISION_CMD ..."
exec $SENSISION_CMD | tee -a ${SENSISION_HOME}/logs/sensision.log
else
echo "Sensision is disabled"
echo "Sensision is disabled. Not starting."
# wait indefinitely
exec tail -f /dev/null
fi


@ -8,7 +8,7 @@ WARP10_JAR=${WARP10_HOME}/bin/warp10-${WARP10_VERSION}.jar
WARP10_CLASS=io.warp10.standalone.Warp
WARP10_CP="${WARP10_HOME}/etc:${WARP10_JAR}:${WARP10_HOME}/lib/*"
WARP10_CONFIG_DIR="$WARP10_DATA_DIR/conf"
CONFIG_FILES="$(find ${WARP10_CONFIG_DIR} -not -path "*/\.*" -name "*.conf" | sort | tr '\n' ' ' 2> /dev/null)"
CONFIG_FILES="$(find ${WARP10_CONFIG_DIR} -not -path '*/.*' -name '*.conf' | sort | tr '\n' ' ' 2> /dev/null)"
LOG4J_CONF=${WARP10_HOME}/etc/log4j.properties
if [ -z "$WARP10_HEAP" ]; then
@ -16,10 +16,10 @@ if [ -z "$WARP10_HEAP" ]; then
fi
if [ -z "$WARP10_HEAP_MAX" ]; then
WARP10_HEAP_MAX=1g
WARP10_HEAP_MAX=4g
fi
JAVA_OPTS="-Djava.awt.headless=true -Xms${WARP10_HEAP} -Xmx${WARP10_HEAP_MAX} -XX:+UseG1GC ${JAVA_OPTS}"
JAVA_OPTS="-Dlog4j.configuration=file:${LOG4J_CONF} ${JAVA__EXTRA_OPTS} -Djava.awt.headless=true -Xms${WARP10_HEAP} -Xmx${WARP10_HEAP_MAX} -XX:+UseG1GC"
SENSISION_OPTS=
if [ -n "$ENABLE_SENSISION" ]; then
@ -28,10 +28,16 @@ if [ -n "$ENABLE_SENSISION" ]; then
if [ -n "$SENSISION_LABELS" ]; then
_SENSISION_LABELS="-Dsensision.default.labels=$SENSISION_LABELS"
fi
SENSISION_OPTS="-Dsensision.server.port=0 ${_SENSISION_LABELS} -Dsensision.events.dir=/var/run/sensision/metrics -Dfile.encoding=UTF-8"
SENSISION_OPTS="${_SENSISION_LABELS} -Dsensision.events.dir=/var/run/sensision/metrics -Dfile.encoding=UTF-8 ${SENSISION_EXTRA_OPTS}"
fi
WARP10_CMD="${JAVA} -Dlog4j.configuration=file:${LOG4J_CONF} ${JAVA_OPTS} ${SENSISION_OPTS} -cp ${WARP10_CP} ${WARP10_CLASS} ${CONFIG_FILES}"
JMX_EXPORTER_OPTS=
if [ -n "$ENABLE_JMX_EXPORTER" ]; then
JMX_EXPORTER_OPTS="-javaagent:/opt/jmx_prom_agent.jar=4803:/opt/jmx_prom.yaml ${JMX_EXPORTER_EXTRA_OPTS}"
echo "Starting jmx exporter with Warp 10."
fi
WARP10_CMD="${JAVA} ${JMX_EXPORTER_OPTS} ${JAVA_OPTS} ${SENSISION_OPTS} -cp ${WARP10_CP} ${WARP10_CLASS} ${CONFIG_FILES}"
echo "Starting Warp 10 with $WARP10_CMD ..."
exec $WARP10_CMD | tee -a ${WARP10_HOME}/logs/warp10.log


@ -1,5 +1,4 @@
/* eslint-disable global-require */
// eslint-disable-line strict
let toExport;


@ -1,35 +1,13 @@
/* eslint-disable no-bitwise */
const assert = require('assert');
const fs = require('fs');
const path = require('path');
/**
* Reads from a config file and returns the content as a config object
*/
class Config {
constructor() {
/*
* By default, the config file is "config.json" at the root.
* It can be overridden using the UTAPI_CONFIG_FILE environment var.
*/
this._basePath = path.resolve(__dirname, '..');
this.path = `${this._basePath}/config.json`;
if (process.env.UTAPI_CONFIG_FILE !== undefined) {
this.path = process.env.UTAPI_CONFIG_FILE;
}
// Read config automatically
this._getConfig();
}
_getConfig() {
let config;
try {
const data = fs.readFileSync(this.path, { encoding: 'utf-8' });
config = JSON.parse(data);
} catch (err) {
throw new Error(`could not parse config file: ${err.message}`);
}
constructor(config) {
this.component = config.component;
this.port = 9500;
if (config.port !== undefined) {
@ -115,18 +93,26 @@ class Config {
}
}
this.vaultd = {};
if (config.vaultd) {
if (config.vaultd.port !== undefined) {
assert(Number.isInteger(config.vaultd.port)
&& config.vaultd.port > 0,
'bad config: vaultd port must be a positive integer');
this.vaultd.port = config.vaultd.port;
}
if (config.vaultd.host !== undefined) {
assert.strictEqual(typeof config.vaultd.host, 'string',
'bad config: vaultd host must be a string');
this.vaultd.host = config.vaultd.host;
if (config.vaultclient) {
// Instance passed from outside
this.vaultclient = config.vaultclient;
this.vaultd = null;
} else {
// Connection data
this.vaultclient = null;
this.vaultd = {};
if (config.vaultd) {
if (config.vaultd.port !== undefined) {
assert(Number.isInteger(config.vaultd.port)
&& config.vaultd.port > 0,
'bad config: vaultd port must be a positive integer');
this.vaultd.port = config.vaultd.port;
}
if (config.vaultd.host !== undefined) {
assert.strictEqual(typeof config.vaultd.host, 'string',
'bad config: vaultd host must be a string');
this.vaultd.host = config.vaultd.host;
}
}
}
@ -141,12 +127,11 @@ class Config {
const { key, cert, ca } = config.certFilePaths
? config.certFilePaths : {};
if (key && cert) {
const keypath = (key[0] === '/') ? key : `${this._basePath}/${key}`;
const certpath = (cert[0] === '/')
? cert : `${this._basePath}/${cert}`;
const keypath = key;
const certpath = cert;
let capath;
if (ca) {
capath = (ca[0] === '/') ? ca : `${this._basePath}/${ca}`;
capath = ca;
assert.doesNotThrow(() => fs.accessSync(capath, fs.F_OK | fs.R_OK),
`File not found or unreachable: ${capath}`);
}
@ -172,8 +157,13 @@ class Config {
+ 'expireMetrics must be a boolean');
this.expireMetrics = config.expireMetrics;
}
return config;
if (config.onlyCountLatestWhenObjectLocked !== undefined) {
assert(typeof config.onlyCountLatestWhenObjectLocked === 'boolean',
'bad config: onlyCountLatestWhenObjectLocked must be a boolean');
this.onlyCountLatestWhenObjectLocked = config.onlyCountLatestWhenObjectLocked;
}
}
}
module.exports = new Config();
module.exports = Config;

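With this change the module exports the `Config` class instead of a ready-made singleton, so the caller loads and parses the configuration itself and passes the resulting object in. A minimal sketch, where both require paths are assumptions rather than values from this diff:

```javascript
// Minimal sketch: the caller now owns loading the raw configuration object.
const Config = require('./lib/Config'); // path assumed

const rawConfig = require('./config.json'); // parsed by the module system
const config = new Config(rawConfig);
```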

@ -81,6 +81,17 @@ class Datastore {
return this._client.call((backend, done) => backend.incr(key, done), cb);
}
/**
* increment value of a key by the provided value
* @param {string} key - key holding the value
* @param {number} value - amount to increment the key by
* @param {callback} cb - callback
* @return {undefined}
*/
incrby(key, value, cb) {
return this._client.call((backend, done) => backend.incrby(key, value, done), cb);
}
/**
* decrement value of a key by 1
* @param {string} key - key holding the value

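The new `incrby` helper mirrors the existing `incr`, but increments by a caller-supplied delta (it backs the location metrics added to UtapiClient further down). A hypothetical call, where the key name is illustrative only:

```javascript
// Assumes `datastore` is an instance of the Datastore class above.
datastore.incrby('s3:location:us-east-1:locationStorage', 1024, (err, total) => {
    if (err) {
        // handle the Redis error
        return;
    }
    // `total` is the key's value after the increment
});
```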

@ -6,8 +6,6 @@ const async = require('async');
const { errors } = require('arsenal');
const { getMetricFromKey, getKeys, generateStateKey } = require('./schema');
const s3metricResponseJSON = require('../models/s3metricResponse');
const config = require('./Config');
const Vault = require('./Vault');
const MAX_RANGE_MS = (((1000 * 60) * 60) * 24) * 30; // One month.
@ -23,7 +21,6 @@ class ListMetrics {
constructor(metric, component) {
this.metric = metric;
this.service = component;
this.vault = new Vault(config);
}
/**
@ -83,9 +80,10 @@ class ListMetrics {
const resources = validator.get(this.metric);
const timeRange = validator.get('timeRange');
const datastore = utapiRequest.getDatastore();
const vault = utapiRequest.getVault();
// map account ids to canonical ids
if (this.metric === 'accounts') {
return this.vault.getCanonicalIds(resources, log, (err, list) => {
return vault.getCanonicalIds(resources, log, (err, list) => {
if (err) {
return cb(err);
}
@ -124,7 +122,28 @@ class ListMetrics {
const fifteenMinutes = 15 * 60 * 1000; // In milliseconds
const timeRange = [start - fifteenMinutes, end];
const datastore = utapiRequest.getDatastore();
async.mapLimit(resources, 5, (resource, next) => this.getMetrics(resource, timeRange, datastore, log,
const vault = utapiRequest.getVault();
// map account ids to canonical ids
if (this.metric === 'accounts') {
return vault.getCanonicalIds(resources, log, (err, list) => {
if (err) {
return cb(err);
}
return async.mapLimit(list.message.body, 5,
(item, next) => this.getMetrics(item.canonicalId, timeRange,
datastore, log, (err, res) => {
if (err) {
return next(err);
}
return next(null, Object.assign({}, res,
{ accountId: item.accountId }));
}),
cb);
});
}
return async.mapLimit(resources, 5, (resource, next) => this.getMetrics(resource, timeRange, datastore, log,
next), cb);
}
@ -293,11 +312,10 @@ class ListMetrics {
});
if (!areMetricsPositive) {
return cb(errors.InternalError.customizeDescription(
'Utapi is in a transient state for this time period as '
+ 'metrics are being collected. Please try again in a few '
+ 'minutes.',
));
log.info('negative metric value found', {
error: resource,
method: 'ListMetrics.getMetrics',
});
}
/**
* Batch result is of the format

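The net effect of this hunk is dependency injection: ListMetrics no longer builds its own Vault client from the global config, it asks the request context for one. A sketch of the injection point, with names taken from the diff:

```javascript
// `utapiRequest` now owns the Vault client; ListMetrics only consumes it.
const vault = utapiRequest.getVault();
vault.getCanonicalIds(accountIds, log, (err, list) => {
    if (err) {
        return cb(err);
    }
    // list.message.body maps each accountId to its canonicalId
    return cb(null, list.message.body);
});
```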

@ -63,7 +63,6 @@ const methods = {
getObjectTagging: { method: '_genericPushMetric', changesData: false },
putObject: { method: '_genericPushMetricPutObject', changesData: true },
copyObject: { method: '_genericPushMetricPutObject', changesData: true },
putData: { method: '_genericPushMetricPutObject', changesData: true },
putObjectAcl: { method: '_genericPushMetric', changesData: true },
putObjectLegalHold: { method: '_genericPushMetric', changesData: true },
putObjectRetention: { method: '_genericPushMetric', changesData: true },
@ -91,12 +90,16 @@ const methods = {
},
putBucketObjectLock: { method: '_genericPushMetric', changesData: true },
getBucketObjectLock: { method: '_genericPushMetric', changesData: true },
replicateObject: { method: '_genericPushMetricPutObject', changesData: true },
replicateTags: { method: '_genericPushMetric', changesData: true },
replicateDelete: { method: '_pushMetricDeleteMarkerObject', changesData: true },
};
const metricObj = {
buckets: 'bucket',
accounts: 'accountId',
users: 'userId',
location: 'location',
};
class UtapiClient {
@ -120,13 +123,17 @@ class UtapiClient {
const api = (config || {}).logApi || werelogs;
this.log = new api.Logger('UtapiClient');
// By default, we push all resource types
this.metrics = ['buckets', 'accounts', 'users', 'service'];
this.metrics = ['buckets', 'accounts', 'users', 'service', 'location'];
this.service = 's3';
this.disableOperationCounters = false;
this.enabledOperationCounters = [];
this.disableClient = true;
if (config) {
if (config && !config.disableClient) {
this.disableClient = false;
this.expireMetrics = config.expireMetrics;
this.expireMetricsTTL = config.expireMetricsTTL || 0;
if (config.metrics) {
const message = 'invalid property in UtapiClient configuration';
assert(Array.isArray(config.metrics), `${message}: metrics `
@ -154,9 +161,6 @@ class UtapiClient {
if (config.enabledOperationCounters) {
this.enabledOperationCounters = config.enabledOperationCounters;
}
this.disableClient = false;
this.expireMetrics = config.expireMetrics;
this.expireMetricsTTL = config.expireMetricsTTL || 0;
}
}
@ -540,10 +544,13 @@ class UtapiClient {
const paramsArr = this._getParamsArr(params);
paramsArr.forEach(p => {
cmds.push(['incr', generateCounter(p, 'numberOfObjectsCounter')]);
if (this._isCounterEnabled('deleteObject')) {
cmds.push(['incr', generateKey(p, 'deleteObject', timestamp)]);
const counterAction = action === 'putDeleteMarkerObject' ? 'deleteObject' : action;
if (this._isCounterEnabled(counterAction)) {
cmds.push(['incr', generateKey(p, counterAction, timestamp)]);
}
cmds.push(['zrangebyscore', generateStateKey(p, 'storageUtilized'), timestamp, timestamp]);
});
return this.ds.batch(cmds, (err, results) => {
if (err) {
log.error('error pushing metric', {
@ -577,13 +584,48 @@ class UtapiClient {
// empty.
actionCounter = Number.isNaN(actionCounter)
|| actionCounter < 0 ? 1 : actionCounter;
if (Number.isInteger(params.byteLength)) {
/* byteLength is passed in from cloudserver under the following conditions:
* - bucket versioning is suspended
* - object version id is null
* - the content length of the object exists
* In this case, the master key is deleted and replaced with a delete marker.
* The decrement accounts for the deletion of the master key when utapi reports
* on the number of objects.
*/
actionCounter -= 1;
}
const key = generateStateKey(p, 'numberOfObjects');
const byteArr = results[index + commandsGroupSize - 1][1];
const oldByteLength = byteArr ? parseInt(byteArr[0], 10) : 0;
const newByteLength = member.serialize(Math.max(0, oldByteLength - params.byteLength));
cmds2.push(
['zremrangebyscore', key, timestamp, timestamp],
['zadd', key, timestamp, member.serialize(actionCounter)],
);
if (Number.isInteger(params.byteLength)) {
cmds2.push(
['decr', generateCounter(p, 'numberOfObjectsCounter')],
['decrby', generateCounter(p, 'storageUtilizedCounter'), params.byteLength],
);
}
if (byteArr) {
cmds2.push(
['zremrangebyscore', generateStateKey(p, 'storageUtilized'), timestamp, timestamp],
['zadd', generateStateKey(p, 'storageUtilized'), timestamp, newByteLength],
);
}
return true;
});
if (noErr) {
return this.ds.batch(cmds2, cb);
}
@ -1025,10 +1067,10 @@ class UtapiClient {
storageUtilizedDelta],
[redisCmd, generateCounter(p, 'numberOfObjectsCounter')],
);
if (action !== 'putData' && this._isCounterEnabled(action)) {
if (this._isCounterEnabled(action)) {
cmds.push(['incr', generateKey(p, action, timestamp)]);
}
if (action === 'putObject' || action === 'putData') {
if (action === 'putObject' || action === 'replicateObject') {
cmds.push(
['incrby', generateKey(p, 'incomingBytes', timestamp),
newByteLength],
@ -1114,6 +1156,69 @@ class UtapiClient {
});
}
/**
*
* @param {string} location - name of data location
* @param {number} updateSize - size in bytes to update location metric by,
* could be negative, indicating deleted object
* @param {string} reqUid - Request Unique Identifier
* @param {function} callback - callback to call
* @return {undefined}
*/
pushLocationMetric(location, updateSize, reqUid, callback) {
const log = this.log.newRequestLoggerFromSerializedUids(reqUid);
const params = {
level: 'location',
service: 's3',
location,
};
this._checkMetricTypes(params);
const action = (updateSize < 0) ? 'decrby' : 'incrby';
const size = (updateSize < 0) ? -updateSize : updateSize;
return this.ds[action](generateKey(params, 'locationStorage'), size,
err => {
if (err) {
log.error('error pushing metric', {
method: 'UtapiClient.pushLocationMetric',
error: err,
});
return callback(errors.InternalError);
}
return callback();
});
}
/**
*
* @param {string} location - name of data backend to get metric for
* @param {string} reqUid - Request Unique Identifier
* @param {function} callback - callback to call
* @return {undefined}
*/
getLocationMetric(location, reqUid, callback) {
const log = this.log.newRequestLoggerFromSerializedUids(reqUid);
const params = {
level: 'location',
service: 's3',
location,
};
const redisKey = generateKey(params, 'locationStorage');
return this.ds.get(redisKey, (err, bytesStored) => {
if (err) {
log.error('error getting metric', {
method: 'UtapiClient.getLocationMetric',
error: err,
});
return callback(errors.InternalError);
}
// if bytesStored is null, the key does not exist yet
if (bytesStored === null) {
return callback(null, 0);
}
return callback(null, bytesStored);
});
}
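Taken together, a minimal usage sketch of the two location helpers above; the client construction, the 'us-east-2' location name, and the done callback are assumptions, and reqUid is any serialized request-UID string:

// Hypothetical usage of pushLocationMetric/getLocationMetric; names are placeholders.
const client = new UtapiClient(config);
// Record a 1 MiB write against the location, then read the running total back.
client.pushLocationMetric('us-east-2', 1024 * 1024, reqUid, err => {
    if (err) {
        return done(err);
    }
    return client.getLocationMetric('us-east-2', reqUid, (err2, bytesStored) => {
        if (err2) {
            return done(err2);
        }
        return done(null, bytesStored); // total bytes stored at the location
    });
});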
/**
* Get storage used by bucket/account/user/service
* @param {object} params - params for the metrics

View File

@ -12,16 +12,23 @@ const RedisClient = require('../libV2/redis');
const REINDEX_SCHEDULE = '0 0 * * Sun';
const REINDEX_LOCK_KEY = 's3:utapireindex:lock';
const REINDEX_LOCK_TTL = (60 * 60) * 24;
const REINDEX_PYTHON_INTERPRETER = process.env.REINDEX_PYTHON_INTERPRETER !== undefined
? process.env.REINDEX_PYTHON_INTERPRETER
: 'python3.7';
const EXIT_CODE_SENTINEL_CONNECTION = 100;
class UtapiReindex {
constructor(config) {
this._enabled = false;
this._schedule = REINDEX_SCHEDULE;
this._sentinel = {
host: '127.0.0.1',
port: 16379,
this._redis = {
name: 'scality-s3',
sentinelPassword: '',
sentinels: [{
host: '127.0.0.1',
port: 16379,
}],
};
this._bucketd = {
host: '127.0.0.1',
@ -39,14 +46,13 @@ class UtapiReindex {
if (config && config.password) {
this._password = config.password;
}
if (config && config.sentinel) {
if (config && config.redis) {
const {
host, port, name, sentinelPassword,
} = config.sentinel;
this._sentinel.host = host || this._sentinel.host;
this._sentinel.port = port || this._sentinel.port;
this._sentinel.name = name || this._sentinel.name;
this._sentinel.sentinelPassword = sentinelPassword || this._sentinel.sentinelPassword;
name, sentinelPassword, sentinels,
} = config.redis;
this._redis.name = name || this._redis.name;
this._redis.sentinelPassword = sentinelPassword || this._redis.sentinelPassword;
this._redis.sentinels = sentinels || this._redis.sentinels;
}
if (config && config.bucketd) {
const { host, port } = config.bucketd;
@ -58,17 +64,16 @@ class UtapiReindex {
this._log = new werelogs.Logger('UtapiReindex', { level, dump });
}
this._onlyCountLatestWhenObjectLocked = (config && config.onlyCountLatestWhenObjectLocked === true);
this._requestLogger = this._log.newRequestLogger();
}
_getRedisClient() {
const client = new RedisClient({
sentinels: [{
host: this._sentinel.host,
port: this._sentinel.port,
}],
name: this._sentinel.name,
sentinelPassword: this._sentinel.sentinelPassword,
sentinels: this._redis.sentinels,
name: this._redis.name,
sentinelPassword: this._redis.sentinelPassword,
password: this._password,
});
client.connect();
@ -83,17 +88,18 @@ class UtapiReindex {
return this.ds.del(REINDEX_LOCK_KEY);
}
_buildFlags() {
_buildFlags(sentinel) {
const flags = {
/* eslint-disable camelcase */
sentinel_ip: this._sentinel.host,
sentinel_port: this._sentinel.port,
sentinel_cluster_name: this._sentinel.name,
sentinel_ip: sentinel.host,
sentinel_port: sentinel.port,
sentinel_cluster_name: this._redis.name,
bucketd_addr: `http://${this._bucketd.host}:${this._bucketd.port}`,
};
if (this._sentinel.sentinelPassword) {
flags.redis_password = this._sentinel.sentinelPassword;
if (this._redis.sentinelPassword) {
flags.redis_password = this._redis.sentinelPassword;
}
/* eslint-enable camelcase */
const opts = [];
Object.keys(flags)
@ -102,17 +108,17 @@ class UtapiReindex {
opts.push(name);
opts.push(flags[flag]);
});
if (this._onlyCountLatestWhenObjectLocked) {
opts.push('--only-latest-when-locked');
}
return opts;
}
_runScript(path, done) {
const flags = this._buildFlags();
this._requestLogger.debug(`launching subprocess ${path} `
+ `with flags: ${flags}`);
const process = childProcess.spawn('python3.4', [
path,
...flags,
]);
_runScriptWithSentinels(path, remainingSentinels, done) {
const flags = this._buildFlags(remainingSentinels.shift());
this._requestLogger.debug(`launching subprocess ${path} with flags: ${flags}`);
const process = childProcess.spawn(REINDEX_PYTHON_INTERPRETER, [path, ...flags]);
process.stdout.on('data', data => {
this._requestLogger.info('received output from script', {
output: Buffer.from(data).toString(),
@ -137,6 +143,17 @@ class UtapiReindex {
statusCode: code,
script: path,
});
if (code === EXIT_CODE_SENTINEL_CONNECTION) {
if (remainingSentinels.length > 0) {
this._requestLogger.info('retrying with next sentinel host', {
script: path,
});
return this._runScriptWithSentinels(path, remainingSentinels, done);
}
this._requestLogger.error('no more sentinel hosts to try', {
script: path,
});
}
} else {
this._requestLogger.info('script exited successfully', {
statusCode: code,
@ -147,6 +164,11 @@ class UtapiReindex {
});
}
_runScript(path, done) {
const remainingSentinels = [...this._redis.sentinels];
this._runScriptWithSentinels(path, remainingSentinels, done);
}
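For reference, a sketch of the configuration shape the rewritten constructor now expects; the cluster name, password, and addresses below are placeholders:

// Hypothetical reindex config under the new `redis` key (replaces `sentinel`).
const reindex = new UtapiReindex({
    redis: {
        name: 'scality-s3',
        sentinelPassword: '',
        sentinels: [
            { host: '10.0.0.1', port: 16379 },
            { host: '10.0.0.2', port: 16379 },
            { host: '10.0.0.3', port: 16379 },
        ],
    },
    bucketd: { host: '127.0.0.1', port: 9000 },
});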
_attemptLock(job) {
this._requestLogger.info('attempting to acquire the lock to begin job');
this._lock()

View File

@ -14,6 +14,15 @@ class UtapiRequest {
this._datastore = null;
this._requestQuery = null;
this._requestPath = null;
this._vault = null;
}
getVault() {
return this._vault;
}
setVault(vault) {
this._vault = vault;
return this;
}
/**

View File

@ -1,16 +1,21 @@
import requests
import redis
import json
import argparse
import ast
import sys
import time
import urllib
from concurrent.futures import ThreadPoolExecutor
import json
import logging
import re
import redis
import requests
import sys
from threading import Thread
from concurrent.futures import ThreadPoolExecutor
import time
import urllib
import argparse
logging.basicConfig(level=logging.INFO)
_log = logging.getLogger('utapi-reindex:reporting')
SENTINEL_CONNECT_TIMEOUT_SECONDS = 10
EXIT_CODE_SENTINEL_CONNECTION_ERROR = 100
def get_options():
parser = argparse.ArgumentParser()
@ -29,8 +34,19 @@ class askRedis():
def __init__(self, ip="127.0.0.1", port="16379", sentinel_cluster_name="scality-s3", password=None):
self._password = password
r = redis.Redis(host=ip, port=port, db=0, password=password)
self._ip, self._port = r.sentinel_get_master_addr_by_name(sentinel_cluster_name)
r = redis.Redis(
host=ip,
port=port,
db=0,
password=password,
socket_connect_timeout=SENTINEL_CONNECT_TIMEOUT_SECONDS
)
try:
self._ip, self._port = r.sentinel_get_master_addr_by_name(sentinel_cluster_name)
except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError) as e:
_log.error(f'Failed to connect to redis sentinel at {ip}:{port}: {e}')
# use a specific error code to hint on retrying with another sentinel node
sys.exit(EXIT_CODE_SENTINEL_CONNECTION_ERROR)
def read(self, resource, name):
r = redis.Redis(host=self._ip, port=self._port, db=0, password=self._password)

View File

@ -1,5 +1,6 @@
import argparse
import concurrent.futures as futures
import functools
import itertools
import json
import logging
@ -8,9 +9,9 @@ import re
import sys
import time
import urllib
from pathlib import Path
from collections import defaultdict, namedtuple
from concurrent.futures import ThreadPoolExecutor
from pprint import pprint
import redis
import requests
@ -24,6 +25,9 @@ MPU_SHADOW_BUCKET_PREFIX = 'mpuShadowBucket'
ACCOUNT_UPDATE_CHUNKSIZE = 100
SENTINEL_CONNECT_TIMEOUT_SECONDS = 10
EXIT_CODE_SENTINEL_CONNECTION_ERROR = 100
def get_options():
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--sentinel-ip", default='127.0.0.1', help="Sentinel IP")
@ -31,9 +35,39 @@ def get_options():
parser.add_argument("-v", "--redis-password", default=None, help="Redis AUTH Password")
parser.add_argument("-n", "--sentinel-cluster-name", default='scality-s3', help="Redis cluster name")
parser.add_argument("-s", "--bucketd-addr", default='http://127.0.0.1:9000', help="URL of the bucketd server")
parser.add_argument("-w", "--worker", default=10, help="Number of workers")
parser.add_argument("-b", "--bucket", default=False, help="Bucket to be processed")
return parser.parse_args()
parser.add_argument("-w", "--worker", default=10, type=int, help="Number of workers")
parser.add_argument("-r", "--max-retries", default=2, type=int, help="Max retries before failing a bucketd request")
parser.add_argument("--only-latest-when-locked", action='store_true', help="Only index the latest version of a key when the bucket has a default object lock policy")
parser.add_argument("--debug", action='store_true', help="Enable debug logging")
parser.add_argument("--dry-run", action="store_true", help="Do not update redis")
group = parser.add_mutually_exclusive_group()
group.add_argument("-a", "--account", default=[], help="account canonical ID (all account buckets will be processed)", action="append", type=nonempty_string('account'))
group.add_argument("--account-file", default=None, help="file containing account canonical IDs, one ID per line", type=existing_file)
group.add_argument("-b", "--bucket", default=[], help="bucket name", action="append", type=nonempty_string('bucket'))
group.add_argument("--bucket-file", default=None, help="file containing bucket names, one bucket name per line", type=existing_file)
options = parser.parse_args()
if options.bucket_file:
with open(options.bucket_file) as f:
options.bucket = [line.strip() for line in f if line.strip()]
elif options.account_file:
with open(options.account_file) as f:
options.account = [line.strip() for line in f if line.strip()]
return options
def nonempty_string(flag):
def inner(value):
if not value.strip():
raise argparse.ArgumentTypeError("%s: value must not be empty"%flag)
return value
return inner
def existing_file(path):
path = Path(path).resolve()
if not path.exists():
raise argparse.ArgumentTypeError("File does not exist: %s"%path)
return path
def chunks(iterable, size):
it = iter(iterable)
@ -48,22 +82,38 @@ def _encoded(func):
return urllib.parse.quote(val.encode('utf-8'))
return inner
Bucket = namedtuple('Bucket', ['userid', 'name'])
Bucket = namedtuple('Bucket', ['userid', 'name', 'object_lock_enabled'])
MPU = namedtuple('MPU', ['bucket', 'key', 'upload_id'])
BucketContents = namedtuple('BucketContents', ['bucket', 'obj_count', 'total_size'])
class MaxRetriesReached(Exception):
def __init__(self, url):
super().__init__('Max retries reached for request to %s'%url)
class InvalidListing(Exception):
def __init__(self, bucket):
super().__init__('Invalid contents found while listing bucket %s'%bucket)
class BucketNotFound(Exception):
def __init__(self, bucket):
super().__init__('Bucket %s not found'%bucket)
class BucketDClient:
'''Performs Listing calls against bucketd'''
__url_format = '{addr}/default/bucket/{bucket}'
__url_attribute_format = '{addr}/default/attributes/{bucket}'
__url_bucket_format = '{addr}/default/bucket/{bucket}'
__headers = {"x-scal-request-uids": "utapi-reindex-list-buckets"}
def __init__(self, bucketd_addr=None):
def __init__(self, bucketd_addr=None, max_retries=2, only_latest_when_locked=False):
self._bucketd_addr = bucketd_addr
self._max_retries = max_retries
self._only_latest_when_locked = only_latest_when_locked
self._session = requests.Session()
def _do_req(self, url, check_500=True, **kwargs):
while True:
# Add 1 for the initial request
for x in range(self._max_retries + 1):
try:
resp = self._session.get(url, timeout=30, verify=False, headers=self.__headers, **kwargs)
if check_500 and resp.status_code == 500:
@ -76,6 +126,8 @@ class BucketDClient:
_log.error('Error during listing, sleeping 5 secs %s'%url)
time.sleep(5)
raise MaxRetriesReached(url)
def _list_bucket(self, bucket, **kwargs):
'''
Lists a bucket lazily until "empty"
@ -88,7 +140,7 @@ class BucketDClient:
parameters value. On the first request the function will be called with
`None` and should return its initial value. Return `None` for the param to be excluded.
'''
url = self.__url_format.format(addr=self._bucketd_addr, bucket=bucket)
url = self.__url_bucket_format.format(addr=self._bucketd_addr, bucket=bucket)
static_params = {k: v for k, v in kwargs.items() if not callable(v)}
dynamic_params = {k: v for k, v in kwargs.items() if callable(v)}
is_truncated = True # Set to True for first loop
@ -101,6 +153,9 @@ class BucketDClient:
_log.debug('listing bucket bucket: %s params: %s'%(
bucket, ', '.join('%s=%s'%p for p in params.items())))
resp = self._do_req(url, params=params)
if resp.status_code == 404:
_log.debug('Bucket not found bucket: %s'%bucket)
return
if resp.status_code == 200:
payload = resp.json()
except ValueError as e:
@ -108,6 +163,9 @@ class BucketDClient:
_log.error('Invalid listing response body! bucket:%s params:%s'%(
bucket, ', '.join('%s=%s'%p for p in params.items())))
continue
except MaxRetriesReached:
_log.error('Max retries reached listing bucket:%s'%bucket)
raise
except Exception as e:
_log.exception(e)
_log.error('Unhandled exception during listing! bucket:%s params:%s'%(
@ -119,7 +177,37 @@ class BucketDClient:
else:
is_truncated = len(payload) > 0
def list_buckets(self):
@functools.lru_cache(maxsize=16)
def _get_bucket_attributes(self, name):
url = self.__url_attribute_format.format(addr=self._bucketd_addr, bucket=name)
try:
resp = self._do_req(url)
if resp.status_code == 200:
return resp.json()
else:
_log.error('Error getting bucket attributes bucket:%s status_code:%s'%(name, resp.status_code))
raise BucketNotFound(name)
except ValueError as e:
_log.exception(e)
_log.error('Invalid attributes response body! bucket:%s'%name)
raise
except MaxRetriesReached:
_log.error('Max retries reached getting bucket attributes bucket:%s'%name)
raise
except Exception as e:
_log.exception(e)
_log.error('Unhandled exception getting bucket attributes bucket:%s'%name)
raise
def get_bucket_md(self, name):
md = self._get_bucket_attributes(name)
canonId = md.get('owner')
if canonId is None:
_log.error('No owner found for bucket %s'%name)
raise InvalidListing(name)
return Bucket(canonId, name, md.get('objectLockEnabled', False))
def list_buckets(self, account=None):
def get_next_marker(p):
if p is None:
@ -131,13 +219,24 @@ class BucketDClient:
'maxKeys': 1000,
'marker': get_next_marker
}
if account is not None:
params['prefix'] = '%s..|..' % account
for _, payload in self._list_bucket(USERS_BUCKET, **params):
buckets = []
for result in payload['Contents']:
for result in payload.get('Contents', []):
match = re.match("(\w+)..\|..(\w+.*)", result['key'])
buckets.append(Bucket(*match.groups()))
yield buckets
bucket = Bucket(*match.groups(), False)
# We need to get the attributes for each bucket to determine if it is locked
if self._only_latest_when_locked:
bucket_attrs = self._get_bucket_attributes(bucket.name)
object_lock_enabled = bucket_attrs.get('objectLockEnabled', False)
bucket = bucket._replace(object_lock_enabled=object_lock_enabled)
buckets.append(bucket)
if buckets:
yield buckets
def list_mpus(self, bucket):
_bucket = MPU_SHADOW_BUCKET_PREFIX + bucket.name
@ -174,15 +273,12 @@ class BucketDClient:
upload_id=key['value']['UploadId']))
return keys
def _sum_objects(self, listing):
def _sum_objects(self, bucket, listing, only_latest_when_locked = False):
count = 0
total_size = 0
last_master = None
last_size = None
for _, payload in listing:
contents = payload['Contents'] if isinstance(payload, dict) else payload
for obj in contents:
count += 1
last_key = None
try:
for obj in listing:
if isinstance(obj['value'], dict):
# bucketd v6 returns a dict:
data = obj.get('value', {})
@ -190,40 +286,52 @@ class BucketDClient:
else:
# bucketd v7 returns an encoded string
data = json.loads(obj['value'])
size = data["content-length"]
size = data.get('content-length', 0)
is_latest = obj['key'] != last_key
last_key = obj['key']
if only_latest_when_locked and bucket.object_lock_enabled and not is_latest:
_log.debug('Skipping versioned key: %s'%obj['key'])
continue
count += 1
total_size += size
# If versioned, subtract the size of the master to avoid double counting
if last_master is not None and obj['key'].startswith(last_master + '\x00'):
_log.info('Detected versioned key: %s - subtracting master size: %i'% (
obj['key'],
last_size,
))
total_size -= last_size
count -= 1
last_master = None
# Only save master versions
elif '\x00' not in obj['key']:
last_master = obj['key']
last_size = size
except InvalidListing:
_log.error('Invalid contents in listing. bucket:%s'%bucket.name)
raise InvalidListing(bucket.name)
return count, total_size
def _extract_listing(self, key, listing):
for status_code, payload in listing:
contents = payload[key] if isinstance(payload, dict) else payload
if contents is None:
raise InvalidListing('')
for obj in contents:
yield obj
def count_bucket_contents(self, bucket):
def get_next_marker(p):
if p is None or len(p) == 0:
def get_key_marker(p):
if p is None:
return ''
return p[-1].get('key', '')
return p.get('NextKeyMarker', '')
def get_vid_marker(p):
if p is None:
return ''
return p.get('NextVersionIdMarker', '')
params = {
'listingType': 'Basic',
'listingType': 'DelimiterVersions',
'maxKeys': 1000,
'gt': get_next_marker,
'keyMarker': get_key_marker,
'versionIdMarker': get_vid_marker,
}
count, total_size = self._sum_objects(self._list_bucket(bucket.name, **params))
listing = self._list_bucket(bucket.name, **params)
count, total_size = self._sum_objects(bucket, self._extract_listing('Versions', listing), self._only_latest_when_locked)
return BucketContents(
bucket=bucket,
obj_count=count,
@ -231,7 +339,8 @@ class BucketDClient:
)
def count_mpu_parts(self, mpu):
_bucket = MPU_SHADOW_BUCKET_PREFIX + mpu.bucket.name
shadow_bucket_name = MPU_SHADOW_BUCKET_PREFIX + mpu.bucket.name
shadow_bucket = mpu.bucket._replace(name=shadow_bucket_name)
def get_prefix(p):
if p is None:
@ -251,30 +360,53 @@ class BucketDClient:
'listingType': 'Delimiter',
}
count, total_size = self._sum_objects(self._list_bucket(_bucket, **params))
listing = self._list_bucket(shadow_bucket_name, **params)
count, total_size = self._sum_objects(shadow_bucket, self._extract_listing('Contents', listing))
return BucketContents(
bucket=mpu.bucket._replace(name=_bucket),
bucket=shadow_bucket,
obj_count=0, # MPU parts are not counted towards numberOfObjects
total_size=total_size
)
def list_all_buckets(bucket_client):
return bucket_client.list_buckets()
def list_specific_accounts(bucket_client, accounts):
for account in accounts:
yield from bucket_client.list_buckets(account=account)
def list_specific_buckets(bucket_client, buckets):
batch = []
for bucket in buckets:
try:
batch.append(bucket_client.get_bucket_md(bucket))
except BucketNotFound:
_log.error('Failed to list bucket %s. Removing from results.'%bucket)
continue
yield batch
def index_bucket(client, bucket):
'''
Takes an instance of BucketDClient and a bucket name, and returns a
tuple of BucketContents for the passed bucket and its mpu shadow bucket.
'''
bucket_total = client.count_bucket_contents(bucket)
mpus = client.list_mpus(bucket)
if not mpus:
return bucket_total
try:
bucket_total = client.count_bucket_contents(bucket)
mpus = client.list_mpus(bucket)
if not mpus:
return bucket_total
total_size = bucket_total.total_size
mpu_totals = [client.count_mpu_parts(m) for m in mpus]
for mpu in mpu_totals:
total_size += mpu.total_size
total_size = bucket_total.total_size
mpu_totals = [client.count_mpu_parts(m) for m in mpus]
for mpu in mpu_totals:
total_size += mpu.total_size
return bucket_total._replace(total_size=total_size)
return bucket_total._replace(total_size=total_size)
except Exception as e:
_log.exception(e)
_log.error('Error during listing. Removing from results bucket:%s'%bucket.name)
raise InvalidListing(bucket.name)
def update_report(report, key, obj_count, total_size):
'''Convenience function to update the report dicts'''
@ -292,9 +424,16 @@ def get_redis_client(options):
host=options.sentinel_ip,
port=options.sentinel_port,
db=0,
password=options.redis_password
password=options.redis_password,
socket_connect_timeout=SENTINEL_CONNECT_TIMEOUT_SECONDS
)
ip, port = sentinel.sentinel_get_master_addr_by_name(options.sentinel_cluster_name)
try:
ip, port = sentinel.sentinel_get_master_addr_by_name(options.sentinel_cluster_name)
except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError) as e:
_log.error(f'Failed to connect to redis sentinel at {options.sentinel_ip}:{options.sentinel_port}: {e}')
# use a specific error code to hint on retrying with another sentinel node
sys.exit(EXIT_CODE_SENTINEL_CONNECTION_ERROR)
return redis.Redis(
host=ip,
port=port,
@ -328,54 +467,120 @@ def log_report(resource, name, obj_count, total_size):
if __name__ == '__main__':
options = get_options()
bucket_client = BucketDClient(options.bucketd_addr)
if options.debug:
_log.setLevel(logging.DEBUG)
bucket_client = BucketDClient(options.bucketd_addr, options.max_retries, options.only_latest_when_locked)
redis_client = get_redis_client(options)
account_reports = {}
observed_buckets = set()
failed_accounts = set()
if options.account:
batch_generator = list_specific_accounts(bucket_client, options.account)
elif options.bucket:
batch_generator = list_specific_buckets(bucket_client, options.bucket)
else:
batch_generator = list_all_buckets(bucket_client)
with ThreadPoolExecutor(max_workers=options.worker) as executor:
for batch in bucket_client.list_buckets():
for batch in batch_generator:
bucket_reports = {}
jobs = [executor.submit(index_bucket, bucket_client, b) for b in batch]
for job in futures.as_completed(jobs):
total = job.result() # Summed bucket and shadowbucket totals
jobs = { executor.submit(index_bucket, bucket_client, b): b for b in batch }
for job in futures.as_completed(jobs.keys()):
try:
total = job.result() # Summed bucket and shadowbucket totals
except InvalidListing:
_bucket = jobs[job]
_log.error('Failed to list bucket %s. Removing from results.'%_bucket.name)
# Add the bucket to observed_buckets anyway to avoid clearing existing metrics
observed_buckets.add(_bucket.name)
# If we cannot list one of an account's buckets we cannot update its total
failed_accounts.add(_bucket.userid)
continue
observed_buckets.add(total.bucket.name)
update_report(bucket_reports, total.bucket.name, total.obj_count, total.total_size)
update_report(account_reports, total.bucket.userid, total.obj_count, total.total_size)
# Bucket reports can be updated as we get them
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for bucket, report in bucket_reports.items():
update_redis(pipeline, 'buckets', bucket, report['obj_count'], report['total_size'])
log_report('buckets', bucket, report['obj_count'], report['total_size'])
if options.dry_run:
for bucket, report in bucket_reports.items():
_log.info(
"DryRun: resource buckets [%s] would be updated with obj_count %i and total_size %i" % (
bucket, report['obj_count'], report['total_size']
)
)
else:
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for bucket, report in bucket_reports.items():
update_redis(pipeline, 'buckets', bucket, report['obj_count'], report['total_size'])
log_report('buckets', bucket, report['obj_count'], report['total_size'])
pipeline.execute()
stale_buckets = set()
recorded_buckets = set(get_resources_from_redis(redis_client, 'buckets'))
if options.bucket:
stale_buckets = { b for b in options.bucket if b not in observed_buckets }
elif options.account:
_log.warning('Stale buckets will not be cleared when using the --account or --account-file flags')
else:
stale_buckets = recorded_buckets.difference(observed_buckets)
_log.info('Found %s stale buckets' % len(stale_buckets))
if options.dry_run:
_log.info("DryRun: not updating stale buckets")
else:
for chunk in chunks(stale_buckets, ACCOUNT_UPDATE_CHUNKSIZE):
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for bucket in chunk:
update_redis(pipeline, 'buckets', bucket, 0, 0)
log_report('buckets', bucket, 0, 0)
pipeline.execute()
# Update total account reports in chunks
for chunk in chunks(account_reports.items(), ACCOUNT_UPDATE_CHUNKSIZE):
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for userid, report in chunk:
update_redis(pipeline, 'accounts', userid, report['obj_count'], report['total_size'])
log_report('accounts', userid, report['obj_count'], report['total_size'])
pipeline.execute()
# Account metrics are not updated if a bucket is specified
if options.bucket:
_log.warning('Account metrics will not be updated when using the --bucket or --bucket-file flags')
else:
# Don't update any accounts with failed listings
without_failed = filter(lambda x: x[0] not in failed_accounts, account_reports.items())
if options.dry_run:
for userid, report in account_reports.items():
_log.info(
"DryRun: resource account [%s] would be updated with obj_count %i and total_size %i" % (
userid, report['obj_count'], report['total_size']
)
)
else:
# Update total account reports in chunks
for chunk in chunks(without_failed, ACCOUNT_UPDATE_CHUNKSIZE):
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for userid, report in chunk:
update_redis(pipeline, 'accounts', userid, report['obj_count'], report['total_size'])
log_report('accounts', userid, report['obj_count'], report['total_size'])
pipeline.execute()
observed_accounts = set(account_reports.keys())
recorded_accounts = set(get_resources_from_redis(redis_client, 'accounts'))
recorded_buckets = set(get_resources_from_redis(redis_client, 'buckets'))
if options.account:
for account in options.account:
if account in failed_accounts:
_log.error("No metrics updated for account %s, one or more buckets failed" % account)
# Stale accounts and buckets are ones that do not appear in the listing, but have recorded values
stale_accounts = recorded_accounts.difference(observed_accounts)
_log.info('Found %s stale accounts' % len(stale_accounts))
for chunk in chunks(stale_accounts, ACCOUNT_UPDATE_CHUNKSIZE):
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for account in chunk:
update_redis(pipeline, 'accounts', account, 0, 0)
log_report('accounts', account, 0, 0)
pipeline.execute()
# Include failed_accounts in observed_accounts to avoid clearing metrics
observed_accounts = failed_accounts.union(set(account_reports.keys()))
recorded_accounts = set(get_resources_from_redis(redis_client, 'accounts'))
stale_buckets = recorded_buckets.difference(observed_buckets)
_log.info('Found %s stale buckets' % len(stale_buckets))
for chunk in chunks(stale_buckets, ACCOUNT_UPDATE_CHUNKSIZE):
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for bucket in chunk:
update_redis(pipeline, 'buckets', bucket, 0, 0)
log_report('buckets', bucket, 0, 0)
pipeline.execute()
if options.account:
stale_accounts = { a for a in options.account if a not in observed_accounts }
else:
# Stale accounts and buckets are ones that do not appear in the listing, but have recorded values
stale_accounts = recorded_accounts.difference(observed_accounts)
_log.info('Found %s stale accounts' % len(stale_accounts))
if options.dry_run:
_log.info("DryRun: not updating stale accounts")
else:
for chunk in chunks(stale_accounts, ACCOUNT_UPDATE_CHUNKSIZE):
pipeline = redis_client.pipeline(transaction=False) # No transaction to reduce redis load
for account in chunk:
update_redis(pipeline, 'accounts', account, 0, 0)
log_report('accounts', account, 0, 0)
pipeline.execute()

View File

@ -52,6 +52,9 @@ const keys = {
getObjectRetention: prefix => `${prefix}GetObjectRetention`,
putObjectLegalHold: prefix => `${prefix}PutObjectLegalHold`,
getObjectLegalHold: prefix => `${prefix}GetObjectLegalHold`,
replicateObject: prefix => `${prefix}ReplicateObject`,
replicateTags: prefix => `${prefix}ReplicateTags`,
replicateDelete: prefix => `${prefix}ReplicateDelete`,
incomingBytes: prefix => `${prefix}incomingBytes`,
outgoingBytes: prefix => `${prefix}outgoingBytes`,
};
@ -65,10 +68,10 @@ const keys = {
*/
function getSchemaPrefix(params, timestamp) {
const {
bucket, accountId, userId, level, service,
bucket, accountId, userId, level, service, location,
} = params;
// `service` property must remain last because other objects also include it
const id = bucket || accountId || userId || service;
const id = bucket || accountId || userId || location || service;
const prefix = timestamp ? `${service}:${level}:${timestamp}:${id}:`
: `${service}:${level}:${id}:`;
return prefix;
@ -83,9 +86,13 @@ function getSchemaPrefix(params, timestamp) {
*/
function generateKey(params, metric, timestamp) {
const prefix = getSchemaPrefix(params, timestamp);
if (params.location) {
return `${prefix}locationStorage`;
}
return keys[metric](prefix);
}
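To make the short-circuit above concrete, a sketch of the key shapes this yields; the exact casing of the metric segment comes from the `keys` map and is assumed here:

// Illustrative only: expected outputs of generateKey with and without a location.
const bucketKey = generateKey(
    { level: 'buckets', service: 's3', bucket: 'my-bucket' },
    'putObject',
    1700000000000,
);
// => 's3:buckets:1700000000000:my-bucket:PutObject'
const locationKey = generateKey(
    { level: 'location', service: 's3', location: 'us-east-2' },
    'locationStorage',
);
// => 's3:location:us-east-2:locationStorage' (the metric argument is ignored)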
/**
* Returns a list of the counters for a metric type
* @param {object} params - object with metric type and id as a property

View File

@ -7,7 +7,6 @@ const { Clustering, errors, ipCheck } = require('arsenal');
const arsenalHttps = require('arsenal').https;
const { Logger } = require('werelogs');
const config = require('./Config');
const routes = require('../router/routes');
const Route = require('../router/Route');
const Router = require('../router/Router');
@ -28,7 +27,12 @@ class UtapiServer {
constructor(worker, port, datastore, logger, config) {
this.worker = worker;
this.port = port;
this.router = new Router(config);
this.vault = config.vaultclient;
if (!this.vault) {
const Vault = require('./Vault');
this.vault = new Vault(config);
}
this.router = new Router(config, this.vault);
this.logger = logger;
this.datastore = datastore;
this.server = null;
@ -71,6 +75,7 @@ class UtapiServer {
req.socket.setNoDelay();
const { query, path, pathname } = url.parse(req.url, true);
const utapiRequest = new UtapiRequest()
.setVault(this.vault)
.setRequest(req)
.setLog(this.logger.newRequestLogger())
.setResponse(res)
@ -214,8 +219,7 @@ class UtapiServer {
* @property {object} params.log - logger configuration
* @return {undefined}
*/
function spawn(params) {
Object.assign(config, params);
function spawn(config) {
const {
workers, redis, log, port,
} = config;

View File

@ -23,7 +23,7 @@ class CacheClient {
async pushMetric(metric) {
const shard = shardFromTimestamp(metric.timestamp);
if (!this._cacheBackend.addToShard(shard, metric)) {
if (!(await this._cacheBackend.addToShard(shard, metric))) {
return false;
}
await this._counterBackend.updateCounters(metric);
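The added await fixes a subtle bug: an async addToShard returns a Promise, and a Promise is always truthy, so the early-return branch could never fire. A minimal sketch of the difference:

async function demo() {
    const addToShard = async () => false; // stand-in for the cache backend
    if (!addToShard()) {
        // never reached: the unawaited call yields a (truthy) pending Promise
    }
    if (!(await addToShard())) {
        // reached correctly once the resolved value is tested
    }
}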

View File

@ -8,10 +8,16 @@ const needle = require('needle');
const levelup = require('levelup');
const memdown = require('memdown');
const encode = require('encoding-down');
const { UtapiMetric } = require('../models');
const { LoggerContext, asyncOrCallback } = require('../utils');
/* eslint-enable import/no-extraneous-dependencies */
const { UtapiMetric } = require('../models');
const {
LoggerContext,
logEventFilter,
asyncOrCallback,
buildFilterChain,
} = require('../utils');
const moduleLogger = new LoggerContext({
module: 'client',
});
@ -70,13 +76,24 @@ class UtapiClient {
constructor(config) {
this._host = (config && config.host) || 'localhost';
this._port = (config && config.port) || '8100';
this._tls = (config && config.tls) || {};
this._transport = (config && config.tls) ? 'https' : 'http';
this._logger = (config && config.logger) || moduleLogger;
this._maxCachedMetrics = (config && config.maxCachedMetrics) || 200000; // roughly 100MB
this._numCachedMetrics = 0;
this._retryCache = levelup(encode(memdown(), { valueEncoding: 'json' }));
this._disableRetryCache = config && config.disableRetryCache;
this._retryCache = this._disableRetryCache
? null
: levelup(encode(memdown(), { valueEncoding: 'json' }));
this._drainTimer = null;
this._drainCanSchedule = true;
this._drainDelay = (config && config.drainDelay) || 30000;
this._suppressedEventFields = (config && config.suppressedEventFields) || null;
const eventFilters = (config && config.filter) || {};
this._shouldPushMetric = buildFilterChain(eventFilters);
if (Object.keys(eventFilters).length !== 0) {
logEventFilter((...args) => moduleLogger.info(...args), 'utapi event filter enabled', eventFilters);
}
}
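A hypothetical construction exercising the new options wired up above; every value shown, including caBundle, is a placeholder:

const client = new UtapiClient({
    host: 'utapi.local',
    port: '8100',
    tls: { ca: caBundle },             // presence of `tls` switches the transport to https
    drainDelay: 30000,
    disableRetryCache: false,
    suppressedEventFields: ['object'], // stripped from each event before pushing
    filter: {
        deny: { bucket: ['internal-audit-bucket'] },
    },
});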
async join() {
@ -87,9 +104,9 @@ class UtapiClient {
async _pushToUtapi(metrics) {
const resp = await needle(
'post',
`http://${this._host}:${this._port}/v2/ingest`,
`${this._transport}://${this._host}:${this._port}/v2/ingest`,
metrics.map(metric => metric.getValue()),
{ json: true },
{ json: true, ...this._tls },
);
if (resp.statusCode !== 200) {
throw Error('failed to push metric, server returned non 200 status code',
@ -151,7 +168,8 @@ class UtapiClient {
try {
const resp = await needle(
'get',
`http://${this._host}:${this._port}/_/healthcheck`,
`${this._transport}://${this._host}:${this._port}/_/healthcheck`,
this._tls,
);
return resp.statusCode === 200;
} catch (error) {
@ -228,10 +246,15 @@ class UtapiClient {
}
async _pushMetric(data) {
const metric = data instanceof UtapiMetric
let metric = data instanceof UtapiMetric
? data
: new UtapiMetric(data);
// If this event has been filtered then exit early
if (!this._shouldPushMetric(metric)) {
return;
}
// Assign a uuid if one isn't passed
if (!metric.uuid) {
metric.uuid = uuid.v4();
@ -242,12 +265,26 @@ class UtapiClient {
metric.timestamp = new Date().getTime();
}
if (this._suppressedEventFields !== null) {
const filteredData = Object.entries(metric.getValue())
.filter(([key]) => !this._suppressedEventFields.includes(key))
.reduce((obj, [key, value]) => {
obj[key] = value;
return obj;
}, {});
metric = new UtapiMetric(filteredData);
}
try {
await this._pushToUtapi([metric]);
} catch (error) {
this._logger.error('unable to push metric, adding to retry cache', { error });
if (!await this._addToRetryCache(metric)) {
throw new Error('unable to store metric');
if (!this._disableRetryCache) {
this._logger.error('unable to push metric, adding to retry cache', { error });
if (!await this._addToRetryCache(metric)) {
throw new Error('unable to store metric');
}
} else {
this._logger.debug('unable to push metric. retry cache disabled, not retrying ingestion.', { error });
}
}
}
@ -275,7 +312,9 @@ class UtapiClient {
return asyncOrCallback(async () => {
const resp = await needle(
'get',
`http://${this._host}:${this._port}/v2/storage/${level}/${resource}`,
`${this._transport}://${this._host}:${this._port}/v2/storage/${level}/${resource}`,
this._tls,
);
if (resp.statusCode !== 200) {

View File

@ -15,15 +15,14 @@
},
"warp10": {
"host": "127.0.0.1",
"port": 4802
"port": 4802,
"nodeId": "single_node",
"requestTimeout": 60000,
"connectTimeout": 60000
},
"healthChecks": {
"allowFrom": ["127.0.0.1/8", "::1"]
},
"vaultd": {
"host": "127.0.0.1",
"port": 8500
},
"cacheBackend": "memory",
"development": false,
"nodeId": "single_node",
@ -34,9 +33,32 @@
"snapshotSchedule": "5 0 * * * *",
"repairSchedule": "0 */5 * * * *",
"reindexSchedule": "0 0 0 * * Sun",
"diskUsageSchedule": "0 */15 * * * *",
"bucketd": [ "localhost:9000" ],
"reindex": {
"enabled": true,
"schedule": "0 0 0 * * 6"
},
"diskUsage": {
"retentionDays": 45,
"expirationEnabled": false
},
"serviceUser": {
"arn": "arn:aws:iam::000000000000:user/scality-internal/service-utapi-user",
"enabled": false
},
"filter": {
"allow": {},
"deny": {}
},
"metrics" : {
"enabled": false,
"host": "localhost",
"ingestPort": 10902,
"checkpointPort": 10903,
"snapshotPort": 10904,
"diskUsagePort": 10905,
"reindexPort": 10906,
"repairPort": 10907
}
}

View File

@ -2,27 +2,47 @@ const fs = require('fs');
const path = require('path');
const Joi = require('@hapi/joi');
const assert = require('assert');
const defaults = require('./defaults.json');
const werelogs = require('werelogs');
const { truthy, envNamespace } = require('../constants');
const {
truthy, envNamespace, allowedFilterFields, allowedFilterStates,
} = require('../constants');
const configSchema = require('./schema');
// We need to require the specific file rather than the parent module to avoid a circular require
const { parseDiskSizeSpec } = require('../utils/disk');
function _splitTrim(char, text) {
return text.split(char).map(v => v.trim());
}
function _splitServer(text) {
assert.notStrictEqual(text.indexOf(':'), -1);
const [host, port] = text.split(':').map(v => v.trim());
const [host, port] = _splitTrim(':', text);
return {
host,
port: Number.parseInt(port, 10),
};
}
function _splitNode(text) {
assert.notStrictEqual(text.indexOf('='), -1);
const [nodeId, hostname] = _splitTrim('=', text);
return {
nodeId,
..._splitServer(hostname),
};
}
const _typeCasts = {
bool: val => truthy.has(val.toLowerCase()),
int: val => parseInt(val, 10),
list: val => val.split(',').map(v => v.trim()),
serverList: val => val.split(',').map(v => v.trim()).map(_splitServer),
list: val => _splitTrim(',', val),
serverList: val => _splitTrim(',', val).map(_splitServer),
nodeList: val => _splitTrim(',', val).map(_splitNode),
diskSize: parseDiskSizeSpec,
};
function _definedInEnv(key) {
return process.env[`${envNamespace}_${key}`] !== undefined;
}
@ -53,7 +73,6 @@ class Config {
constructor(overrides) {
this._basePath = path.join(__dirname, '../../');
this._configPath = _loadFromEnv('CONFIG_FILE', defaultConfigPath);
this._defaultsPath = path.join(__dirname, 'defaults.json');
this.host = undefined;
this.port = undefined;
@ -71,6 +90,11 @@ class Config {
parsedConfig = this._recursiveUpdate(parsedConfig, overrides);
}
Object.assign(this, parsedConfig);
werelogs.configure({
level: Config.logging.level,
dump: Config.logging.dumpLevel,
});
}
static _readFile(path, encoding = 'utf-8') {
@ -95,7 +119,7 @@ class Config {
}
_loadDefaults() {
return Config._readJSON(this._defaultsPath);
return defaults;
}
_loadUserConfig() {
@ -159,38 +183,37 @@ class Config {
return this._recursiveUpdateObject(defaultConf, userConf);
}
static _parseRedisConfig(config) {
const redisConf = {};
if (config.sentinels || _definedInEnv('REDIS_SENTINELS')) {
redisConf.name = _loadFromEnv('REDIS_NAME', config.name);
const sentinels = _loadFromEnv(
'REDIS_SENTINELS',
static _parseRedisConfig(prefix, config) {
const redisConf = {
retry: config.retry,
};
if (config.sentinels || _definedInEnv(`${prefix}_SENTINELS`)) {
redisConf.name = _loadFromEnv(`${prefix}_NAME`, config.name);
redisConf.sentinels = _loadFromEnv(
`${prefix}_SENTINELS`,
config.sentinels,
_typeCasts.list,
_typeCasts.serverList,
);
redisConf.sentinels = sentinels.map(v => {
if (typeof v === 'string') {
const [host, port] = v.split(':');
return { host, port: Number.parseInt(port, 10) };
}
return v;
});
redisConf.sentinelPassword = _loadFromEnv(
'REDIS_SENTINEL_PASSWORD',
`${prefix}_SENTINEL_PASSWORD`,
config.sentinelPassword,
);
redisConf.password = _loadFromEnv(
`${prefix}_PASSWORD`,
config.password,
);
} else {
redisConf.host = _loadFromEnv(
'REDIS_HOST',
`${prefix}_HOST`,
config.host,
);
redisConf.port = _loadFromEnv(
'REDIS_PORT',
`${prefix}_PORT`,
config.port,
_typeCasts.int,
);
redisConf.password = _loadFromEnv(
'REDIS_PASSWORD',
`${prefix}_PASSWORD`,
config.password,
);
}
@ -216,6 +239,28 @@ class Config {
return certs;
}
static _parseResourceFilters(config) {
const resourceFilters = {};
allowedFilterFields.forEach(
field => allowedFilterStates.forEach(
state => {
const configResources = (config[state] && config[state][field]) || null;
const envVar = `FILTER_${field.toUpperCase()}_${state.toUpperCase()}`;
const resources = _loadFromEnv(envVar, configResources, _typeCasts.list);
if (resources) {
if (resourceFilters[field]) {
throw new Error('You cannot define both an allow and a deny list for an event field.');
}
resourceFilters[field] = { [state]: new Set(resources) };
}
},
),
);
return resourceFilters;
}
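Illustratively, given the logic above, a config such as the following would parse into per-field allow/deny sets, while defining both states for one field throws (values are placeholders):

const filters = Config._parseResourceFilters({
    allow: { bucket: ['important-bucket'] },
    deny: { account: ['123456789012'] },
});
// => {
//      bucket: { allow: Set { 'important-bucket' } },
//      account: { deny: Set { '123456789012' } },
//    }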
_parseConfig(config) {
const parsedConfig = {};
@ -246,25 +291,30 @@ class Config {
throw new Error('bad config: both certFilePaths.key and certFilePaths.cert must be defined');
}
parsedConfig.redis = Config._parseRedisConfig(config.redis);
parsedConfig.redis = Config._parseRedisConfig('REDIS', config.redis);
parsedConfig.cache = Config._parseRedisConfig(config.localCache);
parsedConfig.cache = Config._parseRedisConfig('REDIS_CACHE', config.localCache);
parsedConfig.cache.backend = _loadFromEnv('CACHE_BACKEND', config.cacheBackend);
const warp10Conf = {
readToken: _loadFromEnv('WARP10_READ_TOKEN', config.warp10.readToken),
writeToken: _loadFromEnv('WARP10_WRITE_TOKEN', config.warp10.writeToken),
requestTimeout: _loadFromEnv('WARP10_REQUEST_TIMEOUT', config.warp10.requestTimeout, _typeCasts.int),
connectTimeout: _loadFromEnv('WARP10_CONNECT_TIMEOUT', config.warp10.connectTimeout, _typeCasts.int),
};
parsedConfig.warp10 = warp10Conf;
if (Array.isArray(config.warp10.hosts) || _definedInEnv('WARP10_HOSTS')) {
warp10Conf.hosts = _loadFromEnv('WARP10_HOSTS', config.warp10.hosts, _typeCasts.serverList);
warp10Conf.hosts = _loadFromEnv('WARP10_HOSTS', config.warp10.hosts, _typeCasts.nodeList);
} else {
warp10Conf.host = _loadFromEnv('WARP10_HOST', config.warp10.host);
warp10Conf.port = _loadFromEnv('WARP10_PORT', config.warp10.port, _typeCasts.int);
warp10Conf.hosts = [{
host: _loadFromEnv('WARP10_HOST', config.warp10.host),
port: _loadFromEnv('WARP10_PORT', config.warp10.port, _typeCasts.int),
nodeId: _loadFromEnv('WARP10_NODE_ID', config.warp10.nodeId),
}];
}
parsedConfig.warp10 = warp10Conf;
parsedConfig.logging = {
level: parsedConfig.development
? 'debug'
@ -280,6 +330,7 @@ class Config {
parsedConfig.snapshotSchedule = _loadFromEnv('SNAPSHOT_SCHEDULE', config.snapshotSchedule);
parsedConfig.repairSchedule = _loadFromEnv('REPAIR_SCHEDULE', config.repairSchedule);
parsedConfig.reindexSchedule = _loadFromEnv('REINDEX_SCHEDULE', config.reindexSchedule);
parsedConfig.diskUsageSchedule = _loadFromEnv('DISK_USAGE_SCHEDULE', config.diskUsageSchedule);
parsedConfig.ingestionLagSeconds = _loadFromEnv(
'INGESTION_LAG_SECONDS',
@ -292,6 +343,34 @@ class Config {
_typeCasts.int,
);
const diskUsage = {
path: _loadFromEnv('DISK_USAGE_PATH', (config.diskUsage || {}).path),
hardLimit: _loadFromEnv('DISK_USAGE_HARD_LIMIT', (config.diskUsage || {}).hardLimit),
retentionDays: _loadFromEnv(
'METRIC_RETENTION_PERIOD',
(config.diskUsage || {}).retentionDays, _typeCasts.int,
),
expirationEnabled: _loadFromEnv(
'METRIC_EXPIRATION_ENABLED',
(config.diskUsage || {}).expirationEnabled, _typeCasts.bool,
),
};
if (diskUsage.hardLimit !== undefined) {
diskUsage.hardLimit = parseDiskSizeSpec(diskUsage.hardLimit);
}
if (!diskUsage.path && diskUsage.hardLimit !== undefined) {
throw Error('You must specify diskUsage.path to monitor for disk usage');
} else if (diskUsage.path && diskUsage.hardLimit === undefined) {
throw Error('diskUsage.hardLimit must be specified');
} else if (diskUsage.expirationEnabled && diskUsage.retentionDays === undefined) {
throw Error('diskUsage.retentionDays must be specified');
}
diskUsage.enabled = diskUsage.path !== undefined;
parsedConfig.diskUsage = diskUsage;
parsedConfig.vaultd = {
host: _loadFromEnv('VAULT_HOST', config.vaultd.host),
port: _loadFromEnv('VAULT_PORT', config.vaultd.port),
@ -299,6 +378,24 @@ class Config {
parsedConfig.bucketd = _loadFromEnv('BUCKETD_BOOTSTRAP', config.bucketd, _typeCasts.serverList);
parsedConfig.serviceUser = {
arn: _loadFromEnv('SERVICE_USER_ARN', config.serviceUser.arn),
enabled: _loadFromEnv('SERVICE_USER_ENABLED', config.serviceUser.enabled, _typeCasts.bool),
};
parsedConfig.filter = Config._parseResourceFilters(config.filter);
parsedConfig.metrics = {
enabled: _loadFromEnv('METRICS_ENABLED', config.metrics.enabled, _typeCasts.bool),
host: _loadFromEnv('METRICS_HOST', config.metrics.host),
ingestPort: _loadFromEnv('METRICS_PORT_INGEST', config.metrics.ingestPort, _typeCasts.int),
checkpointPort: _loadFromEnv('METRICS_PORT_CHECKPOINT', config.metrics.checkpointPort, _typeCasts.int),
snapshotPort: _loadFromEnv('METRICS_PORT_SNAPSHOT', config.metrics.snapshotPort, _typeCasts.int),
diskUsagePort: _loadFromEnv('METRICS_PORT_DISK_USAGE', config.metrics.diskUsagePort, _typeCasts.int),
reindexPort: _loadFromEnv('METRICS_PORT_REINDEX', config.metrics.reindexPort, _typeCasts.int),
repairPort: _loadFromEnv('METRICS_PORT_REPAIR', config.metrics.repairPort, _typeCasts.int),
};
return parsedConfig;
}

View File

@ -1,9 +1,23 @@
const Joi = require('@hapi/joi');
const { allowedFilterFields, allowedFilterStates } = require('../constants');
const backoffSchema = Joi.object({
min: Joi.number(),
max: Joi.number(),
deadline: Joi.number(),
jitter: Joi.number(),
factor: Joi.number(),
});
const redisRetrySchema = Joi.object({
connectBackoff: backoffSchema,
});
const redisServerSchema = Joi.object({
host: Joi.string(),
port: Joi.number(),
password: Joi.string().allow(''),
retry: redisRetrySchema,
});
const redisSentinelSchema = Joi.object({
@ -14,6 +28,7 @@ const redisSentinelSchema = Joi.object({
})),
password: Joi.string().default('').allow(''),
sentinelPassword: Joi.string().default('').allow(''),
retry: redisRetrySchema,
});
const warp10SingleHost = Joi.object({
@ -27,6 +42,7 @@ const warp10MultiHost = Joi.object({
hosts: Joi.array().items(Joi.object({
host: Joi.alternatives(Joi.string().hostname(), Joi.string().ip()),
port: Joi.number().port(),
nodeId: Joi.string(),
})),
readToken: Joi.string(),
writeToken: Joi.string(),
@ -77,7 +93,38 @@ const schema = Joi.object({
snapshotSchedule: Joi.string(),
repairSchedule: Joi.string(),
reindexSchedule: Joi.string(),
diskUsageSchedule: Joi.string(),
diskUsage: Joi.object({
path: Joi.string(),
retentionDays: Joi.number().greater(0),
expirationEnabled: Joi.boolean(),
hardLimit: Joi.string(),
}),
serviceUser: Joi.object({
arn: Joi.string(),
enabled: Joi.boolean(),
}),
filter: Joi.object(allowedFilterStates.reduce(
(filterObj, state) => {
filterObj[state] = allowedFilterFields.reduce(
(stateObj, field) => {
stateObj[field] = Joi.array().items(Joi.string());
return stateObj;
}, {},
);
return filterObj;
}, {},
)),
metrics: {
enabled: Joi.boolean(),
host: Joi.string(),
ingestPort: Joi.number().port(),
checkpointPort: Joi.number().port(),
snapshotPort: Joi.number().port(),
diskUsagePort: Joi.number().port(),
reindexPort: Joi.number().port(),
repairPort: Joi.number().port(),
},
});
module.exports = schema;

View File

@ -19,17 +19,23 @@ const constants = {
'createBucket',
'deleteBucket',
'deleteBucketCors',
'deleteBucketEncryption',
'deleteBucketLifecycle',
'deleteBucketReplication',
'deleteBucketTagging',
'deleteBucketWebsite',
'deleteObject',
'deleteObjectTagging',
'getBucketAcl',
'getBucketCors',
'getBucketEncryption',
'getBucketLifecycle',
'getBucketLocation',
'getBucketNotification',
'getBucketObjectLock',
'getBucketReplication',
'getBucketVersioning',
'getBucketTagging',
'getBucketWebsite',
'getObject',
'getObjectAcl',
@ -45,18 +51,23 @@ const constants = {
'multiObjectDelete',
'putBucketAcl',
'putBucketCors',
'putBucketEncryption',
'putBucketLifecycle',
'putBucketNotification',
'putBucketObjectLock',
'putBucketReplication',
'putBucketVersioning',
'putBucketTagging',
'putBucketWebsite',
'putData',
'putDeleteMarkerObject',
'putObject',
'putObjectAcl',
'putObjectLegalHold',
'putObjectRetention',
'putObjectTagging',
'replicateDelete',
'replicateObject',
'replicateTags',
'uploadPart',
'uploadPartCopy',
],
@ -96,6 +107,21 @@ const constants = {
counterBaseValueExpiration: 86400, // 24hrs
keyVersionSplitter: String.fromCharCode(0),
migrationChunksize: 500,
migrationOpTranslationMap: {
listBucketMultipartUploads: 'listMultipartUploads',
},
ingestionOpTranslationMap: {
putDeleteMarkerObject: 'deleteObject',
},
expirationChunkDuration: 900000000, // 15 minutes in microseconds
allowedFilterFields: [
'operationId',
'location',
'account',
'user',
'bucket',
],
allowedFilterStates: ['allow', 'deny'],
};
constants.operationToResponse = constants.operations

View File

@ -1,7 +1,7 @@
{
"AccessDenied": {
"code": 403,
"description": "Access denied"
"description": "Access Denied"
},
"InternalError": {
"code": 500,

View File

@ -1,5 +1,5 @@
const BucketClientInterface = require('arsenal/lib/storage/metadata/bucketclient/BucketClientInterface');
const bucketclient = require('bucketclient');
const { BucketClientInterface } = require('arsenal').storage.metadata.bucketclient;
const config = require('../config');
const { LoggerContext } = require('../utils');
@ -10,7 +10,7 @@ const moduleLogger = new LoggerContext({
const params = {
bucketdBootstrap: config.bucketd,
https: config.https,
https: config.tls,
};
module.exports = new BucketClientInterface(params, bucketclient, moduleLogger);

View File

@ -1,16 +1,25 @@
/* eslint-disable no-restricted-syntax */
const { usersBucket, splitter: mdKeySplitter, mpuBucketPrefix } = require('arsenal').constants;
const arsenal = require('arsenal');
const async = require('async');
const metadata = require('./client');
const { LoggerContext, logger } = require('../utils');
const { keyVersionSplitter } = require('../constants');
const { usersBucket, splitter: mdKeySplitter, mpuBucketPrefix } = arsenal.constants;
const { BucketInfo } = arsenal.models;
const moduleLogger = new LoggerContext({
module: 'metadata.client',
});
const ebConfig = {
times: 10,
interval: retryCount => 50 * (2 ** retryCount),
};
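The `ebConfig` above drives `async.retryable` with an exponential schedule; per the async library's convention the interval function receives the retry count starting at 1, so the waits double from 100ms upward across the 10 attempts. A quick sketch of the schedule:

for (let retryCount = 1; retryCount < 10; retryCount += 1) {
    // retry 1 waits 100ms, retry 2 waits 200ms, ... retry 9 waits 25600ms
    console.log(`retry ${retryCount}: wait ${50 * (2 ** retryCount)}ms`);
}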
const PAGE_SIZE = 1000;
function _listingWrapper(bucket, params) {
async function _listingWrapper(bucket, params) {
return new Promise(
(resolve, reject) => metadata.listObject(
bucket,
@ -37,7 +46,7 @@ function _listObject(bucket, prefix, hydrateFunc) {
try {
// eslint-disable-next-line no-await-in-loop
res = await _listingWrapper(bucket, { ...listingParams, gt });
res = await async.retryable(ebConfig, _listingWrapper)(bucket, { ...listingParams, gt });
} catch (error) {
moduleLogger.error('Error during listing', { error });
throw error;
@ -99,7 +108,7 @@ function bucketExists(bucket) {
bucket,
logger.newRequestLogger(),
err => {
if (err && !err.NoSuchBucket) {
if (err && (!err.is || !err.is.NoSuchBucket)) {
reject(err);
return;
}
@ -108,9 +117,25 @@ function bucketExists(bucket) {
));
}
function getBucket(bucket) {
return new Promise((resolve, reject) => {
metadata.getBucketAttributes(
bucket,
logger.newRequestLogger(), (err, data) => {
if (err) {
reject(err);
return;
}
resolve(BucketInfo.fromObj(data));
},
);
});
}
module.exports = {
listBuckets,
listObjects,
listMPUs,
bucketExists,
getBucket,
};

View File

@ -3,6 +3,7 @@ const Joi = require('@hapi/joi');
const { buildModel } = require('./Base');
const { apiOperations } = require('../server/spec');
const ResponseContainer = require('./ResponseContainer');
const { httpRequestDurationSeconds } = require('../server/metrics');
const apiTags = Object.keys(apiOperations);
const apiOperationIds = Object.values(apiOperations)
@ -21,6 +22,7 @@ const contextSchema = {
logger: Joi.any(),
request: Joi.any(),
results: Joi.any(),
requestTimer: Joi.any(),
};
const RequestContextModel = buildModel('RequestContext', contextSchema);
@ -34,6 +36,10 @@ class RequestContext extends RequestContextModel {
const tag = request.swagger.operation['x-router-controller'];
const { operationId } = request.swagger.operation;
const requestTimer = tag !== 'internal'
? httpRequestDurationSeconds.startTimer({ action: operationId })
: null;
request.logger.logger.addDefaultFields({
tag,
operationId,
@ -50,6 +56,7 @@ class RequestContext extends RequestContextModel {
encrypted,
results: new ResponseContainer(),
logger: request.logger,
requestTimer,
});
}

View File

@ -2,9 +2,12 @@ const EventEmitter = require('events');
const { callbackify, promisify } = require('util');
const IORedis = require('ioredis');
const { jsutil } = require('arsenal');
const BackOff = require('backo');
const { whilst } = require('async');
const errors = require('./errors');
const { LoggerContext, asyncOrCallback } = require('./utils');
const { LoggerContext } = require('./utils/log');
const { asyncOrCallback } = require('./utils/func');
const moduleLogger = new LoggerContext({
module: 'redis',
@ -51,7 +54,10 @@ class RedisClient extends EventEmitter {
Object.values(this._inFlightTimeouts)
.forEach(clearTimeout);
}
await this._redis.quit();
if (this._redis !== null) {
await this._redis.quit();
this._redis = null;
}
}, callback);
}
@ -65,6 +71,7 @@ class RedisClient extends EventEmitter {
this._redis.off('connect', this._onConnect);
this._redis.off('ready', this._onReady);
this._redis.off('error', this._onError);
this._redis.disconnect();
}
this._isConnected = false;
this._isReady = false;
@ -99,53 +106,96 @@ class RedisClient extends EventEmitter {
}
_onError(error) {
this._isReady = false;
moduleLogger.error('error connecting to redis', { error });
this.emit('error', error);
if (this.listenerCount('error') > 0) {
this.emit('error', error);
}
}
_createCommandTimeout() {
let timer;
let onTimeout;
const cancelTimeout = jsutil.once(() => {
clearTimeout(timer);
this.off('timeout', onTimeout);
this._inFlightTimeouts.delete(timer);
});
const timeout = new Promise((_, reject) => {
timer = setTimeout(
() => {
this.emit('timeout');
this._initClient();
},
COMMAND_TIMEOUT,
);
timer = setTimeout(this.emit.bind(this, 'timeout'), COMMAND_TIMEOUT);
this._inFlightTimeouts.add(timer);
this.once('timeout', () => {
onTimeout = () => {
moduleLogger.warn('redis command timed out');
cancelTimeout();
this._initClient();
reject(errors.OperationTimedOut);
});
};
this.once('timeout', onTimeout);
});
return { timeout, cancelTimeout };
}
async _call(asyncFunc) {
const funcPromise = asyncFunc(this._redis);
if (!this._useTimeouts) {
// If timeouts are disabled simply return the Promise
return funcPromise;
}
const start = Date.now();
const { connectBackoff } = this._redisOptions.retry || {};
const backoff = new BackOff(connectBackoff);
const timeoutMs = (connectBackoff || {}).deadline || 2000;
let retried = false;
const { timeout, cancelTimeout } = this._createCommandTimeout();
return new Promise((resolve, reject) => {
whilst(
next => { // WARNING: test is asynchronous in `async` v3
if (!connectBackoff && !this.isReady) {
moduleLogger.warn('redis not ready and backoff is not configured');
}
process.nextTick(next, null, !!connectBackoff && !this.isReady);
},
next => {
retried = true;
try {
// timeout always rejects so we can just return
return await Promise.race([funcPromise, timeout]);
} finally {
cancelTimeout();
}
if ((Date.now() - start) > timeoutMs) {
moduleLogger.error('redis still not ready after max wait, giving up', { timeoutMs });
return next(errors.InternalError.customizeDescription(
'redis client is not ready',
));
}
const backoffDurationMs = backoff.duration();
moduleLogger.error('redis not ready, retrying', { backoffDurationMs });
return setTimeout(next, backoffDurationMs);
},
err => {
if (err) {
return reject(err);
}
if (retried) {
moduleLogger.info('redis connection recovered', {
recoveryOverheadMs: Date.now() - start,
});
}
const funcPromise = asyncFunc(this._redis);
if (!this._useTimeouts) {
// If timeouts are disabled simply return the Promise
return resolve(funcPromise);
}
const { timeout, cancelTimeout } = this._createCommandTimeout();
try {
// timeout always rejects so we can just return
return resolve(Promise.race([funcPromise, timeout]));
} finally {
cancelTimeout();
}
},
);
});
}
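A hypothetical `retry` block for the redis configuration consumed by `_call` above; `backo` takes min/max/factor/jitter, while `deadline` is read directly by `_call` as the total wait budget (all values are placeholders):

const redisConfig = {
    host: '127.0.0.1',
    port: 6379,
    retry: {
        connectBackoff: {
            min: 10,        // initial wait in ms
            max: 1000,      // cap on a single wait
            factor: 1.5,    // growth per attempt
            jitter: 0.1,    // randomize each wait by +/-10%
            deadline: 2000, // give up after ~2s of waiting overall
        },
    },
};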
call(func, callback) {

View File

@ -0,0 +1,14 @@
const { collectDefaultMetrics, register } = require('prom-client');
collectDefaultMetrics({
timeout: 10000,
gcDurationBuckets: [0.001, 0.01, 0.1, 1, 2, 5],
});
async function prometheusMetrics(ctx) {
// eslint-disable-next-line no-param-reassign
ctx.results.statusCode = 200;
ctx.results.body = await register.metrics();
}
module.exports = prometheusMetrics;
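A minimal invocation sketch for the handler above, with a bare stand-in context:

const ctx = { results: {} };
prometheusMetrics(ctx).then(() => {
    // ctx.results.body now holds the Prometheus text exposition format
    console.log(ctx.results.statusCode); // 200
});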

View File

@ -1,9 +1,8 @@
const errors = require('../../../errors');
const { serviceToWarp10Label } = require('../../../constants');
const { client: warp10 } = require('../../../warp10');
const { clients: warp10Clients } = require('../../../warp10');
const { client: cache } = require('../../../cache');
const { now } = require('../../../utils');
const config = require('../../../config');
const { now, iterIfError } = require('../../../utils');
/**
*
@ -30,15 +29,18 @@ async function getStorage(ctx, params) {
} else {
const labelName = serviceToWarp10Label[params.level];
const labels = { [labelName]: resource };
const options = {
params: {
end: now(),
labels,
node: config.nodeId,
},
macro: 'utapi/getMetricsAt',
};
const res = await warp10.exec(options);
const res = await iterIfError(warp10Clients, warp10 => {
const options = {
params: {
end: now(),
labels,
node: warp10.nodeId,
},
macro: 'utapi/getMetricsAt',
};
return warp10.exec(options);
}, error => ctx.logger.error('error while fetching metrics', { error }));
if (res.result.length === 0) {
ctx.logger.error('unable to retrieve metrics', { level, resource });
@ -52,7 +54,7 @@ async function getStorage(ctx, params) {
ctx.results.statusCode = 200;
ctx.results.body = {
storageUtilized,
storageUtilized: Math.max(storageUtilized, 0),
resource,
level,
};
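`iterIfError` is not shown in this diff; a plausible sketch of its contract, assumed from the call sites, is to try each warp10 client in order, report failures through the callback, and rethrow only when every client has failed:

async function iterIfError(clients, func, onError) {
    let lastError;
    for (const client of clients) {
        try {
            // eslint-disable-next-line no-await-in-loop
            return await func(client);
        } catch (error) {
            onError(error);
            lastError = error;
        }
    }
    throw lastError;
}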

View File

@ -2,6 +2,7 @@ const errors = require('../../../errors');
const { UtapiMetric } = require('../../../models');
const { client: cacheClient } = require('../../../cache');
const { convertTimestamp } = require('../../../utils');
const { ingestionOpTranslationMap } = require('../../../constants');
async function ingestMetric(ctx, params) {
let metrics;
@ -9,6 +10,7 @@ async function ingestMetric(ctx, params) {
metrics = params.body.map(m => new UtapiMetric({
...m,
timestamp: convertTimestamp(m.timestamp),
operationId: ingestionOpTranslationMap[m.operationId] || m.operationId,
}));
} catch (error) {
throw errors.InvalidRequest;

View File

@ -1,8 +1,7 @@
const errors = require('../../../errors');
const { serviceToWarp10Label, operationToResponse } = require('../../../constants');
const { convertTimestamp } = require('../../../utils');
const { client: warp10 } = require('../../../warp10');
const config = require('../../../config');
const { convertTimestamp, iterIfError } = require('../../../utils');
const { clients: warp10Clients } = require('../../../warp10');
const emptyOperationsResponse = Object.values(operationToResponse)
.reduce((prev, key) => {
@ -17,37 +16,74 @@ const metricResponseKeys = {
service: 'serviceName',
};
function positiveOrZero(value) {
return Math.max(value, 0);
}
async function listMetric(ctx, params) {
const labelName = serviceToWarp10Label[params.level];
const resources = params.body[params.level];
let [start, end] = params.body.timeRange;
if (end === undefined) {
end = Date.now();
}
let results;
try {
// A separate request will be made to warp 10 per requested resource
results = await Promise.all(
resources.map(async ({ resource, id }) => {
const labels = { [labelName]: id };
const res = await iterIfError(warp10Clients, warp10 => {
const options = {
params: {
start: convertTimestamp(start).toString(),
end: convertTimestamp(end).toString(),
labels,
node: warp10.nodeId,
},
macro: 'utapi/getMetrics',
};
return warp10.exec(options);
}, error => ctx.logger.error('error during warp 10 request', {
error,
requestParams: {
start,
end,
labels,
},
}));
if (res.result.length === 0) {
ctx.logger.error('unable to retrieve metrics', { resource, type: params.level });
throw errors.InternalError;
}
const rawMetrics = JSON.parse(res.result[0]);
// Due to various error cases it is possible for metrics in utapi to go negative.
// As this is nonsensical to the user we replace any negative values with zero.
const metrics = {
storageUtilized: rawMetrics.storageUtilized.map(positiveOrZero),
numberOfObjects: rawMetrics.numberOfObjects.map(positiveOrZero),
incomingBytes: positiveOrZero(rawMetrics.incomingBytes),
outgoingBytes: positiveOrZero(rawMetrics.outgoingBytes),
operations: rawMetrics.operations,
};
return {
resource,
metrics,
};
}),
);
} catch (error) {
ctx.logger.error('error fetching metrics from warp10', { error });
throw errors.InternalError;
}
// Convert the results from warp10 into the expected response format
const resp = results
@ -60,7 +96,7 @@ async function listMetric(ctx, params) {
const metric = {
...result.metrics,
timeRange: params.body.timeRange,
timeRange: [start, end],
operations: {
...emptyOperationsResponse,
...operations,

View File

@ -28,6 +28,7 @@ class UtapiServer extends Process {
app.use(middleware.loggerMiddleware);
await initializeOasTools(spec, app);
app.use(middleware.errorMiddleware);
app.use(middleware.httpMetricsMiddleware);
app.use(middleware.responseLoggerMiddleware);
return app;
}
@ -35,7 +36,7 @@ class UtapiServer extends Process {
static _createHttpsAgent() {
const conf = {
ciphers: ciphers.ciphers,
dhparam,
dhparam: dhparam.dhparam,
cert: config.tls.cert,
key: config.tls.key,
ca: config.tls.ca ? [config.tls.ca] : null,

libV2/server/metrics.js Normal file
View File

@ -0,0 +1,20 @@
const promClient = require('prom-client');
const httpRequestsTotal = new promClient.Counter({
name: 's3_utapi_http_requests_total',
help: 'Total number of HTTP requests',
labelNames: ['action', 'code'],
});
const httpRequestDurationSeconds = new promClient.Histogram({
name: 's3_utapi_http_request_duration_seconds',
help: 'Duration of HTTP requests in seconds',
labelNames: ['action', 'code'],
// buckets for response time from 0.1ms to 60s
buckets: [0.0001, 0.005, 0.015, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0, 5.0, 15.0, 30.0, 60.0],
});
module.exports = {
httpRequestDurationSeconds,
httpRequestsTotal,
};
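A short sketch of how these instruments are driven (label values illustrative; the real call site is the httpMetricsMiddleware change below): one count and one duration observation per finished request.

    const { httpRequestsTotal, httpRequestDurationSeconds } = require('./libV2/server/metrics'); // path assumed

    function recordRequest(action, code, durationSeconds) {
        httpRequestsTotal.labels({ action, code }).inc(1); // e.g. action 'listMetrics', code 200
        httpRequestDurationSeconds.labels({ action, code }).observe(durationSeconds);
    }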

View File

@ -5,7 +5,8 @@ const { ipCheck } = require('arsenal');
const config = require('../config');
const { logger, buildRequestLogger } = require('../utils');
const errors = require('../errors');
const { authenticateRequest, vault } = require('../vault');
const { translateAndAuthorize } = require('../vault');
const metricHandlers = require('./metrics');
const oasOptions = {
controllers: path.join(__dirname, './API/'),
@ -44,41 +45,62 @@ function loggerMiddleware(req, res, next) {
return next();
}
function responseLoggerMiddleware(req, res, next) {
const info = {
httpCode: res.statusCode,
httpMessage: res.statusMessage,
};
req.logger.end('finished handling request', info);
if (next !== undefined) {
next();
}
}
function httpMetricsMiddleware(request, response, next) {
// If the request.ctx is undefined then this is an internal oasTools request (/_/docs)
// No metrics should be pushed
if (config.metrics.enabled && request.ctx && request.ctx.tag !== 'internal') {
metricHandlers.httpRequestsTotal
.labels({
action: request.ctx.operationId,
code: response.statusCode,
}).inc(1);
request.ctx.requestTimer({ code: response.statusCode });
}
if (next) {
next();
}
}
// next is purposely not called as all error responses are handled here
// eslint-disable-next-line no-unused-vars
function errorMiddleware(err, req, res, next) {
let statusCode = err.code || 500;
let code = err.message || 'InternalError';
let message = err.description || 'Internal Error';
// failed request validation by oas-tools
if (err.failedValidation) {
// You can't actually use destructuring here
/* eslint-disable prefer-destructuring */
statusCode = errors.InvalidRequest.code;
code = errors.InvalidRequest.message;
message = errors.InvalidRequest.description;
/* eslint-enable prefer-destructuring */
}
if (!err.utapiError && !config.development) {
// Make sure internal errors don't leak when not in development
code = 'InternalError';
message = 'Internal Error';
}
res.status(statusCode).send({
code,
message,
});
responseLoggerMiddleware(req, res, () => httpMetricsMiddleware(req, res));
}
// eslint-disable-next-line no-unused-vars
@ -95,7 +117,7 @@ async function authV4Middleware(request, response, params) {
switch (request.ctx.operationId) {
case 'listMetrics':
requestedResources = params.body[params.level];
action = params.Action.value;
action = params.Action;
break;
default:
@ -111,9 +133,13 @@ async function authV4Middleware(request, response, params) {
let authorizedResources;
try {
[passed, authorizedResources] = await authenticateRequest(request, action, params.level, requestedResources);
[passed, authorizedResources] = await translateAndAuthorize(request, action, params.level, requestedResources);
} catch (error) {
request.logger.error('error during authentication', { error });
// rethrow any access denied errors
if ((error.is && error.is.AccessDenied) || (error.utapiError && error.AccessDenied)) {
throw error;
}
throw errors.InternalError;
}
@ -122,17 +148,14 @@ async function authV4Middleware(request, response, params) {
throw errors.AccessDenied;
}
if (params.level === 'accounts') {
request.logger.debug('converting account ids to canonical ids');
authorizedResources = await vault.getCanonicalIds(
authorizedResources,
request.logger.logger,
);
}
// authorizedResources is only defined on non-account credentials
if (request.ctx.operationId === 'listMetrics' && authorizedResources !== undefined) {
switch (request.ctx.operationId) {
case 'listMetrics':
params.body[params.level] = authorizedResources;
break;
default:
[params.resource] = authorizedResources;
break;
}
}
@ -153,5 +176,6 @@ module.exports = {
responseLoggerMiddleware,
authV4Middleware,
clientIpLimitMiddleware,
httpMetricsMiddleware,
},
};

View File

@ -1,11 +1,12 @@
const assert = require('assert');
const cron = require('node-schedule');
const cronparser = require('cron-parser');
const promClient = require('prom-client');
const { DEFAULT_METRICS_ROUTE } = require('arsenal').network.probe.ProbeServer;
const config = require('../config');
const { client: cacheClient } = require('../cache');
const Process = require('../process');
const { LoggerContext } = require('../utils');
const { Warp10Client } = require('../warp10');
const { LoggerContext, iterIfError, startProbeServer } = require('../utils');
const logger = new LoggerContext({
module: 'BaseTask',
@ -16,30 +17,104 @@ class Now {}
class BaseTask extends Process {
constructor(options) {
super();
assert.notStrictEqual(options, undefined);
assert(Array.isArray(options.warp10), 'you must provide an array of warp 10 clients');
this._cache = cacheClient;
this._warp10 = new Warp10Client({
...config.warp10,
...((options && options.warp10) || {}),
});
this._warp10Clients = options.warp10;
this._scheduler = null;
this._defaultSchedule = Now;
this._defaultLag = 0;
this._nodeId = config.nodeId;
this._enableMetrics = options.enableMetrics || false;
this._metricsHost = options.metricsHost || 'localhost';
this._metricsPort = options.metricsPort || 9001;
this._metricsHandlers = null;
this._probeServer = null;
}
async _setup(includeDefaultOpts = true) {
if (includeDefaultOpts) {
this._program
.option('-n, --now', 'Execute the task immediately and then exit. Overrides --schedule.')
.option(
'-s, --schedule <crontab>',
'Execute task using this crontab. Overrides configured schedule',
value => {
cronparser.parseExpression(value);
return value;
},
)
.option('-l, --lag <lag>', 'Set a custom lag time in seconds', v => parseInt(v, 10))
.option('-n, --node-id <id>', 'Set a custom node id');
}
if (this._enableMetrics) {
promClient.collectDefaultMetrics({
timeout: 10000,
gcDurationBuckets: [0.001, 0.01, 0.1, 1, 2, 5],
});
this._metricsHandlers = {
...this._registerDefaultMetricHandlers(),
...this._registerMetricHandlers(),
};
await this._createProbeServer();
}
}
_registerDefaultMetricHandlers() {
const taskName = this.constructor.name;
// Get the name of our subclass in snake case format eg BaseClass => _base_class
const taskNameSnake = taskName.replace(/[A-Z]/g, letter => `_${letter.toLowerCase()}`);
const executionDuration = new promClient.Gauge({
name: `s3_utapi${taskNameSnake}_duration_seconds`,
help: `Execution time of the ${taskName} task`,
labelNames: ['origin', 'containerName'],
});
const executionAttempts = new promClient.Counter({
name: `s3_utapi${taskNameSnake}_attempts_total`,
help: `Total number of attempts to execute the ${taskName} task`,
labelNames: ['origin', 'containerName'],
});
const executionFailures = new promClient.Counter({
name: `s3_utapi${taskNameSnake}_failures_total`,
help: `Total number of failures executing the ${taskName} task`,
labelNames: ['origin', 'containerName'],
});
return {
executionDuration,
executionAttempts,
executionFailures,
};
}
// eslint-disable-next-line class-methods-use-this
_registerMetricHandlers() {
return {};
}
async _createProbeServer() {
this._probeServer = await startProbeServer({
bindAddress: this._metricsHost,
port: this._metricsPort,
});
this._probeServer.addHandler(
DEFAULT_METRICS_ROUTE,
(res, log) => {
log.debug('metrics requested');
res.writeHead(200, {
'Content-Type': promClient.register.contentType,
});
promClient.register.metrics().then(metrics => {
res.end(metrics);
});
},
);
}
get schedule() {
@ -59,13 +134,6 @@ class BaseTask extends Process {
return this._defaultLag;
}
get nodeId() {
if (this._program.nodeId) {
return this._program.nodeId;
}
return this._nodeId;
}
async _start() {
await this._cache.connect();
if (this.schedule === Now) {
@ -87,12 +155,23 @@ class BaseTask extends Process {
}
async execute() {
let endTimer;
if (this._enableMetrics) {
endTimer = this._metricsHandlers.executionDuration.startTimer();
this._metricsHandlers.executionAttempts.inc(1);
}
try {
const timestamp = new Date() * 1000; // Timestamp in microseconds;
const laggedTimestamp = timestamp - (this.lag * 1000000);
await this._execute(laggedTimestamp);
} catch (error) {
logger.error('Error during task execution', { error });
this._metricsHandlers.executionFailures.inc(1);
}
if (this._enableMetrics) {
endTimer();
}
}
@ -102,8 +181,28 @@ class BaseTask extends Process {
}
async _join() {
if (this._probeServer !== null) {
this._probeServer.stop();
}
return this._cache.disconnect();
}
withWarp10(func, onError) {
return iterIfError(this._warp10Clients, func, error => {
if (onError) {
onError(error);
} else {
const {
name, code, message, stack,
} = error;
logger.error('error during warp 10 request', {
error: {
name, code, errmsg: message, stack: name !== 'RequestError' ? stack : undefined,
},
});
}
});
}
}
module.exports = BaseTask;
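To make the task-metrics pattern concrete before the concrete tasks below, here is a minimal hypothetical subclass (task and metric names invented): extra instruments are returned from _registerMetricHandlers() and pushed from _execute(), while execute() in the base class already records attempts, failures, and duration via the default handlers.

    const promClient = require('prom-client');
    const BaseTask = require('./BaseTask');

    // usage sketch: new ExampleTask({ warp10: [/* Warp10Client instances */], enableMetrics: true })
    class ExampleTask extends BaseTask {
        // eslint-disable-next-line class-methods-use-this
        _registerMetricHandlers() {
            return {
                processed: new promClient.Counter({
                    name: 's3_utapi_example_task_processed_total', // hypothetical metric
                    help: 'Total number of items processed',
                    labelNames: ['origin', 'containerName'],
                }),
            };
        }

        async _execute() {
            // ... task work, typically via this.withWarp10(...) ...
            if (this._enableMetrics) {
                this._metricsHandlers.processed.inc(1);
            }
        }
    }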

View File

@ -1,3 +1,4 @@
const promClient = require('prom-client');
const BaseTask = require('./BaseTask');
const config = require('../config');
const { checkpointLagSecs, indexedEventFields } = require('../constants');
@ -10,30 +11,103 @@ const logger = new LoggerContext({
class CreateCheckpoint extends BaseTask {
constructor(options) {
super({
warp10: {
requestTimeout: 30000,
connectTimeout: 30000,
},
enableMetrics: config.metrics.enabled,
metricsHost: config.metrics.host,
metricsPort: config.metrics.checkpointPort,
...options,
});
this._defaultSchedule = config.checkpointSchedule;
this._defaultLag = checkpointLagSecs;
}
// eslint-disable-next-line class-methods-use-this
_registerMetricHandlers() {
const created = new promClient.Counter({
name: 's3_utapi_create_checkpoint_created_total',
help: 'Total number of checkpoints created',
labelNames: ['origin', 'containerName'],
});
const getLastCheckpoint = this._getLastCheckpoint.bind(this);
const lastCheckpoint = new promClient.Gauge({
name: 's3_utapi_create_checkpoint_last_checkpoint_seconds',
help: 'Timestamp of the last successfully created checkpoint',
labelNames: ['origin', 'containerName'],
async collect() {
try {
const timestamp = await getLastCheckpoint();
if (timestamp !== null) {
this.set(timestamp);
}
} catch (error) {
logger.error('error during metric collection', { error });
}
},
});
return {
created,
lastCheckpoint,
};
}
/**
* Metrics for CreateCheckpoint
* @typedef {Object} CreateCheckpointMetrics
* @property {number} created - Number of checkpoints created
*/
/**
*
* @param {CreateCheckpointMetrics} metrics - Metric values to push
* @returns {undefined}
*/
_pushMetrics(metrics) {
if (!this._enableMetrics) {
return;
}
if (metrics.created !== undefined) {
this._metricsHandlers.created.inc(metrics.created);
}
}
async _getLastCheckpoint() {
const resp = await this.withWarp10(async warp10 => warp10.fetch({
className: 'utapi.checkpoint.master',
labels: {
node: warp10.nodeId,
},
start: 'now',
stop: -1,
}));
if (!resp.result || (resp.result.length === 0 || resp.result[0] === '' || resp.result[0] === '[]')) {
return null;
}
const result = JSON.parse(resp.result[0])[0];
const timestamp = result.v[0][0];
return timestamp / 1000000;// Convert timestamp from microseconds to seconds
}
async _execute(timestamp) {
logger.debug('creating checkpoints', { checkpointTimestamp: timestamp });
const status = await this.withWarp10(async warp10 => {
const params = {
params: {
nodeId: warp10.nodeId,
end: timestamp.toString(),
fields: indexedEventFields,
},
macro: 'utapi/createCheckpoint',
};
return warp10.exec(params);
});
if (status.result[0]) {
logger.info(`created ${status.result[0] || 0} checkpoints`);
this._pushMetrics({ created: status.result[0] });
}
}
}

View File

@ -1,3 +1,4 @@
const promClient = require('prom-client');
const BaseTask = require('./BaseTask');
const config = require('../config');
const { snapshotLagSecs } = require('../constants');
@ -10,29 +11,103 @@ const logger = new LoggerContext({
class CreateSnapshot extends BaseTask {
constructor(options) {
super({
warp10: {
requestTimeout: 30000,
connectTimeout: 30000,
},
enableMetrics: config.metrics.enabled,
metricsHost: config.metrics.host,
metricsPort: config.metrics.snapshotPort,
...options,
});
this._defaultSchedule = config.snapshotSchedule;
this._defaultLag = snapshotLagSecs;
}
// eslint-disable-next-line class-methods-use-this
_registerMetricHandlers() {
const created = new promClient.Counter({
name: 's3_utapi_create_snapshot_created_total',
help: 'Total number of snapshots created',
labelNames: ['origin', 'containerName'],
});
const getLastSnapshot = this._getLastSnapshot.bind(this);
const lastSnapshot = new promClient.Gauge({
name: 's3_utapi_create_snapshot_last_snapshot_seconds',
help: 'Timestamp of the last successfully created snapshot',
labelNames: ['origin', 'containerName'],
async collect() {
try {
const timestamp = await getLastSnapshot();
if (timestamp !== null) {
this.set(timestamp);
}
} catch (error) {
logger.error('error during metric collection', { error });
}
},
});
return {
created,
lastSnapshot,
};
}
/**
* Metrics for CreateSnapshot
* @typedef {Object} CreateSnapshotMetrics
* @property {number} created - Number of snapshots created
*/
/**
*
* @param {CreateSnapshotMetrics} metrics - Metric values to push
* @returns {undefined}
*/
_pushMetrics(metrics) {
if (!this._enableMetrics) {
return;
}
if (metrics.created !== undefined) {
this._metricsHandlers.created.inc(metrics.created);
}
}
async _getLastSnapshot() {
const resp = await this.withWarp10(async warp10 => warp10.fetch({
className: 'utapi.snapshot.master',
labels: {
node: warp10.nodeId,
},
start: 'now',
stop: -1,
}));
if (!resp.result || (resp.result.length === 0 || resp.result[0] === '' || resp.result[0] === '[]')) {
return null;
}
const result = JSON.parse(resp.result[0])[0];
const timestamp = result.v[0][0];
return timestamp / 1000000;// Convert timestamp from microseconds to seconds
}
async _execute(timestamp) {
logger.debug('creating snapshots', { snapshotTimestamp: timestamp });
const status = await this.withWarp10(async warp10 => {
const params = {
params: {
nodeId: warp10.nodeId,
end: timestamp.toString(),
},
macro: 'utapi/createSnapshot',
};
return warp10.exec(params);
});
if (status.result[0]) {
logger.info(`created ${status.result[0]} snapshots`);
this._pushMetrics({ created: status.result[0] });
}
}
}

libV2/tasks/DiskUsage.js Normal file
View File

@ -0,0 +1,300 @@
const async = require('async');
const Path = require('path');
const fs = require('fs');
const promClient = require('prom-client');
const BaseTask = require('./BaseTask');
const config = require('../config');
const { expirationChunkDuration } = require('../constants');
const {
LoggerContext, getFolderSize, formatDiskSize, sliceTimeRange,
} = require('../utils');
const moduleLogger = new LoggerContext({
module: 'MonitorDiskUsage',
path: config.diskUsage.path,
});
const WARN_THRESHOLD = 0.8;
const ACTION_THRESHOLD = 0.95;
class MonitorDiskUsage extends BaseTask {
constructor(options) {
super({
enableMetrics: config.metrics.enabled,
metricsHost: config.metrics.host,
metricsPort: config.metrics.diskUsagePort,
...options,
});
this._defaultSchedule = config.diskUsageSchedule;
this._defaultLag = 0;
this._path = config.diskUsage.path;
this._enabled = config.diskUsage.enabled;
this._expirationEnabled = config.diskUsage.expirationEnabled;
this._metricRetentionMicroSecs = config.diskUsage.retentionDays * 24 * 60 * 60 * 1000000;
this._hardLimit = config.diskUsage.hardLimit || null;
}
async _setup() {
await super._setup();
this._program
.option('--leader', 'Mark this process as the leader for metric expiration.')
.option(
'--lock',
'Manually trigger a lock of the warp 10 database. This will cause all other options to be ignored.',
)
.option(
'--unlock',
'Manually trigger an unlock of the warp 10 database. This will cause all other options to be ignored.',
);
}
// eslint-disable-next-line class-methods-use-this
_registerMetricHandlers() {
const isLocked = new promClient.Gauge({
name: 's3_utapi_monitor_disk_usage_is_locked',
help: 'Indicates whether the monitored warp 10 has had writes disabled',
labelNames: ['origin', 'containerName'],
});
const leveldbBytes = new promClient.Gauge({
name: 's3_utapi_monitor_disk_usage_leveldb_bytes',
help: 'Total bytes used by warp 10 leveldb',
labelNames: ['origin', 'containerName'],
});
const datalogBytes = new promClient.Gauge({
name: 's3_utapi_monitor_disk_usage_datalog_bytes',
help: 'Total bytes used by warp 10 datalog',
labelNames: ['origin', 'containerName'],
});
const hardLimitRatio = new promClient.Gauge({
name: 's3_utapi_monitor_disk_usage_hard_limit_ratio',
help: 'Percent of the hard limit used by warp 10',
labelNames: ['origin', 'containerName'],
});
const hardLimitSetting = new promClient.Gauge({
name: 's3_utapi_monitor_disk_usage_hard_limit_bytes',
help: 'The hard limit setting in bytes',
labelNames: ['origin', 'containerName'],
});
return {
isLocked,
leveldbBytes,
datalogBytes,
hardLimitRatio,
hardLimitSetting,
};
}
/**
* Metrics for MonitorDiskUsage
* @typedef {Object} MonitorDiskUsageMetrics
* @property {boolean} isLocked - Indicates if writes have been disabled for the monitored warp10
* @property {number} leveldbBytes - Total bytes used by warp 10 leveldb
* @property {number} datalogBytes - Total bytes used by warp 10 datalog
* @property {number} hardLimitRatio - Percent of the hard limit used by warp 10
* @property {number} hardLimitSetting - The hard limit setting in bytes
*/
/**
*
* @param {MonitorDiskUsageMetrics} metrics - Metric values to push
* @returns {undefined}
*/
_pushMetrics(metrics) {
if (!this._enableMetrics) {
return;
}
if (metrics.isLocked !== undefined) {
this._metricsHandlers.isLocked.set(metrics.isLocked ? 1 : 0);
}
if (metrics.leveldbBytes !== undefined) {
this._metricsHandlers.leveldbBytes.set(metrics.leveldbBytes);
}
if (metrics.datalogBytes !== undefined) {
this._metricsHandlers.datalogBytes.set(metrics.datalogBytes);
}
if (metrics.hardLimitRatio !== undefined) {
this._metricsHandlers.hardLimitRatio.set(metrics.hardLimitRatio);
}
if (metrics.hardLimitSetting !== undefined) {
this._metricsHandlers.hardLimitSetting.set(metrics.hardLimitSetting);
}
}
get isLeader() {
return this._program.leader !== undefined;
}
get isManualUnlock() {
return this._program.unlock !== undefined;
}
get isManualLock() {
return this._program.lock !== undefined;
}
// eslint-disable-next-line class-methods-use-this
async _getUsage(path) {
moduleLogger.debug(`calculating disk usage for ${path}`);
if (!fs.existsSync(path)) {
throw Error(`failed to calculate usage for non-existent path ${path}`);
}
return getFolderSize(path);
}
async _expireMetrics(timestamp) {
const resp = await this.withWarp10(async warp10 =>
warp10.exec({
macro: 'utapi/findOldestRecord',
params: {
class: '~.*',
labels: {},
},
}));
if (!resp.result || resp.result.length !== 1) {
moduleLogger.error('failed to fetch oldest record timestamp. expiration failed');
return;
}
const oldestTimestamp = resp.result[0];
if (oldestTimestamp === -1) {
moduleLogger.info('No records found, nothing to delete.');
return;
}
const endTimestamp = timestamp - this._metricRetentionMicroSecs;
if (oldestTimestamp > endTimestamp) {
moduleLogger.info('No records exceed retention period, nothing to delete.');
return;
}
await async.eachSeries(
sliceTimeRange(oldestTimestamp - 1, endTimestamp, expirationChunkDuration),
async ([start, end]) => {
moduleLogger.info('deleting metrics',
{ start, end });
return this.withWarp10(async warp10 =>
warp10.delete({
className: '~.*',
start,
end,
}));
},
);
}
_checkHardLimit(size, nodeId) {
const hardPercentage = parseFloat((size / this._hardLimit).toFixed(2));
const hardLimitHuman = formatDiskSize(this._hardLimit);
const hardLogger = moduleLogger.with({
size,
sizeHuman: formatDiskSize(size),
hardPercentage,
hardLimit: this._hardLimit,
hardLimitHuman,
nodeId,
});
this._pushMetrics({ hardLimitRatio: hardPercentage });
const msg = `Using ${hardPercentage * 100}% of the ${hardLimitHuman} hard limit on ${nodeId}`;
if (hardPercentage < WARN_THRESHOLD) {
hardLogger.debug(msg);
} else if (hardPercentage >= WARN_THRESHOLD && hardPercentage < ACTION_THRESHOLD) {
hardLogger.warn(msg);
} else {
hardLogger.error(msg);
return true;
}
return false;
}
async _disableWarp10Updates() {
return this.withWarp10(async warp10 =>
warp10.exec({
script: `
DROP DROP
'Hard limit has been reached. Further updates have been disabled.'
'scality'
UPDATEOFF`,
params: {},
}));
}
async _enableWarp10Updates() {
return this.withWarp10(async warp10 =>
warp10.exec({
script: "DROP DROP 'scality' UPDATEON",
params: {},
}));
}
async _execute(timestamp) {
if (this.isManualUnlock) {
moduleLogger.info('manually unlocking warp 10', { nodeId: this.nodeId });
await this._enableWarp10Updates();
this._pushMetrics({ isLocked: false });
return;
}
if (this.isManualLock) {
moduleLogger.info('manually locking warp 10', { nodeId: this.nodeId });
await this._disableWarp10Updates();
this._pushMetrics({ isLocked: true });
return;
}
if (this._expirationEnabled && this.isLeader) {
moduleLogger.info(`expiring metrics older than ${config.diskUsage.retentionDays} days`);
await this._expireMetrics(timestamp);
return;
}
if (!this._enabled) {
moduleLogger.debug('disk usage monitoring not enabled, skipping check');
return;
}
let leveldbBytes = null;
let datalogBytes = null;
try {
leveldbBytes = await this._getUsage(Path.join(this._path, 'leveldb'));
datalogBytes = await this._getUsage(Path.join(this._path, 'datalog'));
} catch (error) {
moduleLogger.error(`error calculating disk usage for ${this._path}`, { error });
return;
}
this._pushMetrics({ leveldbBytes, datalogBytes });
const size = leveldbBytes + datalogBytes;
if (this._hardLimit !== null) {
moduleLogger.info(`warp 10 using ${formatDiskSize(size)} of disk space`, { leveldbBytes, datalogBytes });
const shouldLock = this._checkHardLimit(size, this.nodeId);
if (shouldLock) {
moduleLogger.warn('hard limit exceeded, disabling writes to warp 10', { nodeId: this.nodeId });
await this._disableWarp10Updates();
} else {
moduleLogger.info('usage below hard limit, ensuring writes to warp 10 are enabled',
{ nodeId: this.nodeId });
await this._enableWarp10Updates();
}
this._pushMetrics({ isLocked: shouldLock, hardLimitSetting: this._hardLimit });
}
}
}
module.exports = MonitorDiskUsage;

View File

@ -1,41 +0,0 @@
const BaseTask = require('./BaseTask');
const config = require('../config');
const { LoggerContext } = require('../utils');
const { downsampleLagSecs, indexedEventFields } = require('../constants');
const logger = new LoggerContext({
module: 'Repair',
});
class DownsampleTask extends BaseTask {
constructor(options) {
super({
warp10: {
requestTimeout: 30000,
connectTimeout: 30000,
},
...options,
});
this._defaultSchedule = config.downsampleSchedule;
this._defaultLag = downsampleLagSecs;
}
async _execute(timestamp) {
logger.debug('Downsampling records', { timestamp, nodeId: this.nodeId });
const params = {
params: {
nodeId: this.nodeId,
end: timestamp.toString(),
fields: indexedEventFields,
},
macro: 'utapi/repairRecords',
};
const status = await this._warp10.exec(params);
if (status.result[0]) {
logger.info(`created ${status.result[0]} corrections`);
}
}
}
module.exports = DownsampleTask;

View File

@ -1,24 +1,113 @@
const assert = require('assert');
const async = require('async');
const promClient = require('prom-client');
const BaseTask = require('./BaseTask');
const { UtapiMetric } = require('../models');
const config = require('../config');
const {
LoggerContext, shardFromTimestamp, convertTimestamp, InterpolatedClock,
} = require('../utils');
const { checkpointLagSecs } = require('../constants');
const {
LoggerContext, shardFromTimestamp, convertTimestamp, InterpolatedClock, now,
} = require('../utils');
const logger = new LoggerContext({
module: 'IngestShard',
});
const now = () => convertTimestamp(new Date().getTime());
const checkpointLagMicroseconds = convertTimestamp(checkpointLagSecs);
class IngestShardTask extends BaseTask {
constructor(options) {
super({
enableMetrics: config.metrics.enabled,
metricsHost: config.metrics.host,
metricsPort: config.metrics.ingestPort,
...options,
});
this._defaultSchedule = config.ingestionSchedule;
this._defaultLag = config.ingestionLagSeconds;
this._stripEventUUID = options.stripEventUUID !== undefined ? options.stripEventUUID : true;
}
// eslint-disable-next-line class-methods-use-this
_registerMetricHandlers() {
const ingestedTotal = new promClient.Counter({
name: 's3_utapi_ingest_shard_task_ingest_total',
help: 'Total number of metrics ingested',
labelNames: ['origin', 'containerName'],
});
const ingestedSlow = new promClient.Counter({
name: 's3_utapi_ingest_shard_task_slow_total',
help: 'Total number of slow metrics ingested',
labelNames: ['origin', 'containerName'],
});
const ingestedShards = new promClient.Counter({
name: 's3_utapi_ingest_shard_task_shard_ingest_total',
help: 'Total number of metric shards ingested',
labelNames: ['origin', 'containerName'],
});
const shardAgeTotal = new promClient.Counter({
name: 's3_utapi_ingest_shard_task_shard_age_total',
help: 'Total aggregated age of shards',
labelNames: ['origin', 'containerName'],
});
return {
ingestedTotal,
ingestedSlow,
ingestedShards,
shardAgeTotal,
};
}
/**
* Metrics for IngestShardTask
* @typedef {Object} IngestShardMetrics
* @property {number} ingestedTotal - Number of events ingested
* @property {number} ingestedSlow - Number of slow events ingested
* @property {number} ingestedShards - Number of metric shards ingested
* @property {number} shardAgeTotal - Aggregated age of shards
*/
/**
*
* @param {IngestShardMetrics} metrics - Metric values to push
* @returns {undefined}
*/
_pushMetrics(metrics) {
if (!this._enableMetrics) {
return;
}
if (metrics.ingestedTotal !== undefined) {
this._metricsHandlers.ingestedTotal.inc(metrics.ingestedTotal);
}
if (metrics.ingestedSlow !== undefined) {
this._metricsHandlers.ingestedSlow.inc(metrics.ingestedSlow);
}
if (metrics.ingestedShards !== undefined) {
this._metricsHandlers.ingestedShards.inc(metrics.ingestedShards);
}
if (metrics.shardAgeTotal !== undefined) {
this._metricsHandlers.shardAgeTotal.inc(metrics.shardAgeTotal);
}
}
_hydrateEvent(data, stripTimestamp = false) {
const event = JSON.parse(data);
if (this._stripEventUUID) {
delete event.uuid;
}
if (stripTimestamp) {
delete event.timestamp;
}
return new UtapiMetric(event);
}
async _execute(timestamp) {
@ -35,41 +124,61 @@ class IngestShardTask extends BaseTask {
return;
}
let shardAgeTotal = 0;
let ingestedShards = 0;
await async.eachLimit(toIngest, 10,
async shard => {
if (await this._cache.shardExists(shard)) {
const metrics = await this._cache.getMetricsForShard(shard);
if (metrics.length > 0) {
logger.info(`Ingesting ${metrics.length} events from shard`, { shard });
const shardAge = now() - shard;
const areSlowEvents = shardAge >= checkpointLagMicroseconds;
const metricClass = areSlowEvents ? 'utapi.repair.event' : 'utapi.event';
if (areSlowEvents) {
logger.info('Detected slow records, ingesting as repair');
}
const records = metrics.map(m => this._hydrateEvent(m, areSlowEvents));
records.sort((a, b) => a.timestamp - b.timestamp);
const clock = new InterpolatedClock();
records.forEach(r => {
r.timestamp = clock.getTs(r.timestamp);
});
let ingestedIntoNodeId;
const status = await this.withWarp10(async warp10 => {
// eslint-disable-next-line prefer-destructuring
ingestedIntoNodeId = warp10.nodeId;
return warp10.ingest(
{
className: metricClass,
labels: { origin: config.nodeId },
}, records,
);
});
assert.strictEqual(status, records.length);
await this._cache.deleteShard(shard);
logger.info(`ingested ${status} records from ${config.nodeId} into ${ingestedIntoNodeId}`);
shardAgeTotal += shardAge;
ingestedShards += 1;
this._pushMetrics({ ingestedTotal: records.length });
if (areSlowEvents) {
this._pushMetrics({ ingestedSlow: records.length });
}
} else {
logger.debug('No events found in shard, cleaning up');
}
} else {
logger.warn('shard does not exist', { shard });
}
});
const shardAgeTotalSecs = shardAgeTotal / 1000000;
this._pushMetrics({ shardAgeTotal: shardAgeTotalSecs, ingestedShards });
}
}

View File

@ -0,0 +1,85 @@
const async = require('async');
const BaseTask = require('./BaseTask');
const UtapiClient = require('../client');
const { LoggerContext } = require('../utils');
const logger = new LoggerContext({
module: 'ManualAdjust',
});
function collectArgs(arg, prev) {
return prev.concat([arg]);
}
class ManualAdjust extends BaseTask {
async _setup() {
// Don't include default flags
await super._setup(false);
this._program
.option('-h, --host <host>', 'Utapi server host', 'localhost')
.option('-p, --port <port>', 'Utapi server port', '8100', parseInt)
.option('-b, --bucket <buckets...>', 'target these buckets', collectArgs, [])
.option('-a, --account <accounts...>', 'target these accounts', collectArgs, [])
.option('-u, --user <users...>', 'target these users', collectArgs, [])
.requiredOption('-o, --objects <adjustment>', 'adjust numberOfObjects by this amount', parseInt)
.requiredOption('-s, --storage <adjustment>', 'adjust storageUtilized by this amount', parseInt);
}
async _start() {
this._utapiClient = new UtapiClient({
host: this._program.host,
port: this._program.port,
disableRetryCache: true,
});
await super._start();
}
async _pushAdjustmentMetric(metric) {
logger.info('pushing adjustment metric', { metric });
await this._utapiClient.pushMetric(metric);
}
async _execute() {
const timestamp = Date.now();
const objectDelta = this._program.objects;
const sizeDelta = this._program.storage;
if (!this._program.bucket.length && !this._program.account.length && !this._program.user.length) {
throw Error('You must provide at least one of --bucket, --account or --user');
}
logger.info('writing adjustments');
if (this._program.bucket.length) {
logger.info('adjusting buckets');
await async.eachSeries(
this._program.bucket,
async bucket => this._pushAdjustmentMetric({
bucket, objectDelta, sizeDelta, timestamp,
}),
);
}
if (this._program.account.length) {
logger.info('adjusting accounts');
await async.eachSeries(
this._program.account,
async account => this._pushAdjustmentMetric({
account, objectDelta, sizeDelta, timestamp,
}),
);
}
if (this._program.user.length) {
logger.info('adjusting users');
await async.eachSeries(
this._program.user,
async user => this._pushAdjustmentMetric({
user, objectDelta, sizeDelta, timestamp,
}),
);
}
}
}
module.exports = ManualAdjust;
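For illustration, a hypothetical invocation (entry-point path assumed; the = form keeps the negative numbers from being parsed as flags) that removes five objects and one mebibyte from a single bucket:

    node bin/manualAdjust.js --bucket my-bucket --objects=-5 --storage=-1048576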

View File

@ -6,7 +6,12 @@ const { UtapiRecord } = require('../models');
const config = require('../config');
const errors = require('../errors');
const RedisClient = require('../redis');
const { warp10RecordType, operations: operationIds, serviceToWarp10Label } = require('../constants');
const {
warp10RecordType,
operations: operationIds,
serviceToWarp10Label,
migrationOpTranslationMap,
} = require('../constants');
const {
LoggerContext,
now,
@ -37,14 +42,7 @@ const LEVELS_TO_MIGRATE = [
class MigrateTask extends BaseTask {
constructor(options) {
super(options);
this._failedCorrections = [];
this._redis = new RedisClient(config.redis);
}
@ -141,7 +139,20 @@ class MigrateTask extends BaseTask {
timestamp,
timestamp,
));
const numberOfObjects = MigrateTask._parseMetricValue(numberOfObjectsResp[0]);
let numberOfObjects;
if (numberOfObjectsResp.length === 1) {
numberOfObjects = MigrateTask._parseMetricValue(numberOfObjectsResp[0]);
} else {
numberOfObjects = numberOfObjectsOffset;
logger.warn('Could not retrieve value for numberOfObjects, falling back to last seen value',
{
metricLevel: level,
resource,
metricTimestamp: timestamp,
lastSeen: numberOfObjectsOffset,
});
}
let incomingBytes = 0;
let outgoingBytes = 0;
@ -154,6 +165,8 @@ class MigrateTask extends BaseTask {
outgoingBytes = apiOp.count;
} else if (operationIds.includes(apiOp.op)) {
operations[apiOp.op] = apiOp.count;
} else if (migrationOpTranslationMap[apiOp.op] !== undefined) {
operations[migrationOpTranslationMap[apiOp.op]] = apiOp.count;
} else {
logger.warn('dropping unknown operation', { apiOp });
}
@ -174,16 +187,16 @@ class MigrateTask extends BaseTask {
}
async _findLatestSnapshot(level, resource) {
const resp = await this.withWarp10(async warp10 => warp10.fetch({
className: 'utapi.snapshot',
labels: {
[serviceToWarp10Label[level]]: resource,
},
start: 'now',
stop: -1,
}));
if (resp.result && (resp.result.length === 0 || resp.result[0] === '')) {
if (resp.result && (resp.result.length === 0 || resp.result[0] === '' || resp.result[0] === '[]')) {
return null;
}
@ -195,14 +208,14 @@ class MigrateTask extends BaseTask {
let pos = beginTimestamp;
// eslint-disable-next-line no-constant-condition
while (true) {
const resp = await this.withWarp10(async warp10 => warp10.fetch({
className: 'utapi.snapshot',
labels: {
[serviceToWarp10Label[level]]: resource,
},
start: pos - 1,
stop: -WARP10_SCAN_SIZE,
}));
if (resp.result && resp.result.length === 0) {
return pos;
}
@ -218,7 +231,7 @@ class MigrateTask extends BaseTask {
}
async _migrateMetric(className, level, resource, metric) {
return this.withWarp10(async warp10 => warp10.ingest(
{
className,
labels: {
@ -227,7 +240,7 @@ class MigrateTask extends BaseTask {
valueType: warp10RecordType,
},
[metric],
));
}
static _sumRecord(a, b) {

View File

@ -7,7 +7,12 @@ const config = require('../config');
const metadata = require('../metadata');
const { serviceToWarp10Label, warp10RecordType } = require('../constants');
const { LoggerContext, convertTimestamp } = require('../utils');
const {
LoggerContext,
logEventFilter,
convertTimestamp,
buildFilterChain,
} = require('../utils');
const logger = new LoggerContext({
module: 'ReindexTask',
@ -16,14 +21,35 @@ const logger = new LoggerContext({
class ReindexTask extends BaseTask {
constructor(options) {
super({
warp10: {
requestTimeout: 30000,
connectTimeout: 30000,
},
enableMetrics: config.metrics.enabled,
metricsHost: config.metrics.host,
metricsPort: config.metrics.reindexPort,
...options,
});
this._defaultSchedule = config.reindexSchedule;
this._defaultLag = 0;
const eventFilters = (config && config.filter) || {};
this._shouldReindex = buildFilterChain((config && config.filter) || {});
// exponential backoff: max wait = 50 * 2 ^ 10 milliseconds ~= 51 seconds
this.ebConfig = {
times: 10,
interval: retryCount => 50 * (2 ** retryCount),
};
if (Object.keys(eventFilters).length !== 0) {
logEventFilter((...args) => logger.info(...args), 'reindex resource filtering enabled', eventFilters);
}
}
async _setup(includeDefaultOpts = true) {
await super._setup(includeDefaultOpts);
this._program.option(
'--bucket <bucket>',
'Manually specify a bucket to reindex. Can be used multiple times.',
(bucket, previous) => previous.concat([bucket]),
[],
);
}
static async _indexBucket(bucket) {
@ -33,16 +59,23 @@ class ReindexTask extends BaseTask {
let lastMasterSize = null;
for await (const obj of metadata.listObjects(bucket)) {
if (obj.value.isDeleteMarker) {
if (obj.value.isDeleteMarker || obj.value.isPHD) {
// eslint-disable-next-line no-continue
continue;
}
if (!Number.isInteger(obj.value['content-length'])) {
logger.debug('object missing content-length, not including in count');
// eslint-disable-next-line no-continue
continue;
}
count += 1;
size += obj.value['content-length'];
// If versioned, subtract the size of the master to avoid double counting
if (lastMaster && obj.name === lastMaster) {
logger.debug('Detected versioned key, subtracting master size', { lastMasterSize, key: obj.name });
logger.debug('Detected versioned key. subtracting master size', { lastMasterSize, key: obj.name });
size -= lastMasterSize;
count -= 1;
lastMaster = null;
@ -67,29 +100,46 @@ class ReindexTask extends BaseTask {
async _fetchCurrentMetrics(level, resource) {
const timestamp = convertTimestamp(new Date().getTime());
const res = await this.withWarp10(warp10 => {
const options = {
params: {
end: timestamp,
node: warp10.nodeId,
labels: {
[level]: resource,
},
// eslint-disable-next-line camelcase
no_reindex: true,
},
macro: 'utapi/getMetricsAt',
};
return warp10.exec(options);
});
const [value] = res.result || [];
if (!value) {
throw new Error('unable to fetch current metrics from warp10');
}
if (!Number.isInteger(value.objD) || !Number.isInteger(value.sizeD)) {
logger.error('invalid values returned from warp 10', { response: res });
throw new Error('invalid values returned from warp 10');
}
return {
timestamp,
value,
};
}
async _updateMetric(level, resource, total) {
const { timestamp, value } = await this._fetchCurrentMetrics(level, resource);
const objectDelta = total.count - value.objD;
const sizeDelta = total.size - value.sizeD;
if (objectDelta !== 0 || sizeDelta !== 0) {
logger.info('discrepancy detected in metrics. writing corrective record',
{ [level]: resource, objectDelta, sizeDelta });
const record = new UtapiRecord({
@ -97,8 +147,7 @@ class ReindexTask extends BaseTask {
sizeDelta,
timestamp,
});
await this.withWarp10(warp10 => warp10.ingest(
{
className: 'utapi.repair.reindex',
labels: {
@ -107,29 +156,46 @@ class ReindexTask extends BaseTask {
valueType: warp10RecordType,
},
[record],
));
}
}
get targetBuckets() {
if (this._program.bucket.length) {
return this._program.bucket.map(name => ({ name }));
}
return metadata.listBuckets();
}
async _execute() {
logger.debug('reindexing objects');
logger.info('started reindex task');
const accountTotals = {};
const ignoredAccounts = new Set();
await async.eachLimit(this.targetBuckets, 5, async bucket => {
if (!this._shouldReindex({ bucket: bucket.name, account: bucket.account })) {
logger.debug('skipping excluded bucket', { bucket: bucket.name, account: bucket.account });
return;
}
logger.trace('starting reindex of bucket', { bucket: bucket.name });
logger.info('started bucket reindex', { bucket: bucket.name });
const mpuBucket = `${mpuBucketPrefix}${bucket.name}`;
let bktTotal;
let mpuTotal;
try {
bktTotal = await async.retryable(ReindexTask._indexBucket)(bucket.name);
mpuTotal = await async.retryable(ReindexTask._indexMpuBucket)(mpuBucket);
bktTotal = await async.retryable(this.ebConfig, ReindexTask._indexBucket)(bucket.name);
mpuTotal = await async.retryable(this.ebConfig, ReindexTask._indexMpuBucket)(mpuBucket);
} catch (error) {
logger.error('failed to reindex bucket, ignoring associated account', { error, bucket: bucket.name });
ignoredAccounts.add(bucket.account);
logger.error(
'failed bucket reindex. any associated account will be skipped',
{ error, bucket: bucket.name },
);
// buckets passed with `--bucket` won't have an account property
if (bucket.account) {
ignoredAccounts.add(bucket.account);
}
return;
}
@ -138,33 +204,45 @@ class ReindexTask extends BaseTask {
count: bktTotal.count,
};
// buckets passed with `--bucket` won't have an account property
if (bucket.account) {
if (accountTotals[bucket.account]) {
accountTotals[bucket.account].size += total.size;
accountTotals[bucket.account].count += total.count;
} else {
accountTotals[bucket.account] = { ...total };
}
}
logger.trace('finished indexing bucket', { bucket: bucket.name });
logger.info('finished bucket reindex', { bucket: bucket.name });
await this._updateMetric(
serviceToWarp10Label.buckets,
bucket.name,
total,
);
try {
await this._updateMetric(
serviceToWarp10Label.buckets,
bucket.name,
total,
);
} catch (error) {
logger.error('error updating metrics for bucket', { error, bucket: bucket.name });
}
});
const toUpdate = Object.entries(accountTotals)
.filter(([account]) => !ignoredAccounts.has(account));
await async.eachLimit(toUpdate, 5, async ([account, total]) =>
this._updateMetric(
serviceToWarp10Label.accounts,
account,
total,
));
await async.eachLimit(toUpdate, 5, async ([account, total]) => {
try {
await this._updateMetric(
serviceToWarp10Label.accounts,
account,
total,
);
} catch (error) {
logger.error('error updating metrics for account', { error, account });
}
});
logger.debug('finished reindexing');
logger.info('finished reindex task');
}
}

View File

@ -1,3 +1,4 @@
const promClient = require('prom-client');
const BaseTask = require('./BaseTask');
const config = require('../config');
const { LoggerContext } = require('../utils');
@ -10,30 +11,67 @@ const logger = new LoggerContext({
class RepairTask extends BaseTask {
constructor(options) {
super({
warp10: {
requestTimeout: 30000,
connectTimeout: 30000,
},
enableMetrics: config.metrics.enabled,
metricsHost: config.metrics.host,
metricsPort: config.metrics.repairPort,
...options,
});
this._defaultSchedule = config.repairSchedule;
this._defaultLag = repairLagSecs;
}
// eslint-disable-next-line class-methods-use-this
_registerMetricHandlers() {
const created = new promClient.Counter({
name: 's3_utapi_repair_task_created_total',
help: 'Total number of repair records created',
labelNames: ['origin', 'containerName'],
});
return {
created,
};
}
/**
* Metrics for RepairTask
* @typedef {Object} RepairMetrics
* @property {number} created - Number of repair records created
*/
/**
*
* @param {RepairMetrics} metrics - Metric values to push
* @returns {undefined}
*/
_pushMetrics(metrics) {
if (!this._enableMetrics) {
return;
}
if (metrics.created !== undefined) {
this._metricsHandlers.created.inc(metrics.created);
}
}
async _execute(timestamp) {
logger.debug('Checking for repairs', { timestamp, nodeId: this.nodeId });
const status = await this.withWarp10(warp10 => {
const params = {
params: {
nodeId: warp10.nodeId,
end: timestamp.toString(),
fields: indexedEventFields,
},
macro: 'utapi/repairRecords',
};
return warp10.exec(params);
});
if (status.result[0]) {
logger.info(`created ${status.result[0]} corrections`);
this._pushMetrics({ created: status.result[0] });
}
}
}

View File

@ -5,6 +5,8 @@ const CreateSnapshot = require('./CreateSnapshot');
const RepairTask = require('./Repair');
const ReindexTask = require('./Reindex');
const MigrateTask = require('./Migrate');
const MonitorDiskUsage = require('./DiskUsage');
const ManualAdjust = require('./ManualAdjust');
module.exports = {
IngestShard,
@ -14,4 +16,6 @@ module.exports = {
RepairTask,
ReindexTask,
MigrateTask,
MonitorDiskUsage,
ManualAdjust,
};

libV2/utils/disk.js Normal file
View File

@ -0,0 +1,53 @@
const { promisify } = require('util');
const getFolderSize = require('get-folder-size');
const byteSize = require('byte-size');
const diskSpecRegex = /(\d+)([bkmgtpxz])(i?b)?/;
const suffixToExp = {
b: 0,
k: 1,
m: 2,
g: 3,
t: 4,
p: 5,
x: 6,
z: 7,
};
/**
* Converts a string specifying disk size into its value in bytes
* Supported formats:
* 1b/1B - Directly specify a byte size
* 1K/1MB/1GiB - Specify a number of bytes using IEC or common suffixes
*
* Suffixes are case insensitive.
* All suffixes are considered IEC standard with 1 kibibyte being 2^10 bytes.
*
* @param {String} spec - string for conversion
* @returns {Integer} - disk size in bytes
*/
function parseDiskSizeSpec(spec) {
const normalized = spec.toLowerCase();
if (!diskSpecRegex.test(normalized)) {
throw Error('Format does not match a known suffix');
}
const match = diskSpecRegex.exec(normalized);
const size = parseInt(match[1], 10);
const exponent = suffixToExp[match[2]];
return size * (1024 ** exponent);
}
function _formatFunc() {
return `${this.value}${this.unit}`;
}
function formatDiskSize(value) {
return byteSize(value, { units: 'iec', toStringFn: _formatFunc }).toString();
}
module.exports = {
parseDiskSizeSpec,
getFolderSize: promisify(getFolderSize),
formatDiskSize,
};
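A few illustrative conversions (require path assumed). As the comment above notes, common suffixes are still treated as IEC, so 1MB and 1MiB both mean 2^20 bytes:

    const { parseDiskSizeSpec, formatDiskSize } = require('./libV2/utils/disk'); // path assumed

    parseDiskSizeSpec('1b');    // 1
    parseDiskSizeSpec('1MB');   // 1048576 (treated as 1 MiB)
    parseDiskSizeSpec('1GiB');  // 1073741824
    formatDiskSize(1073741824); // '1GiB'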

libV2/utils/filter.js Normal file
View File

@ -0,0 +1,47 @@
const assert = require('assert');
/**
 * filterObject
 *
 * Tests an object's key against an allow or deny set.
 * Returns a boolean, with false meaning the object should be excluded,
 * allowing the function to be passed directly to Array.filter etc.
 * Objects that do not define the inspected key always pass.
 *
 * @param {Object} obj - Object to inspect
 * @param {string} key - Object key to inspect
 * @param {Object} filter
 * @param {Set} [filter.allow] - Set containing values to include
 * @param {Set} [filter.deny] - Set containing values to exclude
 * @returns {bool}
 */
function filterObject(obj, key, { allow, deny }) {
if (allow && deny) {
throw new Error('You can not define both an allow and a deny list.');
}
if (!allow && !deny) {
throw new Error('You must define either an allow or a deny list.');
}
if (allow) {
assert(allow instanceof Set);
return obj[key] === undefined || allow.has(obj[key]);
}
assert(deny instanceof Set);
return obj[key] === undefined || !deny.has(obj[key]);
}
/**
* buildFilterChain
*
* Constructs a function from a map of key names and allow/deny filters.
 * The returned function returns a boolean, with false meaning the object was
 * rejected by at least one of the per-key filters, allowing the function to be
 * passed directly to Array.filter etc.
 *
 * @param {Object<string, Object<string, Set>>} filters
* @returns {function(Object): bool}
*/
function buildFilterChain(filters) {
return obj => Object.entries(filters).every(([key, filter]) => filterObject(obj, key, filter));
}
module.exports = { filterObject, buildFilterChain };
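For illustration (resource names invented), a chain that only allows two buckets while leaving every other key unfiltered; this is the shape ReindexTask feeds in from config.filter:

    const { buildFilterChain } = require('./libV2/utils/filter'); // path assumed

    const shouldReindex = buildFilterChain({
        bucket: { allow: new Set(['bucket-a', 'bucket-b']) },
    });

    shouldReindex({ bucket: 'bucket-a', account: 'acct-1' }); // true
    shouldReindex({ bucket: 'bucket-c', account: 'acct-1' }); // false
    shouldReindex({ account: 'acct-1' });                     // true: objects missing the key pass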

View File

@ -30,7 +30,37 @@ function comprehend(data, func) {
}, {});
}
/**
* Calls func with items in sequence, advancing if an error is thrown.
* The result from the first successful call is returned.
*
 * onError, if passed, is called on every error thrown by func.
 *
 * @param {Array} items - items to iterate
 * @param {AsyncFunction} func - function to apply to each item
 * @param {Function} [onError] - optional function called with each error thrown
 * @returns {*} - the result of the first successful call
*/
async function iterIfError(items, func, onError) {
let error;
// eslint-disable-next-line no-restricted-syntax
for (const item of items) {
try {
// eslint-disable-next-line no-await-in-loop
const resp = await func(item);
return resp;
} catch (_error) {
if (onError) {
onError(_error);
}
error = _error;
}
}
throw error || new Error('unable to complete request');
}
module.exports = {
asyncOrCallback,
comprehend,
iterIfError,
};
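A hedged usage sketch (function name and handler invented): BaseTask.withWarp10() above is essentially this call with _warp10Clients as items, which is what gives the tasks their node-failover behaviour.

    const { iterIfError } = require('./libV2/utils/func'); // path assumed

    async function execOnAnyNode(warp10Clients, script) {
        return iterIfError(
            warp10Clients,
            async warp10 => warp10.exec({ script, params: {} }),
            error => console.warn('warp 10 node failed, trying next', error.message),
        );
    }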

View File

@ -2,10 +2,16 @@ const log = require('./log');
const shard = require('./shard');
const timestamp = require('./timestamp');
const func = require('./func');
const disk = require('./disk');
const filter = require('./filter');
const probe = require('./probe');
module.exports = {
...log,
...shard,
...timestamp,
...func,
...disk,
...filter,
...probe,
};

View File

@ -1,12 +1,5 @@
const werelogs = require('werelogs');
const config = require('../config');
const loggerConfig = {
level: config.logging.level,
dump: config.logging.dumpLevel,
};
werelogs.configure(loggerConfig);
const { comprehend } = require('./func');
const rootLogger = new werelogs.Logger('Utapi');
@ -77,8 +70,6 @@ class LoggerContext {
}
}
rootLogger.debug('logger initialized', { loggerConfig });
function buildRequestLogger(req) {
let reqUids = [];
if (req.headers['x-scal-request-uids'] !== undefined) {
@ -102,8 +93,26 @@ function buildRequestLogger(req) {
return new LoggerContext({}, reqLogger);
}
function logEventFilter(logger, msg, eventFilters) {
const filterLog = comprehend(
eventFilters,
(level, rules) => ({
key: level,
value: comprehend(
rules,
(rule, values) => ({
key: rule,
value: Array.from(values),
}),
),
}),
);
logger(msg, { filters: filterLog });
}
module.exports = {
logger: rootLogger,
buildRequestLogger,
LoggerContext,
logEventFilter,
};

libV2/utils/probe.js Normal file
View File

@ -0,0 +1,32 @@
const { ProbeServer } = require('arsenal').network.probe.ProbeServer;
/**
* Configure probe servers
* @typedef {Object} ProbeServerConfig
* @property {string} bindAddress - Address to bind probe server to
* @property {number} port - Port to bind probe server to
*/
/**
* Start an empty probe server
* @async
* @param {ProbeServerConfig} config - Configuration for probe server
* @returns {Promise<ProbeServer>} - Instance of ProbeServer
*/
async function startProbeServer(config) {
if (!config) {
throw new Error('configuration for probe server is missing');
}
return new Promise((resolve, reject) => {
const probeServer = new ProbeServer(config);
probeServer.onListening(() => resolve(probeServer));
probeServer.onError(err => reject(err));
probeServer.start();
});
}
module.exports = {
startProbeServer,
};
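A minimal sketch (port number illustrative) pairing startProbeServer with a metrics route, mirroring BaseTask._createProbeServer above:

    const promClient = require('prom-client');
    const { DEFAULT_METRICS_ROUTE } = require('arsenal').network.probe.ProbeServer;
    const { startProbeServer } = require('./libV2/utils/probe'); // path assumed

    async function exposeMetrics() {
        const server = await startProbeServer({ bindAddress: 'localhost', port: 9001 });
        server.addHandler(DEFAULT_METRICS_ROUTE, (res, log) => {
            log.debug('metrics requested');
            res.writeHead(200, { 'Content-Type': promClient.register.contentType });
            promClient.register.metrics().then(metrics => res.end(metrics));
        });
        return server; // call server.stop() on shutdown, as BaseTask._join() does
    }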

View File

@ -19,16 +19,16 @@ class InterpolatedClock {
this._step = 1;
}
getTs(timestamp) {
const ts = timestamp !== undefined ? timestamp : Date.now();
if (ts === this._now) {
// If this is the same millisecond as the last call
this._step += 1;
return convertTimestamp(ts) + (this._step - 1);
}
this._now = ts;
this._step = 1;
return convertTimestamp(ts);
}
}
@ -42,8 +42,37 @@ function now() {
return Date.now() * 1000;
}
/**
* Slice the time range represented by the passed timestamps
* into slices of at most `step` duration.
*
* Both `start` and `end` are included in the returned slices.
* Slice timestamps are inclusive and non overlapping.
*
* For example sliceTimeRange(0, 5, 2) will yield
* [0, 1]
* [2, 3]
* [4, 5]
*
* @param {Number} start
* @param {Number} end
* @param {Number} step
*/
function* sliceTimeRange(start, end, step) {
let spos = start;
let epos = start + step - 1;
while (epos < end) {
yield [spos, epos];
spos += step;
epos += step;
}
yield [spos, end];
}
module.exports = {
convertTimestamp,
InterpolatedClock,
now,
sliceTimeRange,
};
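Reading the generator back as the docstring describes (step value illustrative); MonitorDiskUsage._expireMetrics uses exactly this to delete old records in bounded chunks:

    const { sliceTimeRange } = require('./libV2/utils/timestamp'); // path assumed

    // logs [0, 1], [2, 3], [4, 5]
    for (const [start, end] of sliceTimeRange(0, 5, 2)) {
        console.log(start, end);
    }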

View File

@ -1,151 +0,0 @@
const assert = require('assert');
const { auth, policies } = require('arsenal');
const vaultclient = require('vaultclient');
const config = require('./config');
/**
* @class Vault
* Creates a vault instance for authentication and authorization
*/
class Vault {
constructor(options) {
const { host, port } = options.vaultd;
if (options.https) {
const { key, cert, ca } = options.https;
this._client = new vaultclient.Client(host, port, true, key, cert,
ca);
} else {
this._client = new vaultclient.Client(host, port);
}
}
/** authenticateV4Request
*
* @param {object} params - the authentication parameters as returned by
* auth.extractParams
* @param {number} params.version - shall equal 4
* @param {string} params.data.accessKey - the user's accessKey
* @param {string} params.data.signatureFromRequest - the signature read from
* the request
* @param {string} params.data.region - the AWS region
* @param {string} params.data.stringToSign - the stringToSign
* @param {string} params.data.scopeDate - the timespan to allow the request
* @param {string} params.data.authType - the type of authentication
* (query or header)
* @param {string} params.data.signatureVersion - the version of the
* signature (AWS or AWS4)
* @param {number} params.data.signatureAge - the age of the signature in ms
* @param {string} params.data.log - the logger object
* @param {RequestContext []} requestContexts - an array of
* RequestContext instances which contain information
* for policy authorization check
* @param {function} callback - cb(err)
* @return {undefined}
*/
authenticateV4Request(params, requestContexts, callback) {
const {
accessKey, signatureFromRequest, region, scopeDate,
stringToSign,
} = params.data;
const { log } = params;
log.debug('authenticating V4 request');
const serializedRCs = requestContexts.map(rc => rc.serialize());
this._client.verifySignatureV4(
stringToSign, signatureFromRequest,
accessKey, region, scopeDate,
{ reqUid: log.getSerializedUids(), requestContext: serializedRCs },
(err, authInfo) => {
if (err) {
log.trace('error from vault', { error: err });
return callback(err);
}
return callback(null,
authInfo.message.body.authorizationResults);
},
);
}
/**
* Returns canonical Ids for a given list of account Ids
* @param {string[]} accountIds - list of account ids
* @param {object} log - Werelogs request logger
* @return {Promise} -
*/
getCanonicalIds(accountIds, log) {
log.debug('retrieving canonical ids for account ids', {
method: 'Vault.getCanonicalIds',
});
return new Promise((resolve, reject) =>
this._client.getCanonicalIdsByAccountIds(accountIds,
{ reqUid: log.getSerializedUids(), logger: log }, (err, res) => {
if (err) {
reject(err);
return;
}
resolve(res);
}));
}
}
const vault = new Vault(config);
auth.setHandler(vault);
async function authenticateRequest(request, action, level, resources) {
const policyContext = new policies.RequestContext(
request.headers,
request.query,
level,
resources,
request.ip,
request.ctx.encrypted,
action,
'utapi',
);
return new Promise((resolve, reject) => {
auth.server.doAuth(request, request.logger.logger, (err, res) => {
if (err && (err.InvalidAccessKeyId || err.AccessDenied)) {
resolve([false]);
return;
}
if (err) {
reject(err);
return;
}
// Will only have res if request is from a user rather than an account
if (res) {
try {
const authorizedResources = (res || [])
.reduce((authed, result) => {
if (result.isAllowed) {
// result.arn should be of format:
// arn:scality:utapi:::resourcetype/resource
assert(typeof result.arn === 'string');
assert(result.arn.indexOf('/') > -1);
const resource = result.arn.split('/')[1];
authed.push(resource);
request.logger.trace('access granted for resource', { resource });
}
return authed;
}, []);
resolve([
authorizedResources.length !== 0,
authorizedResources,
]);
} catch (err) {
reject(err);
}
} else {
request.logger.trace('granted access to all resources');
resolve([true]);
}
}, 's3', [policyContext]);
});
}
module.exports = {
authenticateRequest,
Vault,
vault,
};

135
libV2/vault/client.js Normal file
View File

@@ -0,0 +1,135 @@
const assert = require('assert');
const { auth, policies } = require('arsenal');
const config = require('../config');
const errors = require('../errors');
/**
* @class VaultWrapper
* Creates a vault instance for authentication and authorization
*/
class VaultWrapper extends auth.Vault {
static create(config) {
if (config.vaultd.host) {
return new VaultWrapper(config);
}
return null;
}
constructor(options) {
let client;
const { host, port } = options.vaultd;
const vaultclient = require('vaultclient');
if (options.tls) {
const { key, cert, ca } = options.tls;
client = new vaultclient.Client(host, port, true, key, cert,
ca);
} else {
client = new vaultclient.Client(host, port);
}
super(client, 'vault');
}
/**
* Returns canonical Ids for a given list of account Ids
* @param {string[]} accountIds - list of account ids
* @param {object} log - Werelogs request logger
* @return {Promise} -
*/
getCanonicalIds(accountIds, log) {
log.debug('retrieving canonical ids for account ids', {
method: 'Vault.getCanonicalIds',
accountIds,
});
return new Promise((resolve, reject) =>
this.client.getCanonicalIdsByAccountIds(accountIds,
{ reqUid: log.getSerializedUids(), logger: log }, (err, res) => {
if (err) {
reject(err);
return;
}
if (!res.message || !res.message.body) {
reject(errors.InternalError);
return;
}
resolve(res.message.body.map(acc => ({
resource: acc.accountId,
id: acc.canonicalId,
})));
}));
}
// eslint-disable-next-line class-methods-use-this
authenticateRequest(request, action, level, resources) {
const policyContext = new policies.RequestContext(
request.headers,
request.query,
level,
resources,
request.ip,
request.ctx.encrypted,
action,
'utapi',
);
return new Promise((resolve, reject) => {
auth.server.doAuth(
request,
request.logger.logger,
(err, authInfo, authRes) => {
if (err && err.is && (err.is.InvalidAccessKeyId || err.is.AccessDenied)) {
resolve({ authed: false });
return;
}
if (err) {
reject(err);
return;
}
// Only IAM users will return authorizedResources
let authorizedResources = resources;
if (authRes) {
authorizedResources = authRes
.filter(resource => resource.isAllowed)
.map(resource => {
// resource.arn should be of format:
// arn:scality:utapi:::resourcetype/resource
assert(typeof resource.arn === 'string');
assert(resource.arn.indexOf('/') > -1);
return resource.arn.split('/')[1];
});
}
resolve({ authed: true, authInfo, authorizedResources });
}, 's3', [policyContext],
);
});
}
getUsersById(userIds, log) {
log.debug('retrieving user arns for user ids', {
method: 'Vault.getUsersById',
userIds,
});
return new Promise((resolve, reject) =>
this.client.getUsersById(userIds,
{ reqUid: log.getSerializedUids(), logger: log }, (err, res) => {
if (err) {
reject(err);
return;
}
if (!res.message || !res.message.body) {
reject(errors.InternalError);
return;
}
resolve(res.message.body);
}));
}
}
const vault = VaultWrapper.create(config);
auth.setHandler(vault);
module.exports = {
VaultWrapper,
vault,
};

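Because VaultWrapper.create() returns null when config.vaultd.host is unset, the exported vault instance is now optional and callers must guard. A hedged sketch (the error handling is illustrative):

const { vault } = require('./libV2/vault/client');

async function lookupCanonicalIds(accountIds, log) {
    if (!vault) {
        throw new Error('vault client is not configured'); // placeholder error
    }
    return vault.getCanonicalIds(accountIds, log);
}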
177
libV2/vault/index.js Normal file
View File

@@ -0,0 +1,177 @@
const { vault } = require('./client');
const metadata = require('../metadata');
const errors = require('../errors');
const config = require('../config');
async function authorizeAccountAccessKey(authInfo, level, resources, log) {
let authed = false;
let authedRes = [];
log.trace('Authorizing account', { resources });
switch (level) {
// Account keys can only query their own account's metrics
// So we can short-circuit the auth to ->
// Did they request their account? Then authorize ONLY their account
case 'accounts':
authed = resources.some(r => r === authInfo.getShortid());
authedRes = authed ? [{ resource: authInfo.getShortid(), id: authInfo.getCanonicalID() }] : [];
break;
// Account keys are allowed access to any of their child users metrics
case 'users': {
let users;
try {
users = await vault.getUsersById(resources, log.logger);
} catch (error) {
log.error('failed to fetch user', { error });
throw errors.AccessDenied;
}
authedRes = users
.filter(user => user.parentId === authInfo.getShortid())
.map(user => ({ resource: user.id, id: user.id }));
authed = authedRes.length !== 0;
break;
}
// Accounts are only allowed access if they are the owner of the bucket
case 'buckets': {
const buckets = await Promise.all(
resources.map(async bucket => {
try {
const bucketMD = await metadata.getBucket(bucket);
return bucketMD;
} catch (error) {
log.error('failed to fetch metadata for bucket', { error, bucket });
throw errors.AccessDenied;
}
}),
);
authedRes = buckets
.filter(bucket => bucket.getOwner() === authInfo.getCanonicalID())
.map(bucket => ({ resource: bucket.getName(), id: bucket.getName() }));
authed = authedRes.length !== 0;
break;
}
// Accounts cannot access service resources
case 'services':
break;
default:
log.error('Unknown metric level', { level });
throw new Error(`Unknown metric level ${level}`);
}
return [authed, authedRes];
}
async function authorizeUserAccessKey(authInfo, level, resources, log) {
let authed = false;
let authedRes = [];
log.trace('Authorizing IAM user', { resources });
// If no resources were authorized by Vault then no further checking is required
if (resources.length === 0) {
return [false, []];
}
// Get the parent account id from the user's arn
const parentAccountId = authInfo.getArn().split(':')[4];
// All users require an attached policy to query metrics
// Additional filtering is performed here to limit access to the user's account
switch (level) {
// User keys can only query their own account's metrics
// So we can short-circuit the auth to ->
// Did they request their account? Then authorize ONLY their account
case 'accounts': {
authed = resources.some(r => r === parentAccountId);
authedRes = authed ? [{ resource: parentAccountId, id: authInfo.getCanonicalID() }] : [];
break;
}
// Users can query other users' metrics if they are under the same account
case 'users': {
let users;
try {
users = await vault.getUsersById(resources, log.logger);
} catch (error) {
log.error('failed to fetch user', { error });
throw errors.AccessDenied;
}
authedRes = users
.filter(user => user.parentId === parentAccountId)
.map(user => ({ resource: user.id, id: user.id }));
authed = authedRes.length !== 0;
break;
}
// Users can query bucket metrics if the buckets are owned by the same account
case 'buckets': {
let buckets;
try {
buckets = await Promise.all(
resources.map(bucket => metadata.getBucket(bucket)),
);
} catch (error) {
log.error('failed to fetch metadata for bucket', { error });
throw error;
}
authedRes = buckets
.filter(bucket => bucket.getOwner() === authInfo.getCanonicalID())
.map(bucket => ({ resource: bucket.getName(), id: bucket.getName() }));
authed = authedRes.length !== 0;
break;
}
case 'services':
break;
default:
log.error('Unknown metric level', { level });
throw new Error(`Unknown metric level ${level}`);
}
return [authed, authedRes];
}
async function authorizeServiceUser(authInfo, level, resources, log) {
log.trace('Authorizing service user', { resources, arn: authInfo.getArn() });
// The service user is allowed access to any resource so no checking is done
if (level === 'accounts') {
const canonicalIds = await vault.getCanonicalIds(resources, log.logger);
return [canonicalIds.length !== 0, canonicalIds];
}
return [resources.length !== 0, resources.map(resource => ({ resource, id: resource }))];
}
async function translateAndAuthorize(request, action, level, resources) {
const {
authed,
authInfo,
authorizedResources,
} = await vault.authenticateRequest(request, action, level, resources);
if (!authed) {
return [false, []];
}
if (config.serviceUser.enabled && authInfo.getArn() === config.serviceUser.arn) {
return authorizeServiceUser(authInfo, level, authorizedResources, request.logger);
}
if (authInfo.isRequesterAnIAMUser()) {
return authorizeUserAccessKey(authInfo, level, authorizedResources, request.logger);
}
return authorizeAccountAccessKey(authInfo, level, authorizedResources, request.logger);
}
module.exports = {
translateAndAuthorize,
vault,
};

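A minimal sketch of how a metrics route might consume translateAndAuthorize; the request shape (headers, query, ip, ctx, logger) is assumed from the RequestContext construction in client.js, and the action/level/resource values are illustrative:

const { translateAndAuthorize } = require('./libV2/vault');

async function handleListMetrics(request) {
    const [authed, authedResources] = await translateAndAuthorize(
        request, 'ListMetrics', 'buckets', ['example-bucket'],
    );
    if (!authed) {
        throw new Error('access denied'); // real handlers would use errors.AccessDenied
    }
    return authedResources; // [{ resource, id }, ...]
}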
View File

@@ -1,50 +1,29 @@
const { Warp10 } = require('@senx/warp10');
const needle = require('needle');
const assert = require('assert');
const { eventFieldsToWarp10, warp10EventType } = require('./constants');
const _config = require('./config');
const { LoggerContext } = require('./utils');
const errors = require('./errors');
const moduleLogger = new LoggerContext({
module: 'warp10',
});
class Warp10Client {
constructor(config) {
this._writeToken = (config && config.writeToken) || 'writeTokenCI';
this._readToken = (config && config.readToken) || 'readTokenCI';
this._nodeId = (config && config.nodeId) || _config.nodeId;
this._writeToken = (config && config.writeToken) || 'writeTokenStatic';
this._readToken = (config && config.readToken) || 'readTokenStatic';
this.nodeId = (config && config.nodeId) || _config.nodeId;
const proto = (config && config.tls) ? 'https' : 'http';
const requestTimeout = (config && config.requestTimeout) || 10000;
const connectTimeout = (config && config.connectTimeout) || 10000;
if (config && config.hosts) {
this._clients = config.hosts
.map(({ host, port }) => new Warp10(
`${proto}://${host}:${port}`,
requestTimeout,
connectTimeout,
));
} else {
const host = (config && config.host) || 'localhost';
const port = (config && config.port) || 4802;
this._clients = [new Warp10(`${proto}://${host}:${port}`, requestTimeout, connectTimeout)];
}
this._requestTimeout = (config && config.requestTimeout) || 30000;
this._connectTimeout = (config && config.connectTimeout) || 30000;
const host = (config && config.host) || 'localhost';
const port = (config && config.port) || 4802;
this._client = new Warp10(`${proto}://${host}:${port}`, this._requestTimeout, this._connectTimeout);
}
async _wrapCall(func, params) {
// eslint-disable-next-line no-restricted-syntax
for (const client of this._clients) {
try {
// eslint-disable-next-line no-await-in-loop
return await func(client, ...params);
} catch (error) {
moduleLogger.warn('error during warp10 operation, failing over to next host',
{ statusCode: error.statusCode, statusMessage: error.statusMessage, error });
}
}
moduleLogger.error('no remaining warp10 hosts to try, unable to complete request');
throw errors.InternalError;
async update(payload) {
return this._client.update(this._writeToken, payload);
}
static _packEvent(valueType, event) {
@@ -58,12 +37,12 @@ class Warp10Client {
}
_buildGTSEntry(className, valueType, labels, event) {
const _labels = this._clients[0].formatLabels({ node: this._nodeId, ...labels });
const _labels = this._client.formatLabels({ node: this.nodeId, ...labels });
const packed = Warp10Client._packEvent(valueType, event);
return `${event.timestamp}// ${className}${_labels} ${packed}`;
}
async _ingest(warp10, metadata, events) {
async ingest(metadata, events) {
const { className, valueType, labels } = metadata;
assert.notStrictEqual(className, undefined, 'you must provide a className');
const payload = events.map(
@@ -74,17 +53,10 @@
ev,
),
);
const res = await warp10.update(this._writeToken, payload);
const res = await this.update(payload);
return res.count;
}
ingest(...params) {
return this._wrapCall(
this._ingest.bind(this),
params,
);
}
_buildScriptEntry(params) {
const authInfo = {
read: this._readToken,
@@ -104,21 +76,15 @@
return payload.join('\n');
}
async _exec(warp10, params) {
async exec(params) {
const payload = this._buildExecPayload(params);
const resp = await warp10.exec(payload);
const resp = await this._client.exec(payload);
moduleLogger.info('warpscript executed', { stats: resp.meta });
return resp;
}
exec(...params) {
return this._wrapCall(
this._exec.bind(this),
params,
);
}
async _fetch(warp10, params) {
const resp = await warp10.fetch(
async fetch(params) {
const resp = await this._client.fetch(
this._readToken,
params.className,
params.labels || {},
@@ -130,15 +96,46 @@
return resp;
}
fetch(...params) {
return this._wrapCall(
this._fetch.bind(this),
params,
async delete(params) {
const {
className, labels, start, end,
} = params;
assert.notStrictEqual(className, undefined, 'you must provide a className');
assert.notStrictEqual(start, undefined, 'you must provide a start timestamp');
assert.notStrictEqual(end, undefined, 'you must provide an end timestamp');
const query = new URLSearchParams([]);
query.set('selector', encodeURIComponent(className) + this._client.formatLabels(labels || {}));
query.set('start', start.toString());
query.set('end', end.toString());
const response = await needle(
'get',
`${this._client.url}/api/v0/delete?${query.toString()}`,
{
// eslint-disable-next-line camelcase
open_timeout: this._connectTimeout,
// eslint-disable-next-line camelcase
response_timeout: this._requestTimeout,
headers: {
'Content-Type': 'text/plain; charset=UTF-8',
'X-Warp10-Token': this._writeToken,
},
},
);
return { result: response.body };
}
}
const clients = _config.warp10.hosts.map(
val => new Warp10Client({
readToken: _config.warp10.readToken,
writeToken: _config.warp10.writeToken,
connectTimeout: _config.warp10.connectTimeout,
requestTimeout: _config.warp10.requestTimeout,
...val,
}),
);
module.exports = {
Warp10Client,
client: new Warp10Client(_config.warp10),
clients,
};

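With failover removed from Warp10Client itself, callers can iterate the exported clients array instead. A sketch using the iterIfError helper exported from utils above (its (values, func, onError) signature is assumed from the exports shown earlier):

const { clients } = require('./libV2/warp10');
const { iterIfError } = require('./libV2/utils');

async function ingestWithFailover(metadata, events) {
    // Try each warp10 host in turn, logging and moving on after a failure
    return iterIfError(clients, client => client.ingest(metadata, events),
        error => console.warn('warp10 host failed, trying next', error));
}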
View File

@@ -46,6 +46,9 @@
"s3:PutObjectRetention": 0,
"s3:GetObjectRetention": 0,
"s3:PutObjectLegalHold": 0,
"s3:GetObjectLegalHold": 0
"s3:GetObjectLegalHold": 0,
"s3:ReplicateObject": 0,
"s3:ReplicateTags": 0,
"s3:ReplicateDelete": 0
}
}

View File

@@ -101,6 +101,12 @@ components:
type: string
level:
type: string
utapi-get-prometheus-metrics:
description: metrics to be ingested by prometheus
content:
text/plain:
schema:
type: string
parameters:
level:
in: path
@@ -133,6 +139,16 @@ paths:
$ref: '#/components/responses/json-error'
200:
description: Service is healthy
/_/metrics:
get:
x-router-controller: internal
x-iplimit: true
operationId: prometheusMetrics
responses:
default:
$ref: '#/components/responses/json-error'
200:
$ref: '#/components/responses/utapi-get-prometheus-metrics'
/v2/ingest:
post:
x-router-controller: metrics

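The new /_/metrics route serves a text/plain Prometheus exposition. A hypothetical scrape from a local instance (host and port are illustrative):

const http = require('http');

http.get('http://localhost:8100/_/metrics', res => {
    let body = '';
    res.on('data', chunk => { body += chunk; });
    res.on('end', () => console.log(body));
});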
Some files were not shown because too many files have changed in this diff