JCS-14392 - Issue with volume attachments on scale-out #245

Merged
merged 2 commits into development from topic_robesanc_JCS-14389 on Mar 22, 2024

Conversation

roberto-sanchez-herrera
Member

  • Make the keys of the maps of compute and volume resources end with two digits so that the lexicographical iteration order of the maps matches the numeric instance index; this prevents volume attachments from being reassigned to other instances when the list of compute instances is iterated (see the sketch below).
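
A minimal sketch, assuming a hypothetical "demo-data-block" prefix and a standalone configuration rather than the stack's actual modules, of why the padding matters: Terraform orders map keys lexicographically, so without padding the key for instance 10 sorts between instances 1 and 2 once the count passes 9, shifting the positions that downstream resources are matched against.

# Minimal standalone sketch; the "demo-data-block" names are hypothetical.
locals {
  unpadded = { for x in range(11) : "demo-data-block-${x}" => x }
  padded   = { for x in range(11) : "demo-data-block-${format("%02d", x)}" => x }
}

output "unpadded_order" {
  # keys() returns keys in lexicographical order:
  # demo-data-block-0, demo-data-block-1, demo-data-block-10, demo-data-block-2, ...
  value = keys(local.unpadded)
}

output "padded_order" {
  # demo-data-block-00, demo-data-block-01, ..., demo-data-block-10
  # so the iteration order matches the numeric index
  value = keys(local.padded)
}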

Tests:

  • Created a non-JRF stack with new VCN, and two nodes
  • Scaled up the stack to 4 nodes, verified the apply job completed successfully and that all servers were added.
  • Scaled up the stack to 10 nodes, and verified the same points above
  • Scaled up the stack to 11 nodes, repeated the verifications above, and confirmed that the existing block volume attachments and block volumes were not affected
  • Scaled up the stack to 20 nodes, and made the same verifications above
  • Scaled up the stack to 30 nodes, and made the same verifications above
  • Scaled down the stack to 10 nodes. Verified that only the artifacts for nodes 29 down to 10 were deleted, and that the rest of the servers were still running

- Make the keys of the maps of compute and volume resources end with
  two digits so that the lexicographical iteration order of the maps
  matches the numeric instance index; this prevents volume attachments
  from being reassigned to other instances when the list of compute
  instances is iterated
@@ -18,7 +18,7 @@ module "middleware-volume" {
 
 module "data-volume" {
   source = "../volume"
-  bv_params = { for x in range(var.num_vm_instances) : "${var.resource_name_prefix}-data-block-${x}" => {
+  bv_params = { for x in range(var.num_vm_instances) : "${var.resource_name_prefix}-data-block-${format("%02d", x)}" => {
     ad = var.use_regional_subnet ? local.ad_names[(x + local.admin_ad_index) % length(local.ad_names)] : var.availability_domain
     compartment_id = var.compartment_id
     display_name = "${var.resource_name_prefix}-data-block-${x}"
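
For concreteness, a hypothetical example (assuming resource_name_prefix = "jcs" and x = 1) of what the change above produces; only the map key is zero-padded, while display_name keeps the single-digit suffix, which is what the review comment below refers to:

# Hypothetical values, assuming resource_name_prefix = "jcs" and x = 1:
#   map key      = "jcs-data-block-01"   (zero-padded; controls iteration order)
#   display_name = "jcs-data-block-1"    (left unpadded by this change)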
Contributor


The change looks good to me.

nitpick:

I noticed you chose to keep the display name single-digit (for example, the display name ends in "-1" while the corresponding data-block key ends in "-01"). This is not a functional problem.

Member Author


Good observation. It was done on purpose; I wanted to make the smallest possible change. There might be code that checks the server names, VM display names, etc. I am not sure, and I do not want to break such code, if any exists.
The keys of the maps used to create the resource collections are not used by any code outside Terraform; of that I am sure.

@roberto-sanchez-herrera roberto-sanchez-herrera merged commit 2cfb830 into development Mar 22, 2024
@roberto-sanchez-herrera roberto-sanchez-herrera deleted the topic_robesanc_JCS-14389 branch March 22, 2024 15:46
skommala pushed a commit that referenced this pull request Mar 23, 2024
- Make the keys of the maps of compute and volume resources end with two
digits so that the lexicographical iteration order of the maps matches
the numeric instance index; this prevents volume attachments from being
reassigned to other instances when the list of compute instances is
iterated

Tests:
- Created a non-JRF stack with new VCN, and two nodes
- Scaled up the stack to 4 nodes, verified the apply job completed
successfully and that all servers were added.
- Scaled up the stack to 10 nodes, and verified the same points above
- Scaled up the stack to 11 nodes, made the same verifications
above, and verified that the existing block volume attachments and block
volumes were not affected
- Scaled up the stack to 20 nodes, and made the same verifications above
- Scaled up the stack to 30 nodes, and made the same verifications above
- Scaled down the stack to 10 nodes. Verified that only the artifacts 29
to 10 were deleted, and the rest of the servers were still running

(cherry picked from commit 2cfb830)