feat: Ability to manage worker groups as maps #858
Conversation
Thanks @grzegorzlisowski for opening this PR. This is actually something we want to add to this module, dropping count-based loops entirely in favor of for/for_each. That said, it would be nice to split the feature into a new submodule. We started a discussion in #774. @js-timbirkett may be busy at the moment, so if you can open a PR to address #774 it would be much appreciated.
@@ -38,11 +37,27 @@ locals {
    }
  ]

  auth_launch_template_worker_roles_ext = [
    for k, v in local.worker_group_launch_template_configurations_ext : {
      worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${var.manage_worker_iam_resources ? aws_iam_instance_profile.workers_launch_template_ext[k].role : data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile_ext[k].role_name}"
mbarrien
May 6, 2020
Need to replace the :aws: portion of the ARN with :${data.aws_partition.current.partition}: (same in auth_worker_roles_ext below)
grzegorzlisowski
May 6, 2020
Author
fixed
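The fix mbarrien asks for can be sketched as follows (a simplified sketch based on the diff above, not the final committed code; the local name is illustrative):

```hcl
# Resolve the partition (aws, aws-cn, aws-us-gov, ...) instead of
# hardcoding "aws", so the ARN is valid in all AWS partitions.
data "aws_partition" "current" {}

locals {
  # Illustrative ARN construction using the partition data source.
  worker_role_arn_example = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/example-role-name"
}
```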
  name = each.value["iam_instance_profile_name"]
}

data "aws_region" "current" {}
mbarrien
May 6, 2020
Doesn't seem to be used. Remove?
grzegorzlisowski
May 6, 2020
Author
fixed
force-pushed from 9820bb7 to 695030c
dynamic mixed_instances_policy {
  iterator = item
  for_each = ((lookup(var.worker_groups[each.key], "override_instance_types", null) != null) || (lookup(var.worker_groups[each.key], "on_demand_allocation_strategy", null) != null)) ? list(each.value) : []
barryib
May 8, 2020
Member
Is var.worker_groups[each.key] different from each.value? If not, I'd prefer to use each.value. It would be more consistent and readable.
grzegorzlisowski
May 8, 2020
Author
Unfortunately, it is different according to my analysis. This is only to avoid changing the old logic. See here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/workers_launch_template.tf#L93
grzegorzlisowski
May 8, 2020
Author
Sorry, responded too early.
This is what comes in as the user's argument (var.worker_groups_launch_template[count.index]), while each.value is already merged with the module defaults, which IMHO could change the original behaviour.
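For context, the defaults merge the author refers to typically looks like the following (a simplified sketch; workers_group_defaults_example is a hypothetical name, not the module's actual local):

```hcl
locals {
  # Hypothetical module defaults for a worker group.
  workers_group_defaults_example = {
    instance_type = "m5.large"
    asg_max_size  = 3
  }

  # each.value in the resources is this merged map, not the raw user
  # input from var.worker_groups, hence the difference discussed above.
  worker_group_configurations_example = {
    for name, wg in var.worker_groups :
    name => merge(local.workers_group_defaults_example, wg)
  }
}
```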
dynamic launch_template {
  iterator = item
  for_each = ((lookup(var.worker_groups[each.key], "override_instance_types", null) != null) || (lookup(var.worker_groups[each.key], "on_demand_allocation_strategy", null) != null)) ? [] : list(each.value)
barryib
May 8, 2020
Member
Is var.worker_groups[each.key] different from each.value?
grzegorzlisowski
May 8, 2020
Author
This is the same as the previous case above.
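The pattern under discussion is a dynamic block that is emitted only when certain keys are set on the worker group. A minimal standalone sketch (resource body and names are illustrative, not the module's full resource):

```hcl
resource "aws_autoscaling_group" "workers_example" {
  for_each = var.worker_groups

  # mixed_instances_policy is generated only when the group opts into
  # spot/override settings; otherwise it is omitted and a plain
  # launch_template block (the mirror-image condition) is used instead.
  dynamic "mixed_instances_policy" {
    iterator = item
    for_each = (
      lookup(var.worker_groups[each.key], "override_instance_types", null) != null ||
      lookup(var.worker_groups[each.key], "on_demand_allocation_strategy", null) != null
    ) ? [each.value] : []

    content {
      # ... instance distribution and overrides ...
    }
  }
}
```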
I was wondering if we shouldn't drop this directly. For existing worker groups, users will have to move resources in the state anyway, so why not move them directly into a map? For locals, shouldn't we move https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/local.tf#L34-L100 into the submodule?
My assumption was that users could apply the new module, add new worker groups using maps, migrate K8s resources to the new nodes, and then remove the old groups simply by removing them from the list. I left those locals in their original place because I assumed we still use them in the "legacy" worker definitions. If we drop the old worker_groups definitions entirely, then those should be moved.
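As an aside on the state-move alternative barryib raises: in Terraform 1.1+ (which postdates this PR) the count-to-map migration can be expressed declaratively with a moved block instead of manual state surgery. A hedged sketch with illustrative addresses:

```hcl
# Assumes Terraform >= 1.1. Maps the old count-indexed resource address
# to the new map key, so terraform plan records a rename rather than a
# destroy/create. Resource name and key here are hypothetical.
moved {
  from = aws_autoscaling_group.workers[0]
  to   = aws_autoscaling_group.workers["group-a"]
}
```

On older Terraform versions the equivalent is a manual `terraform state mv` for each worker group.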
force-pushed from 695030c to d5582b6
Do we know what is blocking this?
I presume review and approval? I'm not sure why the "Semantic Pull Request" check is not executing.
We could use worker_groups as a list of maps with a unique name key instead of a map, so that it would be cleaner, and we could do ..
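One way to read this suggestion (the commenter's own snippet is elided above, so this is a hedged sketch): keep the user-facing variable a list of maps, each carrying a unique name, and derive the for_each map from it so resource addresses stay stable:

```hcl
variable "worker_groups" {
  # List of maps, each with a unique "name" key.
  type    = list(map(string))
  default = []
}

locals {
  # Convert the list to a map keyed by name, so for_each addresses
  # do not shift when entries are added or removed mid-list.
  worker_groups_by_name = { for wg in var.worker_groups : wg["name"] => wg }
}
```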
}

data "aws_ami" "eks_worker" {
  filter {
venky999
Jun 4, 2020
Can we use for_each and add support for worker_node_version for each worker group, with the default value being cluster_version when worker_node_version is not specified?
grzegorzlisowski
Jun 23, 2020
Author
What do you mean by 'worker_node_version'?
It could be done, but I presume it might make sense to keep the same approach as for
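What venky999 asks for could be sketched like this (a hypothetical illustration of the request, not code from this PR; the local name is made up):

```hcl
# Each worker group may set "worker_node_version"; when it is absent,
# the group falls back to the cluster-wide version.
locals {
  worker_ami_version_example = {
    for name, wg in var.worker_groups :
    name => lookup(wg, "worker_node_version", var.cluster_version)
  }
}
```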
force-pushed from bf35ebe to 8d8bd48
Errors sometimes appear randomly without changing anything in the code.
Issues happen when adding a second group to
@ZeroDeth
force-pushed from 1cbf970 to 87edcca
force-pushed from 87edcca to da785fc


PR o'clock
Description
Resolves #774
The change is intended to improve the ability to manage worker groups using maps, which should allow worker groups to be added and removed more flexibly (improving this: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-safely-remove-old-worker-groups).
The change implements the changes suggested in #774, including:
Change to the worker groups definitions:
worker_groups = {}
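For illustration, a map-style definition could look like the following (group names and values are hypothetical, not taken from this PR):

```hcl
worker_groups = {
  group-a = {
    instance_type        = "m5.large"
    asg_desired_capacity = 2
  }
  group-b = {
    instance_type = "t3.medium"
    spot_price    = "0.02"
  }
}
```

With for_each over this map, each group's resource address is keyed by its name (e.g. `aws_autoscaling_group.workers["group-a"]`), so removing one group no longer re-indexes the others the way count-based lists do.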
Checklist