fix: namespace imports should not change project #1515


Merged

matttrach merged 5 commits into rancher:master on Jun 9, 2025

Conversation

@matttrach (Collaborator) commented May 23, 2025

Problem

Importing a namespace into your Terraform state should never result in a change to that namespace.

Solution

Remove the code that moves the namespace between projects when a different project is specified.
Instead, return an error stating that the specified project doesn't match the actual resource. A rough sketch of the new behavior follows.
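
As a rough illustration (not the PR's exact code; the function name and error wording here are assumptions), the import path now rejects a mismatched project instead of reassigning the namespace:

```go
package provider

import "fmt"

// checkNamespaceProject is a hypothetical sketch of the import-time guard:
// if the project ID supplied during import doesn't match the project the
// namespace actually belongs to, fail instead of moving the namespace.
func checkNamespaceProject(requestedProjectID, actualProjectID string) error {
	if requestedProjectID != actualProjectID {
		return fmt.Errorf("namespace belongs to project %q, not %q: refusing to modify the namespace on import",
			actualProjectID, requestedProjectID)
	}
	return nil
}
```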

Testing

The majority of the work in this PR is in the testing.
Testing terraform import can be a bit difficult because it requires orchestrating multiple Terraform states, so I chose to orchestrate Terraform with Terraform. In the example I generate a Rancher node, then a downstream cluster, and finally a namespace.
I created a local module named "deploy" which deploys a Terraform module in its own state. (This can actually be pretty helpful if you need to break complex or large builds into separate states.) The deploy module deploys a local module named "import" which imports the namespace and the downstream cluster, verifying that the namespace can still be imported even with the project-change code removed; a sketch of the test harness follows.
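
For context, a minimal Terratest harness for driving such an example might look like this (the directory path is an assumption, not the PR's actual layout):

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
)

// TestDownstreamImport applies the root example; the nested "deploy" module
// then runs the "import" module in its own state, which performs the
// actual terraform import of the namespace and downstream cluster.
func TestDownstreamImport(t *testing.T) {
	opts := &terraform.Options{
		TerraformDir: "../examples/downstream_import", // hypothetical path
	}
	defer terraform.Destroy(t, opts) // tear down even if the test fails
	terraform.InitAndApply(t, opts)
}
```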

I tested this using the included run_tests.sh in the ./scripts directory: ./scripts/run_tests.sh -t TestDownstreamImport.
The script builds the provider locally and ensures that Terraform uses it when executing the import example; it does this with a .terraformrc that it generates on the fly. The script also assumes you have certain dependencies and environment variables set, which are listed in the flake.nix and .envrc files in the repo.

A big part of the run_tests.sh script is the work that went into making sure it can clean up if there are any problems with the build. Using an open-source tool called "leftovers", which is designed exactly for this purpose, I am able to find and remove any objects left behind in AWS after a failed test run.

QA Testing Considerations

I am not sure why the project is linked to the namespace so tightly. I deployed a rancher_cluster_v2, but had to get the project_id from a data_rancher_cluster resource in order to import the namespace.
Testing Terraform imports automatically is tricky, and I did a few unusual things to avoid dynamically writing Terraform code to validate the error. Instead of adding dynamic resources or overrides, I added a variable, "project_mismatch", which, when set to true, triggers the failure path. I expect the error in Terratest and fail the test if it isn't returned; see the sketch after this paragraph.
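
A minimal sketch of that assertion, assuming the example exposes a project_mismatch variable and reusing the harness above with Terratest's error-returning apply variant:

```go
// TestDownstreamImportProjectMismatch is a hypothetical failure-path test:
// applying with project_mismatch=true must surface the new import error.
func TestDownstreamImportProjectMismatch(t *testing.T) {
	opts := &terraform.Options{
		TerraformDir: "../examples/downstream_import", // hypothetical path
		Vars: map[string]interface{}{
			"project_mismatch": true,
		},
	}
	defer terraform.Destroy(t, opts) // clean up anything that did get created
	if _, err := terraform.InitAndApplyE(t, opts); err == nil {
		t.Fatal("expected apply to fail when the project ID does not match the namespace's project")
	}
}
```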

Signed-off-by: matttrach <matt.trachier@suse.com>
(cherry picked from commit 73050d6)
matttrach added 2 commits May 27, 2025 17:19
Signed-off-by: matttrach <matt.trachier@suse.com>
Signed-off-by: matttrach <matt.trachier@suse.com>
@matttrach marked this pull request as ready for review May 27, 2025 22:38
@matttrach added this to the May Release milestone May 27, 2025
Signed-off-by: matttrach <matt.trachier@suse.com>
@matttrach (Collaborator, Author):

The most relevant change is here

@matttrach (Collaborator, Author):

Here are the docs that go along with the change.

```diff
-gpg --verify "${SHASUM_FILE}.sig" "${SHASUM_FILE}"
-if [ $? -eq 0 ]; then
+if ! gpg --verify "${SHASUM_FILE}.sig" "${SHASUM_FILE}"; then
```
@matttrach (Collaborator, Author):

This was a shellcheck suggestion.

@jiaqiluo (Member) left a comment:

LGTM! Just a minor nit, but I’ll go ahead and approve so it doesn’t block merging.


```go
client, err := meta.(*Config).ClusterClient(clusterID)
if err != nil {
	log.Printf("[INFO] Problem getting cluster client for cluster with id \"%s\"", clusterID)
```
@jiaqiluo (Member):

nit

Suggested change:

```diff
-log.Printf("[INFO] Problem getting cluster client for cluster with id \"%s\"", clusterID)
+log.Printf("[ERROR] Problem getting cluster client for cluster with id \"%s\"", clusterID)
```

@jiaqiluo (Member) commented Jun 5, 2025

I'd suggest making an issue to track the changes and linking it to this PR, if there isn't one already.
This can be considered a breaking change, so it should be called out in the release notes.

@HarrisonWAffel (Collaborator) left a comment:

Nice! LGTM. I agree with Jiaqi, though; we might want to make an issue to make this change more visible to others when they update their provider versions.

@matttrach merged commit ab77bd4 into rancher:master on Jun 9, 2025
6 checks passed
@matttrach (Collaborator, Author):
#1516
Thank you!
