Injecting Secrets in GKE with Secret Manager

Alessio Trivisonno
5 min read · May 5, 2021

Applications need to read secret values, for example to connect to a database or to call an API. Provisioning these secrets in Kubernetes is one of the hottest topics I had to measure myself against when building a pipeline for Continuous Delivery.

In more detail, I was searching for answers to these questions:

  • Where do I safely store the secrets?
  • How do I get a secret from the secret store to the application?
  • How do I keep track of changes to a secret?
  • Can I roll back a secret?

I was using Helm to template Kubernetes manifests and I wanted to store them on GCS before applying them with Spinnaker. This is very useful for audit and troubleshooting as well, but it gave me a bit of a headache because the baked manifests contained the secrets. How could I avoid all the credentials ending up in a poorly protected GCS bucket? I had to find a way to pass the secrets to my app so that they left no trace in the baked manifests.

Looking around, I saw that a common approach is to deploy the secret separately from the application (maybe manually with a simple kubectl apply -f secret.yaml), but this approach diverges from the automation principles of CD and doesn’t offer you the possibility to track changes to the secret, let alone roll it back.

Looking more deeply I found these three approaches that could somehow suit my needs:

  • Encrypt secrets in the Git repo and decrypt them at runtime/deploy time
  • Use Hashicorp Vault
  • Use GCP Secret Manager (or equivalent) to provision secrets

But which one is the most secure? Or which one is the easiest to use? And what are the pros and cons of each approach?

Decrypting secrets from Git is pretty easy to do, but if you think about it, it just moves the problem: how do I provision the decryption key? And more importantly, where do I store the decryption key in a way that allows me to have automated deployments? 🙄

Hashicorp Vault: this is pretty appealing and surely the most secure way to deploy secrets in Kubernetes. But I needed something simpler to manage and more cost effective 🤕.

Well, if you find yourself in a similar situation, I would suggest you have a look at GCP Secret Manager.

What is GCP Secret Manager?

GCP Secret Manager is a storage system for credentials and sensitive data in general. It is very convenient ($0.06 per version per location) and fulfills most of our requirements:

👍 Access Control with IAM

👍 Versioning of secrets

👍 Audit Logs

👍 AES-256 encryption

👍 API support

You can create a secret very easily from the Console or from the CLI, and access it from the CLI like this:

gcloud secrets versions access latest --secret=my-super-super-secret-test
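
Creating the secret in the first place is just as simple. A sketch of the CLI commands, assuming the same secret name as above (the value piped in is a placeholder):

```shell
# Create the secret (automatic replication), then add a first version from stdin
gcloud secrets create my-super-super-secret-test --replication-policy="automatic"
printf 's3cr3t-value' | gcloud secrets versions add my-super-super-secret-test --data-file=-
```

Each `versions add` creates a new immutable version, which is what makes rollback possible later.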

Secret Provisioning for Kubernetes

So now that we have seen how to create and retrieve secrets from GCP Secret Manager, there are two methods of using it in Kubernetes, depending on your needs:

  • Use an environment variable approach, like with kube-secrets-init ( see here ): a Kubernetes mutating admission webhook that mutates any K8s Pod using specially prefixed environment variables
  • Provision your secrets with a custom script

Which one you choose is up to you and your application needs. The first approach seems more automatic and easier to use, but, in my opinion, I would prefer going for the second one, because I have seen that environment variables are more easily leaked (think about a badly protected php_info page 😱).

OK, but how do I do it?

1. Write a small script (like the one below) that pulls the secrets from Secret Manager and outputs them to a file. Your application will read the file at container runtime.

#!/bin/bash
# Fetch the requested secret versions and write them to a file on the shared volume
mkdir -p /secrets
cat > /secrets/secrets.yaml <<EOF
secrets:
  db_password: "$(gcloud secrets versions access $DB_PASS_VERSION --secret=DB_PASS)"
  api_key: "$(gcloud secrets versions access $API_KEY_VERSION --secret=API_KEY)"
EOF
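
For reference, with both version variables set to "1" the script produces a /secrets/secrets.yaml shaped like this (the values are placeholders for whatever is stored in Secret Manager):

```yaml
secrets:
  db_password: "<value of DB_PASS, version 1>"
  api_key: "<value of API_KEY, version 1>"
```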

2. Build a Docker image containing this script

FROM google/cloud-sdk:332.0.0-alpine
COPY ./get-secrets.sh /get-secrets.sh
RUN chmod +x /get-secrets.sh
WORKDIR /

3. Run the image as an init-container with a mounted emptyDir volume

apiVersion: apps/v1
kind: Deployment
...
initContainers:
- name: secret-injector
  image: your.repo/your-secret-injector-image:version
  command:
  - "sh"
  - "-c"
  - "/get-secrets.sh"
  env:
  - name: DB_PASS_VERSION
    value: "1"
  - name: API_KEY_VERSION
    value: "1"
  volumeMounts:
  - name: secrets
    mountPath: "/secrets"
...
volumes:
- name: secrets
  emptyDir: {}

(See Kubernetes init containers and Kubernetes emptyDir volumes for more info)

4. Now you can use it in your application containers

apiVersion: apps/v1
kind: Deployment
...
containers:
- image: your-repo/your-app:version
  name: app
  volumeMounts:
  - name: secrets
    mountPath: "/app/secrets.yaml"
    readOnly: true
    subPath: "secrets.yaml"
...
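
At startup, the application simply reads the mounted file. A minimal sketch in shell, using a file under /tmp to stand in for the mounted secrets.yaml (the key names match the init-container script; a real application would use a proper YAML parser):

```shell
# Stand-in for the file the init container writes (mounted at /app/secrets.yaml in the pod)
cat > /tmp/secrets.yaml <<EOF
secrets:
  db_password: "s3cr3t"
  api_key: "abc123"
EOF

# Extract one value at application startup
DB_PASSWORD="$(sed -n 's/^ *db_password: *"\(.*\)"$/\1/p' /tmp/secrets.yaml)"
echo "$DB_PASSWORD"
```

Because the file lives on an emptyDir volume, the secret values never appear in the manifest, in the image, or in the container's environment.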

5. All you need now is a way to authenticate your pods to GCP in order to pull the secrets. How do we do that?

credentials.json? Of course NOT 📢📣!!

The answer is Workload Identity:

apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    spec:
      serviceAccountName: your-service-account-name
...
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: your-service-account-name
  annotations:
    iam.gke.io/gcp-service-account: your-gcp-email-sa@<your-project>.iam.gserviceaccount.com
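
On the GCP side, the Kubernetes service account must be allowed to impersonate the Google service account, and the latter needs read access to the secrets. A sketch of the two bindings, assuming the names used above (project and namespace are placeholders):

```shell
# Let the Kubernetes SA impersonate the Google SA via Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  your-gcp-email-sa@<your-project>.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:<your-project>.svc.id.goog[<your-namespace>/your-service-account-name]"

# Let the Google SA read secret versions
gcloud projects add-iam-policy-binding <your-project> \
  --member="serviceAccount:your-gcp-email-sa@<your-project>.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```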

There are plenty of articles that explain how to use Workload Identity excellently; here are some references if you need to go deeper into the subject:

🔍 https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity

🔍 https://medium.com/google-cloud/mapping-kubernetes-service-accounts-to-gcp-iams-using-workload-identity-b53496d543e0

Advantages of this approach

  • As your manifests no longer contain any hard-coded secrets, you can store them more safely, for example in GCS for auditing
  • Secrets are versioned, so you can safely create a new version, update the manifest environment variables and push the change
  • As your secret injector is a piece of code, you can version it and publish it alongside your application image, which makes changes to the script and to the versions very safe to do. In the worst case, your application will fail the init container and traffic will never be served to the pod (this is even better when combined with a blue/green deployment strategy)

Thank you !

Thank you for your time reading my article.

The objective of this article was to give you the gist of what I think could be a good way to solve this problem, but as I found out during my research, there seems to be no silver bullet. Every alternative has to be evaluated under its constraints. If you have any suggestions or if you discover a better solution, please let me know down in the comments.

I hope it was useful for you 😀 If so, give a 👏 and stay tuned for more articles.

Thank you ! 🍻
