All posts by tpryan

Makefile – Start, Stop or Delete 20 VMs at once

Last time I created 20 virtual machines at once.  Now I want to stop those machines, or start them back up, or delete them. Basically, I want to do bulk operations on all of the machines that I am using in this scenario.

If you look at the create 20 VMs post, I gave each one of them a similar name, based on the pattern “load-xxx” where load is the operation I am using them for and xxx is a three digit sequential id with 0s prefixed. (This makes them order correctly in our UI.)

Because I know their names, I can count them up and not have to explicitly tell these operations how many machines I want to operate on.  To do that, I create a make variable that contains the count of all VMs prefixed by “load.”
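Something like this does it (the exact gcloud flags are up to you; this just counts the instances whose names start with “load”):

COUNT := $(shell gcloud compute instances list --format="value(name)" | grep -c "^load")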

Once I have that, I can perform batch operations very simply.

To stop 20 running VMs:
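Here is a rough sketch of what I mean. The zone is a placeholder, and stop-node is just what I’m calling the function in this sketch:

define stop-node
	gcloud compute instances stop $(1) --zone us-central1-b --quiet ; \
	echo "$(1) stopped"
endef

stop:
	@i=1 ; while [ $$i -le $(COUNT) ] ; do \
		server=$$(printf "load-%03d" $$i) ; \
		($(call stop-node,$$server)) & \
		i=$$((i + 1)) ; \
	done ; \
	wait

The wait at the end is optional; it just keeps make from returning before all of the background gcloud calls finish.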

Just to explain, like the previous post, we loop from 1 to COUNT, creating a variable that contains the name of our server, and calling a function that runs the gcloud compute instances stop command. Why is this a separate function? Because I usually do more than just stop the VM.

I also wrap the call in parentheses and append the & to allow multiple calls to execute in parallel.

To start them back up:
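Starting them back up is the same loop with a different gcloud verb (I’ve inlined the call here to keep the sketch short; the zone is still a placeholder):

start:
	@i=1 ; while [ $$i -le $(COUNT) ] ; do \
		server=$$(printf "load-%03d" $$i) ; \
		(gcloud compute instances start $$server --zone us-central1-b --quiet) & \
		i=$$((i + 1)) ; \
	done ; \
	wait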

To delete them all:
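And roughly like this (again, the zone is a placeholder):

delete:
	@i=1 ; while [ $$i -le $(COUNT) ] ; do \
		server=$$(printf "load-%03d" $$i) ; \
		(gcloud compute instances delete $$server --zone us-central1-b --delete-disks all --quiet) & \
		i=$$((i + 1)) ; \
	done ; \
	wait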

And in delete, I do a little bit more. I make sure all of the disks are deleted, and I set the request to quiet. Why? Because I don’t want to confirm this 20 times, silly.

In any case, doing batch operations on my set of VMs is as easy as:
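make stop
make start
make delete

(Those target names are just the ones from the sketches above; use whatever you like.)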

There you have it, fleets of VMs responding in concert to your requests.  As it should be.

Makefile – Launch 20 Compute Engine virtual machines at once.

We’re going to try something a lot more complex in make now. I’m going to dynamically create 20 Compute Engine virtual machines that are absolutely the same. This requires quite a bit more complexity, so we’ll break it down step by step.

Let’s start with the gcloud command to create an instance.  
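Stripped down, it is something like this; the zone and machine type here are just examples:

gcloud compute instances create load-001 --zone us-central1-b --machine-type n1-standard-1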

I encapsulated this into a Makefile function. Why? Well, as I have it here it’s pretty simple: create the node, then run apt-get update. But I usually do more than just create the node and install software. I often set environment information or start services, etc. So by putting all of the instance-specific instructions in a function, I make it just slightly easier to grok.
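Here is a sketch of that function; create-node is my name for it, and the zone and machine type are placeholders:

# All of the real work is wrapped in parentheses and joined with ";\"
# so the whole body expands to a single shell command.
define create-node
	(gcloud compute instances create $(1) --zone us-central1-b --machine-type n1-standard-1 ; \
	gcloud compute ssh $(1) --zone us-central1-b --command "sudo apt-get update" ; \
	echo "$(1) is ready")
endef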

Let’s go through this part step by step.  

  • Define a function with the define keyword, and end it with the endef keyword
  • It appears that functions must be one line, so use ;\ to organize multiple calls into one function
  • Wrap all of the real work in parentheses. Why? It turns the body into one operation, so that each step of the function doesn’t block parallel execution of other operations in the makefile.
  • Capture the first argument – $(1) – passed into this function – we’ll use it as the name of the instance
  • Create a machine using gcloud compute instances create. Note setting the machine type.  If you are creating a lot of instances, make sure you don’t run afoul of quota or spend.
  • SSH into machine and run apt-get update.
  • Tell us this machine is ready.   

Okay, that handles the instance creation, but now we have to loop through and create a variable number of machines. I said 20, but I often spin up anywhere from 10 to 150 using this method.
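Here is a sketch of that target. Since create-node above already wraps its body in parentheses, the call just needs the &; count is the variable you pass on the command line:

create:
	@i=1 ; while [ $$i -le $(count) ] ; do \
		server=$$(printf "load-%03d" $$i) ; \
		$(call create-node,$$server) & \
		i=$$((i + 1)) ; \
	done ; \
	wait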

Again, step by step:

  • Use @ so that the commands aren’t echoed to the output.
  • Set up a while loop with iterator – i, that will run as long as i is less than the explicitly passed variable named count
  • Use ;\ to make the command one logical line.
  • Use printf to create a variable named server to name the instances. In this case each instance is named “load-xxx” where xxx is a sequential id number for the node that always has three digits. This makes it easier to go back later and do more group operations on the entire set of machines. 
  • Call the function using the syntax $(call function_name,value_to_pass)
  • Wrap call in parentheses and append a &.  This shoves the call to the background so you can create 20, or 100, or 150 of these in parallel instead of sequentially.
  • We then increment the counter.   

Finally we call the whole thing with:
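make create count=20

(Assuming the target is named create, as in the sketch above.)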

Pretty straightforward. I frequently use this technique to launch a fleet of VMs to send large amounts of load at App Engine. Next I’ll tell you how to delete them all.

Don’t forget the count=N, or the call will bail.

Makefile – Tell me when you are done

I have been doing a large number of tasks lately that involve executing long-running processes from a Makefile — maybe somewhere in the neighborhood of 40 seconds to 5 minutes.  They’re just long enough that I get bored and go off and do something else, but short and urgent enough that I would really want to do something (usually manually test) right after the process is done. I need to not drift off into procrastination world.  I need to be alerted when my process completes.

I have taken to adding a ‘say’ command to my Makefiles. If you aren’t familiar, on OS X ‘say’ will have the computer speak out whatever you have written, using the Text-to-Speech voice set up in the OS. That way, when I am off being distracted in a browser window, a disembodied voice can startle me out of my reverie and I can jump right back in as soon as possible.

I usually do something like this:
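deploy:
	gcloud app deploy -q
	@say "Deployment is done"

(The deploy target and the gcloud command here are just an example of a long-running task; the say line is the point.)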


As always, your command names and mileage may vary. And turn down your volume.  Or don’t. 

Makefile – Clean App Engine flexible environment

One of the more interesting quirks of App Engine flexible environment is that App Engine launches Compute Engine virtual machines that you can’t spin down directly. The way to spin down App Engine flex is to delete all versions of the app.  This will close down all of the VMs, and shut down your App Engine app.

You can do it manually through the web interface, you can do it manually by listing versions in gcloud then deleting them, or you can have a Makefile do it for you.

First I use the trick I wrote about for capturing dynamic data from gcloud. Then I feed that list to a Makefile target that deletes the versions.
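A sketch of both pieces together (the variable and target names here are mine):

VERSIONS := $(shell gcloud app versions list --format="value(version.id)")

clean-flex:
	gcloud app versions delete $(VERSIONS) -q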

Note that I add -q to the command because I don’t want to be prompted; I just want them gone.

Makefile – Delete Forwarding Rules

I have a demo where I build a Kubernetes cluster on Container Engine to run a LAMP app. In the demo, I script out a complete build process from an empty project to the full running app.  Testing this requires a clean up that takes me all the way back to an empty project with no cluster.  

There is one thing I do not tear down – static IP addresses.  I don’t tear these down because they are locked to host names in Google Domains, and I use those IPs in my Kubernetes setup to make sure that my cluster app is available at a nice URL and not just a randomly assigned IP.

But I have been running into a problem with this. Sometimes the static IPs hold on to Forwarding Rules that are autogenerated with crazy randomized names by Container Engine. It appears to  happen only when I do a full clean.  I suspect that I am deleting the cluster before it has a chance to issue the command to delete the forwarding rules itself.

In any case, I got tired of dealing with this manually, so I made a Makefile solution. First I get the dynamic list of crazy random forwarding rule names using the Makefile technique I outlined earlier.  Then I pass that list to a gcloud command:
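Something along these lines; the variable and target names are mine, and the region is a placeholder:

RULES := $(shell gcloud compute forwarding-rules list --format="value(name)")

clean-rules:
	gcloud compute forwarding-rules delete $(RULES) --region us-central1 -q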

Note that I had to make sure I passed a region, otherwise the command would have prompted me to enter it manually.

Makefile – Get dynamic values from gcloud

Most of the time when I create something in my environment on Google Cloud Platform, I give it a specific name.  For example, I create servers and call them “ThingIWillReferenceLaterWhenIDeleteYou” or more boringly, “Server1.”

Having set names, as I alluded to, makes it easier to clean up after yourself. But there are some cases when you cannot name things when they are created. So it would be nice to get a list of these names. For example, App Engine flexible environment versions for cleaning up after a test.

You can get a list of them with this command:
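gcloud app versions list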

Which yields this:

Now normally I would have to add extra code to my Makefile to rip out the version names.

But gcloud actually has a robust formatting tool. So instead of running the command above I can run:
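gcloud app versions list --format=json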

And get the JSON representation, which looks like this:

Using a JSON parser might make this easier, but there is an even easier way:
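gcloud app versions list --format="value(version.id)"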

Which yields:

What will this do?

It will list just the value of version.id, and it will separate each record it returns with a space, not a line break. This allows me to drop this generated list into any command that takes multiple names and run them all at once. The gcloud CLI takes multiple arguments in this way.

So to make this applicable to Makefiles I have to do one more thing – take this data and put it in a variable.
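Like so (VERSIONS is just what I’m calling it here):

VERSIONS := $(shell gcloud app versions list --format="value(version.id)")

As a bonus, make’s $(shell) function turns any newlines in the output into single spaces, so the captured list is already in the right shape to drop into a single gcloud command.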

Here we are, ready to use this variable in other Make commands. This works for most of the other places in GCP where you see random values spit out, like IP forwarding rules and GKE nodes, to name two.

To learn more about how to filter and format your gcloud commands, check out the Google Cloud Platform Blog.


Makefile – quick series

Before joining Google Cloud, I wasn’t programming as much as I used to. So when I joined up, the last build system I had used with any regularity was ANT. Upon getting back into the routine of programming, I started down that path again, and immediately stopped. I did not want to deal with ANT and XML when I started back up. I also wasn’t doing anything even tangentially related to Java. So no, I’m not using ANT. I stopped doing single build files altogether and settled for folders of bash scripts.

This was… unsustainable.

Enter Mark Mandel and his constant exhortations to use Makefiles.  Eventually I listened to him, and now instead of folders of scripts, or line after line of XML, I have giant Makefiles.

Make is awesome.  And I know it is for more than just pushing files around, but that’s what I use it for.  And I love it.  

I’m running a short series on a number of productivity tips and tricks I’ve learned.  Many will be about Google Cloud. Some will not. I hope these help someone else learn to love Makefiles.

How Kubernetes Updates Work on Container Engine

I often get asked when I talk about Container Engine (GKE):

How are upgrades to Kubernetes handled?

Masters

As we spell out in the documentation, upgrades to Kubernetes masters on GKE are handled by us. They get rolled out automatically.  However, you can speed that up if you would like to upgrade before the automatic update happens.  You can do it via the command line:
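gcloud container clusters upgrade my-cluster --master

(Here my-cluster is a placeholder for your cluster name; add --cluster-version if you want to target a specific version.)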

You can also do it via the web interface as illustrated below.

GKE notifies you that upgrades are available.
You can then upgrade the master, if the automatic upgrade hasn’t happened yet.
Once there, you’ll see that the master upgrade is a one way trip.

Nodes

Updating nodes is a different story. Node upgrades can be a little more disruptive, and therefore you should control when they happen.  

What do I mean by “disruptive?”

GKE will take down each node of your cluster, killing the resident pods. If your pods are managed via a Replication Controller or are part of a Replica Set deployment, they will be rescheduled on other nodes of the cluster, and you shouldn’t see a disruption of the services those pods serve. However, if you are running a Pet Set deployment, using a single Replica to serve a stateful service, or manually creating your own pods, then you will see a disruption. Basically, if you are being completely “containery” then no problem. If you are trying to run a Pet as a containerized service, you can see some downtime if you do not intervene manually to prevent it. You can use a manually configured backup or another type of replica to do that, and you can also take advantage of node pools to help. But even if you don’t intervene, as long as anything you need to be persistent is hosted on a persistent disk, you will be fine after the upgrade.

You can perform a node update via the command line:
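gcloud container clusters upgrade my-cluster

(Again, my-cluster is a placeholder. Without --master, this upgrades the nodes, by default to the version the master is currently running.)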

Or you can use the web interface.

Again, you get the “Upgrade Available” prompt.
You have a bunch of options. (We recommend you stay within 2 minor revs of your master.)

A couple things to consider:

  • As stated in the caption above, we recommend you stay within 2 minor revs of your master. These recommendations come from the Kubernetes project, and are not unique to GKE.
  • Additionally, you should not upgrade the nodes to a version higher than the master. The web UI specifically prevents this. Again, this comes from Kubernetes.
  • Nodes don’t automatically update, but the masters eventually do. It’s possible for the masters to automatically update to a version more than 2 minor revs beyond the nodes, which can cause compatibility issues. So we recommend timely upgrades of your nodes. Minor revs come out about once every 3 months, so you are looking at doing this every 6 months or so.

As you can see, it’s pretty straightforward. There are a couple of things to watch out for, so please read the documentation.

Making Kubernetes IP addresses static on Google Container Engine

I’ve been giving a talk and demo about Kubernetes for a few months now, and during my demo, I have to wait for an ephemeral, external IP address from a load balancer to show off that Kubernetes does in fact work.  Consequently, I get asked “Is there any way to have a static address so that you can actually point a hostname at it?” The answer is: of course you can.

Start up your Kubernetes environment, making sure to configure a service with a load balancer.
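If you need a starting point, a minimal service of that shape looks something like this (the names, selector, and ports are placeholders for your own app):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080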

Once your app is up, make note of the External IP using kubectl get services.


Now go to the Google Cloud Platform Console -> Networking -> External IP Addresses.

Find the IP you were assigned earlier. Switch it from “Ephemeral” to “Static.” You will have to give it a name and it would be good to give it a description so you know why it is static.


Then modify your service (or service yaml file) to point to this static address. I’m going to modify the yaml.   
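The only change is adding loadBalancerIP to the spec, set to the address you just made static (the values below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  # 203.0.113.10 is a documentation placeholder; use the address you just made static
  loadBalancerIP: 203.0.113.10
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080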


Once your yaml is modified, you just need to apply it: kubectl apply -f service.yaml.

To prove that the IP address works, you should kubectl delete the service and then kubectl apply, but you don’t have to do that. If you do that though, please be aware that although your IP address is locked in, your load balancer still needs a little bit of time to fire up.  

Instead of this method, you can create a static IP address ahead of time and create the forwarding rules manually. That’s probably its own blog post, and I think it is just easier to let Container Engine do it.

I got lots of help for this post from wernight’s answer on StackOverflow, and the documentation on Kubernetes Services.

I can confirm this works with Google Container Engine. It should work with a Kubernetes cluster installed by hand on Google Cloud Platform.  I couldn’t ascertain if it works on other cloud providers.

Kubernetes Secrets Directly to Environment Variables

I’ve found myself wanting to use Kubernetes Secrets for a while, but every time I did, I ran into the fact that secrets had to be mounted as files in the container, and then you had to programmatically grab those secrets and turn them into environment variables. This works, and there are posts like this great one from my coworker, Aja Hammerly, that tell you how to do it.

It always seemed a little suboptimal to me, though. Mostly because you had to alter your Docker image in order to use secrets. Then you lose some of the flexibility to use a Dockerfile in both Docker and Kubernetes. It’s not the end of the world – you can write a conditional script – but I never liked doing this. It would be awesome if you could just write Secrets directly to ENV variables.

Well it turns out you can. Right there in the documentation there’s a whole section on Using Secrets as Environment Variables. It’s pretty straightforward:

Make a Secrets file, remembering to base64 encode your secrets.
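A minimal Secret looks something like this; the name, keys, and values are just examples (the values are “admin” and “password123” base64 encoded):

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  # "admin" and "password123", base64 encoded
  username: YWRtaW4=
  password: cGFzc3dvcmQxMjM=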

Then configure your pod definition to use the secrets.
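In the pod spec, each env entry pulls its value straight from the Secret with secretKeyRef; the pod, container, and image names below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: app
    # placeholder image
    image: gcr.io/my-project/my-app
    env:
    # each variable gets its value directly from the Secret
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: password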

That’s it. It’s a great addition to the secrets API.  I’m trying to track down when it was added. It looks like it came in 1.2.  The first reference I could find to it in the docs was in this commit  updating Kubernetes Documentation for 1.2.