Use an AWS Serverless lambda to trigger a Jenkins build by a Github PR comment – Part 2

13 Nov

This is part 2 of the ‘Serverless lambda to trigger a Jenkins job on a Github PR comment’ series. We have already set up everything we need for this in part 1. In this part, we will purely focus on everything around the actual lambda that will do the magic for us. Let’s get back to our handler lambda function from the very first serverless setup step. We will repurpose the same lambda function to handle the payload that Github will push using the webhook we configured for the repository.

The idea is simple.

  1. Our handler function will parse the webhook payload, and extract the comment out of it.
  2. It will then check if the comment contains the word ‘BUILD’. If yes, it will further make the API call to Jenkins for triggering the build.

First we need to change our httpApi method to ‘POST’ since Github will be ‘posting’ to our lambda.

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /
          method: post

As you can see, I have changed the method from ‘get’ to ‘post’ in the events section. Now let’s modify the handler. Open the handler.js file, and add the following code to it, or rather to the body of the ‘hello’ handler.

"use strict";
 
module.exports.hello = async (event) => {
  const buffer = Buffer.from(event.body, 'base64').toString();
  const data = JSON.parse(Object.fromEntries(new URLSearchParams(buffer)).payload)
  //const data = JSON.parse(Object.fromEntries(new URLSearchParams(event.body)).payload)
  if (data.comment.body.indexOf('BUILD') !== -1) {
    console.log('Johnny\'s my best friend!')
    console.log(data.comment.body)
  }
  else {
    console.log('Why, Lisa, why, WHY?!')
  }
 
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: "POST EVENT RECEIVED",
        input: event,
      },
      null,
      2
    ),
  };
};

We have modified our ‘hello’ handler to parse the request payload from event.body, which Github sends to us. When the payload reaches the actual AWS environment where our lambda is deployed, it arrives base64 encoded, so we need to decode it and then JSON-parse it to get the comment. On a local environment, however, the payload is a plain string, so you don’t need to base64-decode it; I have left the commented-out line in for reference. Remember that to test on your local environment, you need to expose port 3000 from localhost using ngrok, as Github won’t accept a localhost address as the payload URL for a webhook.
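If you want one handler that works both locally and when deployed, you can branch on the isBase64Encoded flag that API Gateway sets on the event. A minimal sketch, where `parsePayload` is an illustrative helper name and not part of the original code:

```javascript
// Sketch: decode the webhook body whether or not API Gateway base64-encoded it.
const parsePayload = (event) => {
  // API Gateway sets isBase64Encoded when it has base64-encoded the body
  const raw = event.isBase64Encoded
    ? Buffer.from(event.body, 'base64').toString()
    : event.body;
  // Github's form-encoded webhook wraps the JSON in a 'payload' field
  return JSON.parse(new URLSearchParams(raw).get('payload'));
};

// Example with a body shaped like Github's form-encoded delivery:
const body = 'payload=' + encodeURIComponent(JSON.stringify({ comment: { body: 'BUILD' } }));
console.log(parsePayload({ body, isBase64Encoded: false }).comment.body); // BUILD
```

This assumes the webhook’s content type is application/x-www-form-urlencoded, matching the parsing code above; if you configured the webhook to send application/json, event.body is the JSON itself and the URLSearchParams step is unnecessary.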

Right now we will simply print the comment body out, just to check that the communication between Github and our lambda works as expected. Let’s deploy this lambda and trigger the webhook from Github:

$ serverless deploy

Copy the endpoint URL displayed on the console, and paste it into the webhook’s ‘Payload URL’ field, the one we saw in the Github setup stage:

Triggering the webhook

We have all the things in place now. Let’s trigger the webhook. Go to the PR you created for your build branch, and post a comment on it. Now we need to check if the webhook was triggered and whether it was delivered, or if an error occurred. Go to the Settings > Webhooks section for your repository and select the webhook you had created earlier, then click on ‘Recent Deliveries’. This section should have one entry if everything went well.

If you see the delivery as successful, it means our Github to lambda communication works.

You can see the message ‘POST EVENT RECEIVED’ in the response, which was sent by our lambda. Ignore the ‘redelivery’ part, as I had to retrigger this webhook to fix an error in the code. To check the console log and comment body, visit the AWS console, go to the ‘Lambda’ service, select the function, and check its logs. This is also useful for debugging, since Github’s webhook UI will not show error details; it only prints the status code and an error message. The CloudWatch log shows everything in detail, including any runtime errors.

You can see the message we printed as well as the comment body. I had posted a comment ‘BUILD’ on my PR, hence the comment body says ‘BUILD’.

Next we bring in the Jenkins API for triggering a build. Now that our Github webhook is communicating successfully with the lambda, we can start the communication between the lambda and our Jenkins installation. Assuming you have set up Jenkins by following the first part of this series, modify the ‘hello’ handler code to look like the following:

"use strict";
 
// Jenkins URL and Username/Password
const URL = 'your Jenkins URL'
const AUTH  = {username: 'Jenkins username', password: 'Jenkins password'}
 
// Axios needs to be added as a layer
const axios = require("axios")
 
const build = async () => {
  const tokenURL = `${URL}/crumbIssuer/api/json`
  const defaultParams = {
    withCredentials: true,
    auth: AUTH
  }
 
  // GET A CRUMB FOR JENKINS
  await axios.get(tokenURL, {
      headers: {
        "Accept": "application/json",
        "Content-Type": "application/json"
      },
      ...defaultParams
  })
  .then(r => {
      // The POST call for build will be done here
      console.log('GOT TOKEN', r);
  })
  .catch(e => console.log(e))
  .finally(() => console.log('PROCESS DONE'))
}

// HANDLER
module.exports.hello = async (event) => {
  const buffer = Buffer.from(event.body, 'base64').toString();
  const data = JSON.parse(Object.fromEntries(new URLSearchParams(buffer)).payload)
  //const data = JSON.parse(Object.fromEntries(new URLSearchParams(event.body)).payload)
  if (data.comment.body.indexOf('BUILD') !== -1) {
      console.log('Johnny\'s my best friend!')

      // THIS WILL START THE PROCESS
      await build()
  } else {
      console.log('Why, Lisa, why, WHY?!')
  }
 
  return {
      statusCode: 200,
      body: JSON.stringify({
              message: "POST EVENT RECEIVED",
              input: event,
          },
          null,
          2
      ),
  };
};

Our handler now calls a ‘build’ function, which in turn makes a GET call to the Jenkins URL for a ‘crumb’. A crumb is Jenkins’ CSRF protection token, which must be sent along with any state-changing Jenkins request, which in our case is the build request. You will need to provide your Jenkins URL, username and password in the respective places. This step is very important: without the crumb, no Jenkins call is allowed, even with an auth token. Again, if you want to run Jenkins locally, you will have to expose it on the network using ngrok.

This code will not work yet, since we still need to add Axios as a layer.

Adding the Axios Layer

In an AWS lambda, any dependencies that the handler imports need to be added as a ‘layer’. A single lambda can use up to 5 layers, and the total unzipped size of the function and all its layers must stay within AWS’s deployment package limit (250 MB at the time of writing). Bundling the dependencies along with the handler is best avoided, since it adds to the upload size of your application. The lambda, when deployed in AWS, will look for dependency modules in its layers instead of any relative paths you use while developing. For node modules, it looks in the ‘/opt/nodejs/node_modules’ folder. This also applies to any other shared code you want to create. Meaning, if you want a separate file for something like ‘utilities’ or ‘constants’, that has to go into a layer and cannot be shipped along with your lambda. Painful, but that is how it is!

(If you want to run the code locally with Axios for development, you will have to install the dependencies along with the handler as node modules, in case of a node project. It could be a different stack/runtime for you.)

To create an Axios layer, we need to create a new folder, preferably alongside our main code folder. Its location doesn’t matter, as we will not be using it while running our program locally. I am calling this folder ‘layers’. Inside it, we need to create another folder called ‘nodejs’. This name matters, so do not change it. We will now initialize a node application inside the inner nodejs folder:

$ npm init

Follow through the prompts for npm.

Now we can install any dependencies we want here, which in our example is the Axios library.

$ npm i axios --save

This is how the folder should look:

As you can see, I have a nodejs folder inside the layers folder, and it will hold the dependencies for my lambda. The next step is to zip this nodejs folder (use zip only, not rar or 7z). This zip is our layer, and it needs to be uploaded to your AWS console. Open the console, go to ‘Lambda’ if you’re not already there, then go to ‘Layers’ and click on ‘Create Layer’ in the top right corner:
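End to end, the layer preparation looks roughly like this, assuming the folder names described above and starting from your project root:

```shell
# Create the layer folder structure; the inner folder MUST be named 'nodejs'
mkdir -p layers/nodejs
cd layers/nodejs
npm init -y        # accept the defaults
npm i axios        # installs into layers/nodejs/node_modules
cd ..
# Zip the nodejs folder itself (not just its contents), with plain zip
zip -r nodejs.zip nodejs
```

The resulting layers/nodejs.zip is what gets uploaded in the next step.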

Fill in the form, and upload the zip we created in the previous step. Select the runtimes applicable to your application; since this is a Node.js application, I will select Node 12 and 14. Save the layer. I have named my layer ‘axios’ since that is what I am using it for. You can create layers of reusable modules and share them across many lambdas, meaning commonly used libraries across all your projects can live in shared layers.

The layer is now created and can be used inside our lambda, but we need to attach it to the lambda first. Let’s do that.

Open our lambda in the AWS console, go to the code editor that comes along with it. Scroll to the very bottom and you will see the ‘Layers’ section. Click on ‘Add a layer’.

Select ‘Custom Layers’, and from the dropdown at the bottom, choose the one we created, which is ‘axios’ in my case. Select the latest version from the next dropdown, and click on ‘Add’. You will be taken back to the lambda page on successful addition of the layer, and at the top, you can see the info as Layers (1).
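If you prefer keeping everything in code, the Serverless Framework can also attach layers directly from serverless.yml instead of the console. A sketch, with a placeholder ARN that you would replace with your layer’s actual ARN (shown on the layer’s page in the console):

```yaml
functions:
  hello:
    handler: handler.hello
    layers:
      - arn:aws:lambda:<region>:<account-id>:layer:axios:1
```

Either approach works; the console route in this post is just quicker to demonstrate.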

Now the code is ready to be executed. Let’s head back to the Github PR, and post another comment with the text ‘BUILD’ in it. After that, check the webhook delivery to see if everything worked. Then check the log for this function to see if our lambda printed ‘GOT TOKEN’ with the actual token response. If yes, then we are almost there!

Final step!

The final step remains: triggering the actual build on our Jenkins installation. For that, add the following code to the ‘then’ of our GET token axios call, where we currently have a console.log:

const build = async () => {
  const JOB_NAME = '<Jenkins job name>'
  const tokenURL = `${URL}/crumbIssuer/api/json`
  const buildURL = `${URL}/job/${JOB_NAME}/build`
  const defaultParams = {
    withCredentials: true,
    auth: AUTH
  }
 
  await axios.get(tokenURL, {
      withCredentials: true,
      headers: {
        "Accept": "application/json",
        "Content-Type": "application/json"
      },
      ...defaultParams
  })
  .then(r => {
    // POST call to trigger the job goes here
    return axios.post(
        buildURL,
        {},
        {
            headers: {
                "Content-Type": "application/json",
                [r.data.crumbRequestField]: r.data.crumb,
                Cookie: r.headers["set-cookie"][0]
            },
            ...defaultParams
        }
    )
  })
  .catch(e => console.log(e))
  .finally(() => console.log('PROCESS DONE'))
}

I have only pasted the modified ‘build’ function, since that is all we have changed. Notice the two new variables at the top: ‘JOB_NAME’, the Jenkins job we configured in the previous part of this series, and ‘buildURL’, the URL we need to POST to in order to trigger the build. Our lambda sends this POST request to our Jenkins server, which triggers the job.

The rest is straightforward. Since we don’t need anything from the POST call’s response, we can simply rely on the ‘catch’ block to catch any errors. Additionally, we can debug from our CloudWatch logs if something goes wrong.
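One easy improvement, sketched below under the assumption that the build step is changed to rethrow instead of swallowing errors in its catch: return a non-200 status so failures show up directly in Github’s webhook deliveries. The `respond` and `handle` names are illustrative, not part of the original code.

```javascript
// Sketch: surface Jenkins failures to the webhook caller instead of always
// returning 200. Assumes the build step rejects (rethrows) on error.
const respond = (statusCode, message) => ({
  statusCode,
  body: JSON.stringify({ message }),
});

const handle = async (triggerBuild) => {
  try {
    await triggerBuild();
    return respond(200, 'BUILD TRIGGERED');
  } catch (e) {
    console.log(e);
    return respond(502, 'JENKINS CALL FAILED');
  }
};

// Usage sketch: pass the real build function in from the handler
handle(async () => { /* axios calls here */ }).then(r => console.log(r.statusCode)); // 200
```

With this shape, a failed crumb or build call produces a 502 in the ‘Recent Deliveries’ view instead of a misleading success.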

All we need to do now is test. Let’s go to the PR we created earlier, and post a comment, ‘BUILD THIS’. Then check the webhooks section for the delivery and its status. If the status is 200, we can finally head to Jenkins for the moment of truth: whether the build was triggered.

If you were able to follow all the steps correctly and everything is configured right, the job is triggered in Jenkins. Congratulations!

This method is not perfect; there is a lot of room for improvement here. However, it is a starting point, and you can implement better workflows using this as a starter project. I wanted to achieve a particular workflow, which led me to exploring these options. If you do not want to use AWS, the same code can be adapted to your own choice of server, since the Git and Jenkins parts will remain the same across all stacks. Endless possibilities, and the right set of tools at your disposal.

Feel free to contact me if something fails or if there is an improvement or correction you would like to suggest for the application as well as the blog. I am a noob at this, and any inputs are helpful for me.

Thanks!
