
· 5 min read
Jeffrey Aven

I have used the instructions here to configure Snowpipe for several projects.

Although it is accurate, it is entirely click-ops oriented. I like to automate (and script) everything, so I have created a fully automated implementation using PowerShell and the aws and snowsql CLIs.

The challenge is that you need to go back and forth between AWS and Snowflake, exchanging information from each platform with the other.

Overview

A Role Based Storage Integration in Snowflake allows a user (an AWS user arn) in your Snowflake account to use a role in your AWS account, which in turn enables access to S3 and KMS resources used by Snowflake for an external stage.

The following diagram explains this (along with the PlantUML code used to create the diagram):

Snowflake S3 Storage Integration

Setup

Some prerequisites (removed for brevity):

  1. Set the following variables in your script:
  • $accountid – your AWS account ID
  • $bucketname – the bucket you are letting Snowflake use as an External Stage
  • $bucketarn – used in policy statements (you could easily derive this from the bucket name)
  • $kmskeyarn – assuming you are using customer managed encryption keys, your Snowflake storage integration will need these to decrypt data in the stage
  • $prefix – if you want to set up granular access (on a key/path basis)
  2. Configure Snowflake access credentials using environment variables or the ~/.snowsql/config file (you should definitely use the SNOWSQL_PWD env var for your password, however)
  3. Configure access to AWS using aws configure
note

The actions performed in both AWS and Snowflake require privileged access on both platforms.

The Code

I have broken this into steps; the complete code is included at the end of the article.

Create Policy Documents

You will need to create the policy documents to allow the role you will create to access objects in the target S3 bucket. You will also need an initial "Assume Role" policy document, which will be used to create the role and then updated with information you will get from Snowflake later.
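
A minimal sketch of both documents as PowerShell here-strings written to local files (the file names, the read-only S3 action set, and the initial account-root trust principal are assumptions based on the standard Snowflake storage integration setup):

# S3 and KMS access policy for the external stage (interpolates the variables set in the prerequisites)
$snowflakePolicyDoc = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "${bucketarn}/${prefix}/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "${bucketarn}"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "${kmskeyarn}"
    }
  ]
}
"@
$snowflakePolicyDoc | Set-Content snowflake_policy_doc.json

# initial trust (assume role) policy; the principal is a placeholder for your own account
# and is replaced later with the Snowflake IAM user ARN and external ID
$assumeRolePolicyDoc = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${accountid}:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
"@
$assumeRolePolicyDoc | Set-Content assume_role_policy_doc.json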

Create Snowflake Access Policy

Use the snowflake_policy_doc.json policy document created in the previous step to create a managed policy; you will need the ARN returned in a subsequent statement.
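
A sketch of this step, capturing the returned ARN into $policyarn (the policy name snowflake-access-policy is an assumption, matching the attach step below):

$policyarn = aws iam create-policy `
  --policy-name snowflake-access-policy `
  --policy-document file://snowflake_policy_doc.json `
  --query 'Policy.Arn' --output text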

Create Snowflake IAM Role

Use the initial assume_role_policy_doc.json created earlier to create a new Snowflake access role; you will need the ARN for this resource when you configure the Storage Integration in Snowflake.
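
A sketch of the role creation, capturing the ARN into $rolearn (the role name snowflake-access-role is an assumption):

$rolearn = aws iam create-role `
  --role-name snowflake-access-role `
  --assume-role-policy-document file://assume_role_policy_doc.json `
  --query 'Role.Arn' --output text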

Attach S3 Access Policy to the Role

Now you will attach the snowflake-access-policy to the snowflake-access-role using the $policyarn captured from the policy creation statement.
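
A sketch of the attachment, assuming the names used in the previous steps:

aws iam attach-role-policy `
  --role-name snowflake-access-role `
  --policy-arn $policyarn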

Create Storage Integration in Snowflake

Use the snowsql CLI to create a Storage Integration in Snowflake supplying the $rolearn captured from the role creation statement.
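
A sketch of the integration DDL issued through snowsql (the integration name s3_int and the allowed location are assumptions; this statement typically requires a highly privileged role such as ACCOUNTADMIN):

$createIntegrationSql = @"
CREATE OR REPLACE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = '$rolearn'
  STORAGE_ALLOWED_LOCATIONS = ('s3://${bucketname}/${prefix}/');
"@
snowsql -q $createIntegrationSql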

Get STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID

You will need the STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID values for the storage integration you created in the previous statement; these will be used to update the assume role policy in your snowflake-access-role.
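
One way to retrieve and parse these values (a sketch; the snowsql output options and the column layout of DESC INTEGRATION are assumptions worth verifying against your snowsql version):

$props = snowsql -o output_format=csv -o header=false -o timing=false -o friendly=false `
  -q "DESC INTEGRATION s3_int;" |
  ConvertFrom-Csv -Header property, property_type, property_value, property_default

$storageawsiamuserarn = ($props | Where-Object property -eq 'STORAGE_AWS_IAM_USER_ARN').property_value
$storageawsexternalid = ($props | Where-Object property -eq 'STORAGE_AWS_EXTERNAL_ID').property_value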

Update Snowflake Access Policy

Using the STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID values retrieved in the previous statements, you will update the assume-role-policy for the snowflake-access-role.
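
A sketch of the updated trust policy and the update call (again assuming the role and file names used earlier):

$updatedAssumeRolePolicyDoc = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "$storageawsiamuserarn" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "$storageawsexternalid" } }
    }
  ]
}
"@
$updatedAssumeRolePolicyDoc | Set-Content assume_role_policy_doc.json

aws iam update-assume-role-policy `
  --role-name snowflake-access-role `
  --policy-document file://assume_role_policy_doc.json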

Test the Storage Integration

To test the connectivity between your Snowflake account and your AWS external stage using the Storage Integration you just created, create a stage as shown here:
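
A sketch of the stage DDL (the database, schema, and stage names are assumptions; adjust to the context where you want the stage created):

$createStageSql = @"
CREATE OR REPLACE STAGE my_db.public.my_stage
  URL = 's3://${bucketname}/${prefix}/'
  STORAGE_INTEGRATION = s3_int;
"@
snowsql -q $createStageSql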

Now list objects in the stage (assuming there are any).

list @my_stage;

This should just work! You can use your storage integration to create different stages for different paths in your External Stage bucket and use both of these objects to create Snowpipes for automated ingestion. Enjoy!

Complete Code

The complete code for this example is shown here:

· 3 min read
Jeffrey Aven

CloudFormation templates in large environments can grow beyond a manageable point. This article provides one approach to breaking up CloudFormation templates into modules which can be imported and used to create a larger template to deploy a complex AWS stack – using Jsonnet.

Jsonnet is a JSON pre-processing and templating library which includes features such as user-defined and built-in functions, objects, and inheritance, amongst others. If you are not familiar with Jsonnet, here are some good resources to start with:

Advantages

Using Jsonnet you can use imports to break up large stacks into smaller files scoped for each resource. This approach makes CloudFormation templates easier to read and write and allows you to apply the DRY (Don't Repeat Yourself) coding principle (not possible with native CloudFormation templates).

Additionally, as the template fragments are in Jsonnet format, you can add annotations or comments to your code similar to YAML (not possible with a JSON template alone), while the rendered template is still legal CloudFormation JSON.

Process Overview

The process is summarised here:

CloudFormation and Jsonnet

Code

This example will deploy a stack with a VPC and an S3 bucket with logging. The project directory structure would look like this:

templates/
├─ includes/
│ ├─ vpc.libsonnet
│ ├─ s3landingbucket.libsonnet
│ ├─ s3loggingbucket.libsonnet
│ ├─ tags.libsonnet
├─ template.jsonnet

Let's look at all of the constituent files:

template.jsonnet

This is the root document which will be processed by Jsonnet to render a legal CloudFormation JSON template. It will import the other files in the includes directory.
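
Since each include returns an object of CloudFormation resources, the root template can simply import and merge them. A minimal sketch (the Description and the exact resource composition are assumptions):

// template.jsonnet
local vpc = import 'includes/vpc.libsonnet';
local s3loggingbucket = import 'includes/s3loggingbucket.libsonnet';
local s3landingbucket = import 'includes/s3landingbucket.libsonnet';

{
  AWSTemplateFormatVersion: '2010-09-09',
  Description: 'Stack rendered from Jsonnet modules',
  Resources: vpc + s3loggingbucket + s3landingbucket,
}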

includes/tags.libsonnet

This code module is used to generate re-usable tags for other resources (DRY).
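
A minimal sketch of such a helper (the tag keys and values are illustrative assumptions); the other modules import this file and call resourceTags() so the tags are defined once:

// includes/tags.libsonnet
{
  // returns a standard tag list so every resource is tagged consistently
  resourceTags(name):: [
    { Key: 'Name', Value: name },
    { Key: 'Project', Value: 'jsonnet-cfn-demo' },
    { Key: 'Environment', Value: 'dev' },
  ],
}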

includes/vpc.libsonnet

This code module defines a VPC resource to be created with CloudFormation.
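
A minimal sketch of what this module might contain (the logical ID, CIDR block, and tag values are illustrative assumptions):

// includes/vpc.libsonnet
local tags = import 'tags.libsonnet';

{
  VPC: {
    Type: 'AWS::EC2::VPC',
    Properties: {
      CidrBlock: '10.0.0.0/16',
      EnableDnsSupport: true,
      EnableDnsHostnames: true,
      Tags: tags.resourceTags('demo-vpc'),
    },
  },
}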

includes/s3loggingbucket.libsonnet

This code module defines an S3 bucket resource to be created in the stack which will be used for logging for other buckets.
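
A minimal sketch (the logical ID and properties are assumptions):

// includes/s3loggingbucket.libsonnet
local tags = import 'tags.libsonnet';

{
  S3LoggingBucket: {
    Type: 'AWS::S3::Bucket',
    Properties: {
      AccessControl: 'LogDeliveryWrite',
      Tags: tags.resourceTags('logging-bucket'),
    },
  },
}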

includes/s3landingbucket.libsonnet

This code module defines an S3 landing bucket resource to be created in the stack.
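
A minimal sketch, with logging directed at the logging bucket defined above (the logical IDs and the log prefix are assumptions):

// includes/s3landingbucket.libsonnet
local tags = import 'tags.libsonnet';

{
  S3LandingBucket: {
    Type: 'AWS::S3::Bucket',
    Properties: {
      LoggingConfiguration: {
        DestinationBucketName: { Ref: 'S3LoggingBucket' },
        LogFilePrefix: 'landing/',
      },
      Tags: tags.resourceTags('landing-bucket'),
    },
  },
}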

Testing

To test the pre-processing, you will need a Jsonnet binary/executable for your environment. You can find Docker images which include this for you, or you could build it yourself.

Once you have a compiled binary, you can run the following to generate a rendered CloudFormation template.

jsonnet template.jsonnet -o template.json

You can validate this template using the AWS CLI as shown here:

aws cloudformation validate-template --template-body file://template.json

Deployment

In a previous article, Simplified AWS Deployments with CloudFormation and GitLab CI, I demonstrated an end-to-end deployment pipeline using GitLab CI. Jsonnet pre-processing can be added to this pipeline as an initial 'preprocess' stage and job. A snippet from the .gitlab-ci.yml file is included here:
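
A hypothetical sketch of such a job (the job name, image, and artifact handling are assumptions; any image that provides the jsonnet binary will do):

# .gitlab-ci.yml (snippet)
stages:
  - preprocess
  # ...existing stages (validate, deploy, etc.)...

render_template:
  stage: preprocess
  image: my-registry/jsonnet:latest   # assumption: an image containing the jsonnet binary
  script:
    - jsonnet template.jsonnet -o template.json
  artifacts:
    paths:
      - template.json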

Enjoy!

· 3 min read
Jeffrey Aven

Managing cloud deployments and IaC pipelines can be challenging. I've put together a simple pattern for deploying stacks in AWS using CloudFormation templates using GitLab CI.

This deployment framework enables you to target different environments based upon refs (branches or tags): for instance, deploy to a dev environment on a push or merge into develop, deploy to prod on a push or merge into main, and otherwise just lint/validate (e.g., for a push to a non-protected feature branch). Templates are uploaded to a designated S3 bucket and staged for use in the pipeline, and can be retained as an additional audit trail (in addition to the GitLab project history).

Furthermore, you can review changes (by inspecting change set contents) before deploying, saving you from fat finger deployments 😊.

How it works

The logic is described here:

GitLab CI

The pipeline looks like this in GitLab:

GitLab CI

Prerequisites

You will need to set up GitLab CI variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and optionally AWS_DEFAULT_REGION. You can do this via Settings -> CI/CD -> Variables in your GitLab project. As AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are secrets, they should be configured as protected (as they are only required for protected branches) and masked so they are not printed in job logs.

.gitlab-ci.yml code

The GitLab CI code is shown here:
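
A simplified, hypothetical sketch of the shape of the pipeline (job names, the STACK_NAME and TEMPLATE_BUCKET variables, and the branch rules are assumptions; change set review relies on creating but not executing a change set):

# .gitlab-ci.yml (sketch)
stages:
  - validate
  - plan
  - deploy

image:
  name: amazon/aws-cli:latest
  entrypoint: [""]

variables:
  STACK_NAME: my-stack
  TEMPLATE_BUCKET: my-cfn-template-bucket

validate:
  stage: validate
  script:
    - aws cloudformation validate-template --template-body file://template.json

plan:
  stage: plan
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop" || $CI_COMMIT_BRANCH == "main"'
  script:
    # stage the template in S3 as an additional audit trail
    - aws s3 cp template.json s3://$TEMPLATE_BUCKET/$CI_COMMIT_SHORT_SHA/template.json
    # create (but do not execute) a change set so it can be reviewed
    - aws cloudformation deploy --stack-name $STACK_NAME --template-file template.json --no-execute-changeset --no-fail-on-empty-changeset

deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop" || $CI_COMMIT_BRANCH == "main"'
      when: manual
  script:
    # create and execute a change set to apply the reviewed changes
    - aws cloudformation deploy --stack-name $STACK_NAME --template-file template.json --no-fail-on-empty-changeset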

Reviewing change sets (plans) and applying

Once a pipeline is triggered for an existing stack, it will run hands-off until a change set (plan) is created. You can inspect the plan by clicking on the Plan GitLab CI job, where you would see output like this:

Change Set

If you are OK with the changes proposed, you can simply hit the play button on the last stage of the pipeline (Deploy). Voilà, stack deployed, enjoy!

· 2 min read
Jeffrey Aven

I started this blog a few years back to chronicle my journeys through building cloud data platforms, and along the way I gathered some friends to share their experiences as well. The easiest platform to start this blog on was Wordpress. This worked, but it wasn't really aligned with the way my colleagues and I worked, and it didn't really align with the types of things we were writing about in blog articles or embracing as general principles... e.g. 'everything-as-code', 'gitops', etc.

Enter Static Site Generators and Jamstack architecture. Not only does a Jamstack/SSG architecture for a blog site (or docs site or any other site) allow you to manage every aspect of your web property as code, but as a static site it has several other benefits, including increased performance, easier distribution (using CDNs), and better security (no origin server required), as well as being SEO friendly (and optimised in many cases).

But moving the site from Wordpress to an SSG must be an onerous task... wrong.

I moved this blog over a weekend, and it was quite simple in the end; here are the steps:

  1. Export your Wordpress site (Tools->Export), making sure to select All Content.

  2. Use wordpress-export-to-markdown to convert your posts to a basic Markdown format with frontmatter; it does a pretty good job (see the example after this list)

  3. Choose and deploy a Static Site Generator (I chose Docusaurus, but there are several other alternatives available such as VuePress, Jekyll, etc)

  4. Drop your Markdown docs into your SSG content (blogs) directory (converted in step 2)

  5. You will probably need to do some fine tuning as some things may not export cleanly, but 99% of the content will be fine

  6. Deploy your new blog site, I am using GitHub Pages, but you could use anything similar - Netlify, Vercel, Digital Ocean, Azure Static Web Apps, etc or implement your own custom CI routine to build your project and push it to an object storage bucket configured to serve a static web site (such as Google Cloud Storage and AWS S3)
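
For step 2, a minimal sketch of running the converter (it is an npm package run via npx and will prompt for the export file and output options; the exact prompts and flags may vary by version, and Node.js is assumed to be installed):

npx wordpress-export-to-markdown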

That's it!

· 3 min read
Mark Stella

Every time I start a new project I try to optimise how the application can work across multiple environments. For those who don't have the luxury of developing everything in Docker containers or isolated spaces, you will know my pain: how do I write code that can run on my local dev environment, migrate to the shared test and CI environment, and ultimately still work in production?

In the past I tried exotic options like dynamically generating YAML or JSON using Jinja. I then graduated to HOCON, which made my life so much easier. This was until I stumbled across Jsonnet. For those who have not seen this in action, think JSON meets Jinja meets HOCON (a Frankenstein creation that I have actually built in the past).

To get a feel for how it looks, below is a contrived example where I require 3 environments (dev, test and production) that have different paths, databases and vault configuration.

Essentially, when this config is run through the Jsonnet templating engine, it expects a variable 'ENV' which ultimately resolves the environment entry to the one we specifically want to use.

A helpful thing I like to do with my programs is give users a bit of information as to what environments can be used. For me, running a cli that requires args should be as informative as possible - so listing out all the environments is mandatory. I achieve this with a little trickery and a lot of help from the click package!

local exe = "application.exe";

local Environment(prefix) = {
  root: "/usr/" + prefix + "/app",
  path: self.root + "/bin/" + exe,
  database: std.asciiUpper(prefix) + "_DB",
  tmp_dir: "/tmp/" + prefix
};

local Vault = {
  local uri = "http://127.0.0.1:8200/v1/secret/app",
  _: {},
  dev: {
    secrets_uri: uri,
    approle: "local"
  },
  tst: {
    secrets_uri: uri,
    approle: "local"
  },
  prd: {
    secrets_uri: "https://vsrvr:8200/v1/secret/app",
    approle: "sa_user"
  }
};

{

  environments: {
    _: {},
    dev: Environment("dev") + Vault[std.extVar("ENV")],
    tst: Environment("tst") + Vault[std.extVar("ENV")],
    prd: Environment("prd") + Vault[std.extVar("ENV")]
  },

  environment: $["environments"][std.extVar("ENV")],
}

The trick I perform is to have a placeholder entry '_' that I use to initially render the template. I then use the generated JSON file and get all the environment keys so I can feed that directly into click.

from typing import Any, Dict
import click
import json
import _jsonnet
from pprint import pprint

ENV_JSONNET = 'environment.jsonnet'
ENV_PFX_PLACEHOLDER = '_'

def parse_environment(prefix: str) -> Dict[str, Any]:
    _json_str = _jsonnet.evaluate_file(ENV_JSONNET, ext_vars={'ENV': prefix})
    return json.loads(_json_str)

_config = parse_environment(prefix=ENV_PFX_PLACEHOLDER)

_env_prefixes = [k for k in _config['environments'].keys() if k != ENV_PFX_PLACEHOLDER]


@click.command(name="EnvMgr")
@click.option(
    "-e",
    "--environment",
    required=True,
    type=click.Choice(_env_prefixes, case_sensitive=False),
    help="Which environment this is executing on",
)
def cli(environment: str) -> None:
    config = parse_environment(environment)
    pprint(config['environment'])


if __name__ == "__main__":
    cli()

This now allows me to execute the application with both list checking (has the user selected an allowed environment?) and the autogenerated help that click provides.

Below shows running the cli with no arguments:

$> python cli.py

Usage: cli.py [OPTIONS]
Try 'cli.py --help' for help.

Error: Missing option '-e' / '--environment'. Choose from:
dev,
prd,
tst

Executing the application with a valid environment:

$> python cli.py -e dev

{'approle': 'local',
 'database': 'DEV_DB',
 'path': '/usr/dev/app/bin/application.exe',
 'root': '/usr/dev/app',
 'secrets_uri': 'http://127.0.0.1:8200/v1/secret/app',
 'tmp_dir': '/tmp/dev'}

Executing the application with an invalid environment:

$> python cli.py -e prd3

Usage: cli.py [OPTIONS]
Try 'cli.py --help' for help.

Error: Invalid value for '-e' / '--environment': 'prd3' is not one of 'dev', 'prd', 'tst'.

This is only the tip of what Jsonnet can provide; I am continually learning more about the templating engine and the tool.