Owen Morris

Building, developing... and growing.

By day I'm the CTO for Doherty Associates, but I love diving into the tech and can't stop programming.

This is my outlet covering the things that interest me technically in Cloud, AI, Data and Dev but are far too in-depth for corporate use!

Occasional references to being husband and Dad, very rarely runner.

Diving (back) into Python - Django

04/09/2024

For a couple of personal projects I've needed to write a web application outside of the work context, which has let me make some different choices than I'd normally make, while working under some different constraints:

  1. Speed of delivery is important, so the stack needs to support quick iteration
  2. I need to be able to drop and resume development as quickly as possible, as I've only been able to work on this project in short bursts - even more so at the moment, as life has been very busy recently and there's been less time than usual to work on it. A strong ecosystem has therefore been quite important
  3. It needs to be something I'm at least familiar with, as I wanted to concentrate on building the thing rather than learning on the job

This forced me a little bit outside my comfort zone when choosing which stack to use.

Technology Choices

The choices were really .NET (probably F#), a full-stack JS framework, or something in Python - the languages I'm most familiar with. I also wanted to stay outside the Azure space to avoid any overlap with my day job.

I narrowed down the options to:

  1. F# with Giraffe and Dapper, plus Fable on the frontend - but I discounted this eventually as I hadn't done much with Giraffe or Dapper at all, so it felt like too much of a learning curve. The ecosystem of F#-only packages is small, but you also have the wider .NET libraries to use. I'd like to spend a bit more time on this stack in the future, but I wanted to spend as little time as possible getting something running end-to-end with a frontend and backend working together.
  2. Next.js - as it's React based there's quite a bit of overlap with the day job, but I needed things for the backend too. I tried a couple of different ORMs and Drizzle was the one I liked most. Next does look nice and handles things like auth with its own integrations, and I was already using it as a static site generator for this blog. As with most things JS, though, I felt like I had to assemble a lot of the stack by hand.
  3. C# and ASP.NET/Blazor - I've done a couple of experiments with this and have since been involved with a couple of things at work that have used this stack. I discounted it at the time, but again, I'd actually like to spend more time with it myself.
  4. Django - I'd used this briefly when it came out, and looking at the docs it felt very familiar still. I did a couple of spikes on it and the ORM was easy to use. The ecosystem of libraries around it is pretty comprehensive, so I settled on this as the choice.

First thoughts

Python was my main language from the end of university, but I hadn't programmed in it to any great extent since around 2016, so I needed to revisit the current state of the art. The great thing about Python is that it hasn't changed that much; even though I felt like a bit of a caveman, it still felt familiar. There are a few things I'm still coming up to speed with, like tooling, but I was able to make fairly fast progress.

Django has been quite similar - they got so much right at the start that the fundamentals haven't changed much since the beginning, and newer features have mostly just improved on them.

The admin system is the main thing that's saved me time when building - being able to scaffold admin screens for models gives you CRUD operations over your data while the real frontend is still taking shape. It saves so much time that I'm amazed other frameworks haven't followed suit.
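
The wiring for this is tiny. As a minimal sketch - assuming a hypothetical Project model, since the post doesn't name its models - registering it in admin.py is all it takes to get full CRUD screens:

# admin.py - Project is a hypothetical model; registering it
# gives list/create/edit/delete screens in the Django admin.
from django.contrib import admin

from .models import Project


@admin.register(Project)
class ProjectAdmin(admin.ModelAdmin):
    list_display = ("name", "created_at")  # columns on the change list
    search_fields = ("name",)              # adds a search box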

The ecosystem of packages is wide, but there's generally one main third-party package for a given function rather than several overlapping ones (unlike the JavaScript ecosystem).

I ended up adding JS in the form of React on the frontend as well, in the interests of build speed, although getting it wired up correctly was frustratingly slow! I used normal Django templates for many things, and then used React for interactive pages in an 'MPA' style. I'm pretty happy with the results, but am looking at things like HTMX with a curious eye...
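
As a rough sketch of that wiring (this assumes django-webpack-loader from the list below; the template, block, and bundle names are hypothetical): an interactive page is still a normal Django template, with React mounting onto a placeholder element from a compiled bundle:

{# dashboard.html - 'dashboard' bundle name is hypothetical #}
{% load render_bundle from webpack_loader %}
{% block content %}
  <div id="react-root"></div>      {# the React page mounts here #}
  {% render_bundle 'dashboard' %}  {# emits script tag(s) for the bundle #}
{% endblock %}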

Libraries

In addition to the out of the box functionality, I've pulled in the following packages, which I've been generally pleased with:

  • django-tailwind (for backend CSS)
  • django-ninja (for API creation). I really love how easy this library has been to use and how little boilerplate it needs - there's a small sketch after this list.
  • allauth - for social logins. I tried a couple of other libraries for auth with varying levels of success.
  • django-webpack-loader (for loading multiple React bundles - I tried and failed to get django-vite working for this, something to retry at some point)
  • Flowbite and its React bindings, to get the same UI look and feel across the React pages and the Django templates
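
As an illustration of how little boilerplate django-ninja needs (the sketch mentioned above - the schema, route, and field names are all hypothetical):

# api.py - a minimal django-ninja sketch; ItemOut and /items are hypothetical
from ninja import NinjaAPI, Schema

api = NinjaAPI()


class ItemOut(Schema):
    id: int
    name: str


@api.get("/items", response=list[ItemOut])
def list_items(request):
    # a real app would query a Django model here
    return [{"id": 1, "name": "example"}]

The api object is then mounted once in urls.py with path("api/", api.urls), and you get typed request/response handling and OpenAPI docs for free.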

Some backend tasks were written as Django management commands (a minimal sketch is below); I haven't found a good scheduling library yet.
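
A management command is just a class in a conventional location. A minimal sketch (the sync_data name is hypothetical):

# myapp/management/commands/sync_data.py - hypothetical name;
# invoked with: python manage.py sync_data (e.g. from cron for now)
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Example backend task run outside the request cycle"

    def handle(self, *args, **options):
        # the real task logic goes here
        self.stdout.write(self.style.SUCCESS("Task complete"))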

Things that I need to spend more time on

All in all, I got things up and running to a certain point, but there's work to do before finishing. I want to understand allauth better, as I've only got the basics working and authentication is always a big requirement for modern apps. I also spent far too much time getting the django-webpack-loader integration running alongside Tailwind, and I really want to see if I can transition to django-vite in the future.

Resources I used

Aside from Copilot & ChatGPT, plus the READMEs for the libraries above, I found the following articles useful:

Current thoughts

All in all, I'm relatively happy with the choice of stack, and it's got me up and running quickly. I'm sure I'll add libraries as I go, but my experience thus far has been quite good.

Giving Pulumi a Spin

26/02/2023

I had a need to spin up some multi-tenant Azure infrastructure recently for a proof of concept. This required similar but differing deployments, with frequently changing infrastructure components, based on a self-service model. A goal was to have a central solution, deploying to multiple tenants. This was an interesting design challenge!

My requirements were:

  • Create a standard set of infrastructure that didn't vary between deployments
  • Add multiple specialised resources that can vary between deployments with differing configurations
  • The deployment process should handle adding or removing the variable resources when the desired set differs from what a particular deployment already has.

When using infrastructure-as-code (IaC) techniques to build infrastructure, the deployment artefacts are usually kept in source control, and deployment can use continuous delivery (CD) techniques. In these scenarios the infrastructure is relatively static and not deployed that frequently. In my scenario, the deployment could happen many times an hour during testing. In addition, the multi-tenant nature made the deployment hard to automate, as each deployment needed a different tenant ID. I needed a data-driven approach to generating the deployment artefacts.

flowchart LR
    A[Data Source]
    B[Generic Deployment Artefact]
    C[Per-Tenant Artefact]
    A-->C
    B-->C

I was struggling to think of a good way to do this using Azure DevOps and Bicep or ARM templates. Text templating (e.g. Liquid templates) seemed like a potential option, but felt quite brittle. The flow to the backend would be feasible as part of a deployment pipeline, but updating the data source would likely be fairly manual.

flowchart LR
    A[tenantdata.csv]
    B[template.liquid]
    C[tenant1.json]
    D[tenant2.json]
    E[tenantn.json]
    A-->C
    A-->D
    A-->E
    B-->C
    B-->D
    B-->E

I wanted a simpler, more automated process. I'd had good success doing similar work previously using Farmer (an F# system that builds out ARM templates using F# computation expressions), but it does require teaching people F#.

I remembered a couple of articles I'd read recently about Pulumi and thought it might be a good fit due to its use of code to define resources; this would let me vary the deployment based on incoming parameters.

Getting started with Pulumi and the build

I started by installing the CLI using the instructions, then set about building out my infrastructure as a class in C#, following the tutorial. To build a Pulumi deployment you create a C# class that inherits from the Stack class and define the deployment in its constructor.

One of the great things about a programmatic deployment model is that you can produce a different deployment from external inputs. I used this to build a stack containing a consistently named set of base resources, plus a set of resources created from input data held in a separate data store (sketched below). After building this out I had my target deployment: I could put the details of the variable resources into the data store and then run the CLI to create those resources on demand.
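
The build itself was in C#, but the shape of the idea is easy to sketch - here in Python (Pulumi's SDKs are equivalent across languages), with hypothetical resource names and a hard-coded stand-in for the data store:

# A data-driven Pulumi program (sketch - the real build was C#).
from pulumi_azure_native import resources, storage

# consistently named base resources
rg = resources.ResourceGroup("base-rg")  # hypothetical name

# variable resources driven by external input data
tenants = ["tenant1", "tenant2"]  # would really be read from the data store
for name in tenants:
    storage.StorageAccount(
        f"st{name}",
        resource_group_name=rg.name,
        sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
        kind=storage.Kind.STORAGE_V2,
    )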

Deploy on demand

The next part of the build was running the deployment on demand. My normal preference would be to run as much as possible from CI/CD, so I investigated using Azure DevOps to perform the deployment (perhaps initiated from a webhook), but because I wanted the deploy to be self-service, I decided against this approach: a CI/CD-initiated build can be slow to start due to the need to acquire a worker and deploy a container. It could be done with a self-hosted runner, but that would be quite expensive. I also ruled out Azure Functions, as the deploy might not finish within the maximum function duration (10 minutes).

One thing I've done successfully in the past is to deploy a .NET worker service and decouple it from an API written in Azure Functions, communicating via a queue. This seemed like a promising approach, and it has become much easier since .NET Core 3.1 shipped a worker service template for what previously needed a bit of self-assembly; I'd used this for other solutions before. Microsoft also recently released the Azure Container Apps service - deploying the worker there allows the code to run serverlessly, spinning up when a message lands on the queue. This functionality is enabled by the KEDA scaling provided by Container Apps.
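
The worker itself was a .NET worker service; as a language-neutral illustration of the pattern (sketched in Python, with a hypothetical queue name and handler), it just blocks on the queue and kicks off a deployment per message:

# Sketch of the queue-driven worker loop (the real one was .NET).
import os
import time

from azure.storage.queue import QueueClient


def run_deployment(payload: str) -> None:
    # hypothetical: look up tenant data and run Pulumi
    # via the Automation API (see the next section)
    ...


queue = QueueClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"],
    "deploy-requests",  # hypothetical queue name
)

while True:
    for msg in queue.receive_messages(visibility_timeout=1800):
        run_deployment(msg.content)
        queue.delete_message(msg)  # only delete once the deploy succeeds
    time.sleep(5)  # idle poll; KEDA scales the app on queue length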

Using the automation API

Switching to a worker meant running the Pulumi deployment from code rather than via the Pulumi CLI. Pulumi provides an "Automation API" to do this, which was relatively straightforward to get running. I generated an API token and accessed it using .NET configuration injected into my worker class, following the Inline Program example from the Automation API examples. Once this was in place, I integrated waiting for and pulling messages from an Azure storage queue into the worker, and used this, together with queries to my data store, to build out the deployment resources. Once done, I built a simple endpoint in my Azure Functions project to drop messages onto the queue. I then built out a Dockerfile; to get the worker to run I had to install the Pulumi CLI as part of the Docker image. I tested this locally and then pushed the container app to Azure.
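
The Automation API call itself is compact. A minimal inline-program sketch (the Python equivalent of the C# example; the stack and project names are hypothetical, and the API token is assumed to be in the PULUMI_ACCESS_TOKEN environment variable):

# Driving Pulumi from code with the Automation API (sketch).
import pulumi.automation as auto


def pulumi_program():
    # define resources here, exactly as in a normal Pulumi program
    ...


stack = auto.create_or_select_stack(
    stack_name="tenant1",             # hypothetical
    project_name="multi-tenant-poc",  # hypothetical
    program=pulumi_program,
)
stack.up(on_output=print)  # run the deployment, streaming engine output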

The Docker additions look like:

# any setup here
RUN apt-get update && apt-get install -y \
    curl
RUN curl -fsSL https://get.pulumi.com | sh
ENV PATH=/root/.pulumi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# your entrypoint here

The final architecture looks like this:

flowchart TB
    U[User]
    A[API]
    B[Storage Queue]
    C[Container]
    D[CosmosDB]
    AZ[Azure]
    U -- Submits Deployment --> A
    A -- Adds Message to Queue --> B
    B -- Pulls Message --> C
    C -- Queries data store --> D
    C -- Performs deployment --> AZ

I was pleased with how effective Pulumi was for this integration and look forward to using it again in the future. Using a background worker in conjunction with a Function App is a useful pattern for creating decoupled services, and Container Apps makes this pattern really easy to adopt. Both services allow 'scale to zero', so this type of application can be run very cost-effectively.