Re-Architecting This Website IV

“Biting off more than I can chew.”

As of Sunday, I had gotten droplet creation, volume storage, and a good bit of the base OS setup work handled. The major missing piece was creating the database. I am happy to say that work is now complete, though it wasn’t without bumps, warts, and bruises.

Creating the database server proved daunting because Ansible modules do not exist for DigitalOcean’s managed databases. I thought I’d just write a module or two, but that turned out to be a much bigger task than I first realized.

I’ve never written a module for Ansible, I’m no DBA, and the number of APIs exposed for database actions left me feeling deflated. Still, I soldiered through and created a Python script I can use to create, configure, or destroy a managed MySQL server. It makes a ton of assumptions to keep the scope of work down, but it’s sufficient for my needs at this point. It’s wrapped with argparse so I can run it from an Ansible command task and then parse the JSON output.

Check it out here: https://github.com/seaburr/WordPressOnDigitalOcean/blob/master/roles/database-server/files/digital_ocean_database.py
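To give a feel for how it’s wired in, here’s a rough sketch of driving a script like this from an Ansible command task and parsing its JSON output. The flag names, variable names, and task names below are illustrative assumptions on my part; the script’s real interface is whatever is in the repo linked above.

# Minimal sketch; --action/--name/--region and DO_API_TOKEN are placeholder
# names, not necessarily the script's actual flags or environment variables.
- name: Create managed MySQL cluster
  command: >
    python3 digital_ocean_database.py
    --action create
    --name wordpress-db
    --region nyc1
  environment:
    DO_API_TOKEN: "{{ do_api_token }}"
  register: db_result
  changed_when: db_result.rc == 0   # a create action always changes state on success

- name: Parse the script's JSON output
  set_fact:
    db_info: "{{ db_result.stdout | from_json }}"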

As I said, I believe Ansible modules should exist for DigitalOcean’s managed databases, as they do for AWS and GCP. However, I think that’s a larger project than one person can tackle on their first attempt at writing modules. I’d be happy to collaborate on that if it ever comes up.

In related news, I’ve also extended the install-wordpress role to create wp-config.php from a template, including fetching unique salts from wordpress.org. I’ve also fixed a few small issues with the Apache installation and added missing packages.
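A simplified sketch of those two steps is below. The template name, destination path, and ownership are assumptions for illustration rather than what the role actually uses; the salt URL is the standard api.wordpress.org secret-key generator.

# Sketch of the salt fetch and wp-config.php templating; paths and
# ownership here are illustrative, not necessarily what the role does.
- name: Fetch unique salts from wordpress.org
  uri:
    url: https://api.wordpress.org/secret-key/1.1/salt/
    return_content: yes
  register: wp_salts

- name: Render wp-config.php from template
  template:
    src: wp-config.php.j2          # template would reference wp_salts.content
    dest: /var/www/html/wp-config.php
    owner: apache
    group: apache
    mode: "0640"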

At this point, the only things missing to go from nothing to a functioning WordPress site are adding the database connection information to wp-config.php and handling SSL.

 

Re-Architecting This Website III

“Automate all the things!”

This weekend I’ve been busy laying much of the groundwork required to re-deploy this website. In part one, I provided a diagram showing how the entire website, including the database, is deployed on a single droplet (VM). While this model has worked well for about three years, it’s time to move on. To that end, I’ve begun building automation to create and manage WordPress sites.

I’ve created a repo on Github to house the infrastructure code. It’s available here: https://github.com/seaburr/WordPressOnDigitalOcean

As you can see, I’ve decided to continue using DigitalOcean. They’ve proven reliable. Unfortunately, DigitalOcean lacks some things, like NFS, that would make the website even more scalable. Still, I don’t feel they’re necessary at this point.

Currently, I can create/destroy VMs and create/destroy/attach storage volumes, and a decent portion of the application installation is complete. It’s capable of going from nothing to the WordPress setup page over HTTP.
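For the curious, the droplet and volume work leans on Ansible’s DigitalOcean modules. A trimmed-down sketch follows; the names, sizes, and return paths are placeholders (and parameter names vary a bit between Ansible versions), so treat the repo as the source of truth.

# Simplified droplet + volume provisioning; values are placeholders and
# the registered return structure differs slightly across Ansible versions.
- name: Create droplet
  digital_ocean_droplet:
    state: present
    name: wordpress-01
    size: s-1vcpu-1gb
    image: centos-7-x64
    region: nyc1
    unique_name: yes
    oauth_token: "{{ do_api_token }}"
  register: droplet

- name: Create storage volume
  digital_ocean_block_storage:
    state: present
    command: create
    api_token: "{{ do_api_token }}"
    volume_name: wordpress-data
    region: nyc1
    block_size: 10

- name: Attach volume to droplet
  digital_ocean_block_storage:
    state: present
    command: attach
    api_token: "{{ do_api_token }}"
    volume_name: wordpress-data
    region: nyc1
    droplet_id: "{{ droplet.data.droplet.id }}"   # exact return path varies by module version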

Finally, I know I could just move all of this to AWS Lightsail, but what’s the fun in that?? I learn best by doing, and I want to have a deep understanding of all the moving parts. Once I’m done building this out with DO, I might do the same on the other major cloud vendors.

 

Upgrading Atlassian Applications via Ansible

Over the next few weeks, I’ll be doing write-ups on how I upgrade some of our developer tools. Specifically, I’ll focus on the Atlassian applications I manage, but the process for our other tools (like Artifactory, SonarQube, etc.) is largely the same and most of the Ansible Roles are common across all of the applications.

For now, the playbooks and roles will not be available, but I’ll post relevant snippets where I feel they’re needed.

Finally, we use RHEL/CentOS internally (for hosting our internal tools; we have other OSes for doing builds), so our playbooks are oriented towards a single OS.

 

Building Something from Nothing

It’s pretty cool to build something that people grow to rely on. It’s also a little terrifying for someone like me who has a habit of second-guessing decisions and who tends to be risk-averse.

A few years back, the developer who writes our installers made changes to enable them to run in headless mode. To facilitate testing, I wrote a small PowerShell script that ran through the basic installation steps that our engineers and QA followed manually. My goal was to give this script to our QA department so they could download the build from our build system and, with a few parameters, run it against their testing environments to perform an upgrade.

We demoed it to our R&D organization. We showed how it could be tied into the build system so that when the build was completed, it could kick off a “deployment” job that would upgrade environments. It was pretty straightforward: download the build binaries into a folder called “artifacts” in a sibling directory to the script, then execute the script with a few parameters.

For over a year, no one really bit. My colleague and I mostly just used it for installer testing. That’s not completely true: a few QA teams decided to start using it, but if I had to guess, the automated upgrade script was being used for approximately one in twenty test environments.

At some point, the organization began to build out a performance testing team. They had requirements around automating the deployment of environments with more than one server. They were trying to replicate how our application was run in production environments, which meant that there might be dozens of VMs running different portions of our product.

Well, perhaps unfortunately for them, I was the only person really fooling with ways to automate the deployment of the product. I took their requirements and changed my script so that you could give it all the same parameters as before, plus a “run mode” flag that’d tell it what kind of work it’d be doing. For example, perhaps you wanted to stop or start all of the application services, or just update the database, or maybe just install a subset of the product’s functionality against the target VM. I (quite kludgily, if that’s a word) added that functionality to the existing script, all while sticking to my original goal: have a single PowerShell script that a user could run from anywhere with just a few parameters.

No one bit and the requester exited the company shortly thereafter.

Around the time I was asked to extend the functionality of my script, other teams began researching how to build out full automation for our SaaS environments. I sat with them and went over what I had devised. Obviously, what I had would not scale, but I had at least sorted out the process to upgrade an environment and scripted it. They began building a large, scalable automation system. I later found out that for several months my script was tucked inside their automation system doing some of the heavy lifting until they had rewritten everything.

One day, out of the blue, my manager sent me a link to a repository the performance team owned. They had taken the script I had written and begun developing additional tooling to coordinate their use case of large-scale deployments. At its core, it was calling my script to do all of the installation work. I was both elated and annoyed. It felt like my work, which had largely been ignored or dismissed as insufficient, was being used and built on for bigger and better things. And the only direct proof that this was my code was a single comment in the script with my name and email, plus some commit messages vaguely referring to my repository.

In short, it felt like a group of engineers grabbed my work, tweaked a couple of things, and ran with it without giving credit where credit was due. Still, I swallowed my pride quickly. The other side of the coin was this: I had built something that was seen as worthy of being reused. I had never done that before. Never mind that I was never hired as a developer. I came onto the build team through a string of transfers and promotions and built the script in my spare time. It wasn’t robust. It wasn’t easily extendable. It wasn’t built with any solid patterns. My goal was to write a single, portable tool that people could use to upgrade their single environments, and it had grown into something much, MUCH larger.

Nowadays, it’s used to upgrade several dozen testing environments, including several large multi-VM product environments, each day, some several times a day. Though I’ve never measured it, I’m positive it has by now given hundreds and hundreds of hours back to the engineering team and rescued them from a boring, thankless, and sometimes tedious task.

I learned a lot from building my automation tool. There’s a lot I wish I had done differently, but that’s okay. It’s in maintenance mode these days, it mostly does everything it’s ever been asked to do, and it continues to be relied on two years after I first demoed it. People keep waxing poetic about a replacement, but it hasn’t happened yet.

EDIT: I recently came across another instance of this code being used without attribution (which is fine, but goes against the spirit of code sharing) for some tools being shipped to a customer. 😐

 

This Old Blog

I originally created this blog as a way to document my career work. I have a real passion for what I do, and I truly enjoy sitting down and looking for refinements in an existing system or process. Still, my personal passion is not my career. I draw a clear line between my personal interests and the work I do to pay the bills. I know that doesn’t work for everyone. We’re told to follow our passions, and I do, just not through my 9-to-5 work.

Still, I haven’t written about my work in a long time, so here’s my attempt to lay out some of the projects I’ll be working on and documenting next year.

  1. Automate the deployment of all our internal applications. Currently, our internal tools are upgraded manually, approximately every six months. I’d like to be able to do that on a regular basis and have it be touchless. I’ve already laid the groundwork, demoed it to my team, and I’m over 50% complete.
  2. Create an SSP (self-service portal) for common requests we currently handle. There’s a lot of work that’s simple and repetitive for us (like creating repos, build configurations, and user provisioning) that we’d be better off not dealing with. I propose a self-service portal for teams to use to get these things done faster and without us being directly involved in each request.
  3. Automate all of our release procedures. We do some work, like release branching and certain packaging tasks, manually today. I’ve already automated a lot of this, but it needs to be rolled out fully so some of our tasks become virtually touchless.
  4. Completely automate provisioning of our build infrastructure and expose the provisioning scripts to engineering so they can review, propose changes, or even make those changes themselves.

So yeah, I think the theme of 2018 is automation and standardization. It’s gonna be a good year, just gonna send it.