Building a Recipe App on Vultr's Platform
Taking Flight with Vultr
Back in February, I made a very exciting move by joining Vultr as Senior Director of Engineering. Vultr is an independent cloud provider that has been in the industry for roughly 20 years. Over the last several years, this has become a very compelling space. Digital transformation has brought more and more businesses into the cloud, and many more businesses have started their lives as cloud native over the last decade. With the growing sentiment of cloud agnosticism, the fear of vendor lock-in, and the availability of highly configurable approaches to deploying infrastructure as code, there’s never been a better opportunity to seize a portion of a growing and exciting market.
Not hitching your wagon to the “big three” isn’t just a matter of cost, though price arbitrage will always be a compelling reason. Diversifying your providers and data centers is the most effective way to avoid service-impacting outages. It helps you reduce latency by keeping the edge closer to customers. It also helps maturing companies meet a growing list of location-specific regulatory requirements around data residency. Vultr has dozens of points of presence and counting, thanks to our excellent deployment operations team and the sysadmins who continue to build and grow our data centers.
It’s easier than ever to take advantage of Vultr’s platform thanks to our broad adoption of the industry standards that have emerged over the last several years. We support many of the tools that most cloud developers already use. I thought it’d be fun to create a reference implementation that took advantage of our available tooling, showing how easy it is to build modern web applications on top of Vultr’s offerings.
Enough Recipes!
To kick the tires on everything, I created a simple website called enough.recipes. Why call it that? I noticed countless recipe websites on Hacker News, and while they were all interesting proofs of concept, most of them just felt like another To-Do List App. The challenge to me was usually that they expected people to add recipes themselves – as if there weren’t already enough recipes on the web!
I used my background as a search engineer at Wikia (now known as Fandom) to scrape the Recipes Wiki and create a search engine that makes its more than 40,000 pages discoverable through a simple search. Back in my day, the company embraced open source a bit more and made it far easier to extract content from its sites; we even contributed to open source tools that made it easier to interact with data from a given wiki. That seems to have changed over the last decade, but no biggie: I could still use the core MediaWiki API to enumerate the URLs for all pages, and then extract the relevant portion of the DOM from a simple HTTP request, roughly as sketched below.
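Here’s a minimal Python sketch of that enumeration-and-extraction step. The wiki hostname, the CSS selector, and the function names are illustrative assumptions rather than the production code, which lives in the GitHub repository linked at the end of this post.

import requests
from bs4 import BeautifulSoup

# Hypothetical hostname for the Recipes Wiki; adjust to the wiki you're targeting.
API_URL = "https://recipes.fandom.com/api.php"
PAGE_URL = "https://recipes.fandom.com/wiki/"

def iter_page_urls():
    # Enumerate every page title via the core MediaWiki API (list=allpages),
    # following the continuation token until the list is exhausted.
    params = {"action": "query", "list": "allpages", "aplimit": 500, "format": "json"}
    while True:
        data = requests.get(API_URL, params=params, timeout=30).json()
        for page in data["query"]["allpages"]:
            yield PAGE_URL + page["title"].replace(" ", "_")
        if "continue" not in data:
            break
        params.update(data["continue"])

def extract_recipe_html(url):
    # Fetch the rendered page and keep only the article body.
    # "mw-parser-output" is the usual MediaWiki content wrapper, but treat the
    # selector as an assumption and verify it against the actual markup.
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    body = soup.select_one("div.mw-parser-output")
    return str(body) if body else None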
On a daily basis (via a Kubernetes CronJob resource), I iterate over the page list from the MediaWiki API, publishing each URL to my message bus. A consumer then performs a request against each URL and stores the appropriate data from the DOM in both a database and a search engine.
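Wired together, the producer and consumer halves might look something like the sketch below, reusing the hypothetical helpers from the previous snippet. The topic name, broker address, and the save_to_database / index_in_search_engine helpers are placeholders for illustration, not the actual implementation.

from kafka import KafkaProducer, KafkaConsumer

TOPIC = "recipe-urls"          # hypothetical topic name
BROKER = "broker-kafka:9092"   # hypothetical in-cluster bootstrap address

def publish_urls():
    # Runs daily from the Kubernetes CronJob: enumerate every page and
    # drop its URL onto the message bus.
    producer = KafkaProducer(bootstrap_servers=BROKER)
    for url in iter_page_urls():
        producer.send(TOPIC, url.encode("utf-8"))
    producer.flush()

def consume_urls():
    # Runs as a long-lived consumer: fetch each URL, then persist the extracted
    # content to the database and index it in the search engine.
    consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKER, group_id="recipe-scraper")
    for message in consumer:
        url = message.value.decode("utf-8")
        body = extract_recipe_html(url)
        if body:
            save_to_database(url, body)        # placeholder: e.g. a Django model save
            index_in_search_engine(url, body)  # placeholder: e.g. an Elasticsearch index call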
The Vultr Ecosystem
So what are all the bits and pieces that I used to create this site?
I built a containerized application using Django, with all the pieces running under a simple Docker Compose definition before I moved on to deploying it to a production environment.
You can easily deploy containerized applications to a Kubernetes cluster, and Vultr’s Kubernetes Engine is a major achievement that we’ve GA’ed this year. Having used a variety of cloud-based Kubernetes offerings, I’m quite pleased with what the team has delivered. Being able to back our clusters with a variety of customizable instance types gave me a great deal of flexibility around both cost and performance.
I was able to create the Kubernetes cluster using Vultr’s Terraform provider, which was seamless to use. It’s built on top of our API, which is nicely documented since it conforms to the OpenAPI spec.
The Docker image for the Django app served as the basis for the Kubernetes Deployment and Service definitions, which acted as the backend for a simple nginx image that provided SSL termination and allowed scaling across one or more Gunicorn (WSGI) replicas.
Vultr also provides a Load Balancer that can be provisioned as a resource from within a Kubernetes cluster. This automatically exposed a static IP for public ingress, and annotations made it possible to properly handle TLS and port forwarding.
I was even able to use Vultr’s DNS by pointing the domain I purchased to their nameservers, and then setting the LB’s external IP as the A record for the domain.
Since this was a Django app, I would need to get a database set up. Vultr has recently rolled out MySQL as a beta Database as a Service offering, and it was super exciting to use this as an opportunity to preview its functionality. My favorite thing about how our DBaaS works is that the UI provides the ability to “click to copy” the database URL (i.e. mysql://user:pass@some-ip:3306/your_db). The database URL has become the lingua franca of many ORMs, and I’ve become quite accustomed to having to compose this string myself to work with dj-database-url, so the convenience was fantastic. Remember that this URL contains secrets, and so it should be handled as sensitive information.
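In Django, consuming that copied URL amounts to a couple of lines with dj-database-url. Here’s a minimal sketch, assuming the URL is supplied through a DATABASE_URL environment variable (for example from a Kubernetes Secret) rather than hard-coded in settings:

# settings.py (excerpt)
import os
import dj_database_url

# Parse the mysql:// URL copied from the Vultr DBaaS UI.
# Keeping it in an environment variable keeps the credentials out of source control.
DATABASES = {
    "default": dj_database_url.parse(os.environ["DATABASE_URL"], conn_max_age=600)
}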
Since I was doing some basic styling with Tailwind, I needed a place to store my static assets. It was very easy to use the django-storages S3 backend with Vultr’s S3-Compatible Object Storage by simply plugging in the right credentials and configuration.
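For reference, the relevant settings look roughly like the sketch below. The bucket name, endpoint, and environment variable names are assumptions meant to show the shape of the configuration; swap in your own Object Storage region and credentials.

# settings.py (excerpt)
import os

# Serve static assets (the compiled Tailwind CSS, etc.) from Vultr Object Storage
# via django-storages' S3-compatible backend.
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

AWS_ACCESS_KEY_ID = os.environ["OBJECT_STORAGE_ACCESS_KEY"]      # assumed env var name
AWS_SECRET_ACCESS_KEY = os.environ["OBJECT_STORAGE_SECRET_KEY"]  # assumed env var name
AWS_STORAGE_BUCKET_NAME = "enough-recipes-static"                # hypothetical bucket
AWS_S3_ENDPOINT_URL = "https://ewr1.vultrobjects.com"            # assumed region endpoint
AWS_DEFAULT_ACL = "public-read"
AWS_QUERYSTRING_AUTH = False  # serve objects via plain public URLs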
Some of my deployments needed block storage, as did the Helm charts I needed for both Kafka (my message queue) and Elasticsearch. I was able to use our Scalable Block Storage product to support these use cases. It’s worth noting that this required specifying both the storage class name and the desired size; without both of these configured for each deployment the chart manages, those workloads won’t run successfully in your cluster. Here’s the helm command used to get Kafka running:
helm install -n enough-recipes \
  broker bitnami/kafka \
  --set=persistence.storageClass=vultr-block-storage \
  --set=persistence.size=10Gi \
  --set=zookeeper.persistence.storageClass=vultr-block-storage \
  --set=zookeeper.persistence.size=10Gi
Bringing it All Together
I’ve built plenty of things with EKS that involved a lot of banging my head against the wall trying to figure out the nuances of IAM roles and various permissions settings. Vultr’s default behavior eschews many of the arcane settings that make the bigger cloud providers so hard to work with. That’s one of the reasons we call it the Developer-First Platform. By enabling fast prototyping at a competitive price, we should be on the tip of your tongue when you’re developing MVPs or building contract applications for cost-conscious clients.
You can view Enough Recipes on GitHub for all of the application code and definitions.
It was a lot of fun to build this site and understand how all the great pieces of the Vultr platform fit together. I talked about this with our developer advocate, Walt Ribeiro, in a video for Vultr’s YouTube channel; you can check it out there.
So what’s next for Vultr? Well, without giving too much away, we just had a very exciting H2 planning session with lots of great takeaways. You’ll be able to do a lot more on our platform with many more of the conveniences you may have come to expect from the bigger guys.
Did I mention we’re hiring? Solving interesting problems on a daily basis is just par for the course. We occupy a space that’s not going away any time soon, and will only gain more attention as cost-conscious companies revisit their cloud costs during the upcoming business cycle. If you’re interested, drop me a line or check out our jobs page!