Hi there! This is just a quick post to keep you updated on what is happening with this blog and with me. I'm getting ready for GopherCon UK, and it's taking much more time than I expected.
So far we've managed to play with NATS pub/sub and extended it with the streaming service to create a more reliable message queue. The problem is that, with the default configuration, we have little to no control over where the messages, subscriptions, and information about the queue's clients reside. Fortunately, NATS Streaming allows you to use an SQL database as storage, which we will explore in this very post.
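As a taste of what the post covers, NATS Streaming's store type can be switched from the default at server startup. This is a sketch of starting the server against Postgres; the database name and connection string are placeholders you would adjust to your own setup:

```sh
# Hypothetical example: point nats-streaming-server at a Postgres store
# instead of the default in-memory one.
nats-streaming-server \
  --store sql \
  --sql_driver postgres \
  --sql_source "dbname=nats_streaming sslmode=disable"
```

With this in place, channels, messages, and subscription state survive a server restart because they live in the database rather than in memory.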
In the previous blog post, we took a look at how NATS works in general, and we created a pub-sub connection between two services to allow basic communication. The problem was that it only worked while both the publisher and the subscriber were running at the same time. What if we want to allow particular pieces to go down, get redeployed and updated, yet still keep all the messages so they are processed once the receiver wakes up? This is where we need NATS Streaming.
When building a microservice architecture, you basically have two ways to build communication between its elements. The first, obvious one is to have services call each other directly, e.g. via HTTP endpoints, while the other is to have a message bus/queue where one app publishes a message and others read it. This post explains the basics of one of the message bus solutions, NATS.
From my experience, there are two main use cases for Docker: creating an output container with the application that can be deployed somewhere, and creating a container with development dependencies (e.g. language toolchain, database versions) that lets you build/compile your application without having everything installed on your machine. You can do both at the same time, but that used to result in huge images going to production. Thankfully, there is a better way now: multi-stage builds.
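The idea can be sketched in a few lines of Dockerfile: one stage with the full toolchain does the compiling, and a second, minimal stage ships only the binary. The image tags, paths, and binary name below are illustrative placeholders:

```dockerfile
# Stage 1: build the binary using a full Go toolchain image.
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /app/server .

# Stage 2: copy only the compiled binary into a tiny runtime image.
FROM alpine:latest
COPY --from=builder /app/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

The final image contains none of the Go toolchain or source code, only the stand-alone binary, which is what keeps production images small.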
Some time ago I ran into some example source code in the standard library that takes an interesting approach to writing unit tests. At first it felt strange, but I decided to apply it to my everyday routine and realized how awesome it is. I always put the readability of my source code as a top priority, which is why I adopted custom check functions in my tests.
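To show what I mean by a custom check function, here is a minimal, runnable sketch. The `sum` function and `checkSum` helper are hypothetical; in a real test file the helper would take a `*testing.T`, call `t.Helper()`, and report with `t.Errorf` instead of panicking:

```go
package main

import "fmt"

// sum is a stand-in for the function under test.
func sum(a, b int) int { return a + b }

// checkSum hides the repetitive call-compare-report plumbing so each
// test case reads as a single line.
func checkSum(a, b, want int) {
	if got := sum(a, b); got != want {
		panic(fmt.Sprintf("sum(%d, %d) = %d, want %d", a, b, got, want))
	}
}

func main() {
	// Each case is one readable line instead of several lines of boilerplate.
	checkSum(1, 2, 3)
	checkSum(-1, 1, 0)
	checkSum(0, 0, 0)
	fmt.Println("all checks passed")
}
```

The payoff grows with the number of cases: the test body becomes a plain list of inputs and expectations, and the noise lives in one place.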
Recently I've been working with an internal Go tool that reads the user credentials used for authentication from environment variables. While I find that very handy, I wondered how difficult it would be to add the option of providing the password manually, but in a safe (non-displaying) way. As it turned out, it couldn't be easier!
One of the first things I learned when starting to work with Go was that it has so-called _proverbs_. They are a list of rules, sounding like smart quotes, that should guide you on your journey. For a long time, I didn't quite understand why I should _accept interfaces but return structs_. I wanted to return interfaces as well, since that would define what my return type does, not what it is exactly. It took almost a full year of working with Go exclusively before it struck me how wrong I was. This post explains my line of thought; I hope it saves some of you some time before you have your _Aha!_ moment.
We've already seen the basics of Vault and wrote some code to access it in the previous posts; this time we'll focus on two aspects that give us more control over who can do what with our Vault. Let's dive into it.
In the previous post we talked about the basics of Vault: its architectural concepts, nomenclature, and the basic operations that can be performed. Now it's time to turn that theory into practice and write some Go code that will let us access our secrets.