
Weekly Changelog @ Pixeesoft

Welcome

Hello and welcome to our first Weekly Changelog, mapping out everything that happened last week. Here we summarize the week and link to all the useful content.

Sprint Planning

As you might already know, we are currently in the middle of a bigger project: migrating an older application from bare-metal infrastructure to the cloud. We began our second sprint, which called for the standard ritual of sprint planning.

Post is here, video below:

In short - the main focus is bringing all the remaining API services to life.

Live Streams

As some of you may have noticed, the pace has changed, and standups are no longer scripted or post-produced in a fancy way. That approach was too time-consuming and did not leave much room for the actual art of programming. Therefore a new format has arisen - live streaming. While this approach is definitely challenging with respect to it being…actually LIVE…it is also good practice (for me) to articulate my thoughts clearly so the standup stays brief, but still informative.

The standup prep is now significantly shorter than for a typical YouTube video. I still prepare some notes for the talking points and I still have to make a thumbnail in advance, but that’s peanuts compared to writing a full script and then cutting the video together in post-production.

After the stream is recorded, the workflow is as follows:

  1. Cut the lengthy beginning screen.
  2. Find a suitable timestamp where the ending screen can be set up.
  3. Count 20 seconds from there and cut the video to end at that point (an ending screen can last 20 seconds at most).
  4. Add the ending screen (subscribe button, newest video and recommended list).
  5. Add any YouTube cards if applicable.
  6. Go to the standup video from the day before and change its ending screen to point not to the generic “newest video” but to the actual newest recorded one. This ensures continuity going forward.
  7. Create a Buffer post for FB, Twitter and LinkedIn to share the YouTube video.

Then for Instagram and Facebook:

  1. Wait for the HD version to go online.
  2. Download the video.
  3. Open DaVinci Resolve and load the IGTV preset.
  4. Open the YouTube video and load it into the timeline.
  5. Remove the background humming noise with a preset.
  6. Cut the beginning and end of the stream (empty screens).
  7. Add a blanking fill to get a colored, blurred background.
  8. Add two titles, one above and one below.
  9. Export the video.
  10. Capture a screenshot for the thumbnail.
  11. Post the video on IGTV and FB.

It’s still “work” and it still takes time, but with enough practice, I believe this can all be done quickly. Especially with all the presets, the process feels very smooth.

On to the actual standups:

Standup 1

The very first live standup, and it almost began with a disaster - once the stream started, the computer told me it had 5% battery left. It fits Gary Vee’s recommended strategy, though - “Don’t create, document!”. It’s proof it wasn’t scripted 😅

Anyway, for the stream we’re using OBS (= Open Broadcaster Software) Studio. It’s a super handy tool that lets you set up various “sets” of your stage as scenes and keep multiple versions ready for your stream. It took a solid hour to fully set it up - not just the scenes, but also the connection to YouTube, the lighting and the microphone.

For the first stream we had:

  1. Full-screen selfie camera
  2. Trello board + selfie camera in the corner

Unbeknownst to us, a stream can behave in a funny way at the beginning and at the end, so we figured that for the next stream we would need a start and an end screen to let it properly start and finish, respectively.

Tasks

The main task is to get acquainted with the Billing Service:

  1. Create a repository
  2. Create a DB based on the installation script

From then on, we can start deploying the service in a Kubernetes cluster as all the necessities will be in place.

Standup 2

This time there were no problems or obstacles - batteries fully charged, computer plugged in, no software to tinker around with. Just set the lighting, connect the microphone and…LIVE 🔴!

Based on a thorough code analysis of the Billing Service, the decision has been made to sunset this codebase. There are two major issues:

  1. The service has an extensive codebase wired into a CRM tool we no longer use.
  2. The service uses MS SQL.

While both of these obstacles have valid reasons to exist, they render the service unusable for our project moving forward. The billing service has to be decoupled and standalone, and any proprietary technology that requires a license is a no-go right from the start. Vendor lock-in is a trade-off we are currently not comfortable making, as we are in an early stage of the migration.

I am fully aware of the choice behind the technology made years ago - it was the most robust solution for the given task. But if we are to use a 3rd-party billing service such as Stripe anyway, there is no need to bend over backwards to make the service run on top of proprietary software.

Tasks

For the reasons mentioned above, we have decided to mock the endpoints used by our application with Node.js and Google Cloud Run. The application uses only 5 endpoints on the service:

  1. getAccount
  2. prechargeCredit
  3. confirmPrecharge
  4. charge
  5. getPrice

Their return values are very simple, usually either true/false or a small JSON object. This task shouldn’t take more than half a day.
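To make that concrete, here is a minimal sketch of such a mock in Express.js. The routes and payloads below are illustrative assumptions - the real paths and return values come from the actual service:

// Minimal sketch; routes and payloads are assumptions for illustration.
const express = require('express');
const app = express();

app.use(express.json()); // parse JSON request bodies

// Hypothetical path - the real route may differ.
app.post('/billing/getAccount', (req, res) => {
    res.json({ id: 1, active: true }); // placeholder account object
});

// Hypothetical path - most of the endpoints just return true/false.
app.post('/billing/charge', (req, res) => {
    res.json(true);
});

// Cloud Run provides the port via the PORT environment variable (default 8080).
app.listen(process.env.PORT || 8080);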

Standup 3

In the previous standup we talked about the need to pivot because the billing service was too tightly coupled to its previous environment. For the mocks we opted for Node.js + Express.js and Google Cloud Run. There was one challenge during mock creation, and that was path building. As this service was used by multiple actors in the system, not everything was called in the same way - but thankfully everything is resolvable by path.

One especially juicy problem was encoding - in the path as well. But Express.js allows wildcard parameterization of the path, allowing for code such as:

app.post('/example/:type/DummyClass/getExampleResource', (req, res) => {
    // The :type path parameter decides the response format.
    if (req.params.type === 'json') {
        res.setHeader('Content-Type', 'application/json');
        res.end(JSON.stringify({ data: true }));
    } else {
        res.setHeader('Content-Type', 'text/plain');
        res.send('true');
    }
});

which massively reduced the friction in mocking and resulted in a neat single-file project.

To debug what gets called and how, I have added this “catch-all” clause at the end of the route definitions:

// Note: the express.json() middleware must be registered for req.body to be populated.
app.post('*', (req, res) => {
    console.log(req.params);
    console.log(req.path);
    console.log(req.body);
    res.send(null); // respond with an empty body
});

This logs any traffic coming our way, so we can see which parameters and data the other services are sending and fine-tune the mock accordingly.

The whole project runs in a Docker container based on node:10-alpine, exposing port 8080 (the Google Cloud Run default - the app should listen on 0.0.0.0, not just localhost). Cloud Run then exposes the service on a standard HTTPS-enabled URL. Super handy use-case.
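For reference, the container setup can be sketched with a Dockerfile along these lines - the file layout and the index.js entry point are assumptions:

# Hypothetical sketch; the file layout and entry point are assumptions.
FROM node:10-alpine
WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching.
COPY package*.json ./
RUN npm install --production

# Copy the rest of the sources.
COPY . .

# Cloud Run sends requests to port 8080 by default.
EXPOSE 8080
CMD ["node", "index.js"]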

Tasks

From now on, the focus shifts to the main API Server. It’s important to create a git repository with all its respective subprojects and to set up the database where all the data will live. There is one obstacle on the way to completing the DB - there is no schema in the codebase.

The solution? Logging into the production server and dumping it - I have all the access, so this should be a quick visit.
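Purely for illustration - if the database were MySQL, a schema-only dump would be a one-liner (the engine, user and database name are all placeholders):

# Hypothetical command; engine, user and database name are placeholders.
mysqldump --no-data -u admin -p legacy_db > schema.sql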

Standup 4

Last standup we talked about cloud DB deployment (after the schema dump) and a git repository for the code. All of those activities went smoothly, allowing us to proceed further.

Tasks

As we now move to containerizing the main API Server, it is important to note that the deployment is almost identical to the Auth Service’s. That might save us a lot of time. It’s not exactly 1:1, though, so it is important to properly document the differences in order to prevent any future confusion.

In order to test the container locally, it is necessary to have the Auth Service deployed, so let’s start with that. The idea is to deploy it in a Kubernetes cluster, namely Google’s GKE. Google recently started offering managed SSL certificate provisioning as a first-class object in Kubernetes, which is a big step up from installing cert-manager and debugging it against Let’s Encrypt’s staging servers. I’ve done the more complicated workflow before, so I welcome the new, albeit slightly limited, managed version.
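For the curious, such a managed certificate is declared as a regular Kubernetes object along these lines - a minimal sketch with a placeholder name and domain (the apiVersion may differ depending on cluster version):

# Hypothetical sketch; the resource name and domain are placeholders.
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: auth-service-cert
spec:
  domains:
    - auth.example.com

The certificate then gets attached to the service’s Ingress via the networking.gke.io/managed-certificates annotation.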

The deployment needs to be tested and may spill over into the weekend, which would be really annoying. I am not a big fan of starting issues that might have this property, but there’s no way around it - it has to be done, regardless.

Conclusion

That’s it for the weekly changelog, thank you for being here with us on this incredible journey. Don’t forget to subscribe on YouTube, and follow us on Instagram, Twitter, Facebook or LinkedIn. And keep an eye on our open-source endeavors on GitHub as well!

See you out there! 👋