All Articles

Weekly Changelog @ Pixeesoft

Welcome

Hello and welcome to our Weekly Changelog, mapping out everything that happened last week. Here we summarize the week and link all the useful content you might have missed.

This week is one of those weeks where we simply continue with the work in the sprint we planned out last week. Our sprints usually span two weeks, so don’t expect any sprint to take longer than that - otherwise it would turn into waterfall development.

At the end of last week we left the standup hoping we would deploy the Auth Service and slide into the weekend without any worries. Let’s see how that worked out.

Standup 5

Of course I was left with unfinished work at the end of the day and of course it haunted me the whole weekend. Lesson learned - don’t start anything big and possibly stressful before the weekend.

The problem with the Auth Service is that it is basically a good old LAMP stack with several virtual host applications. It even used to have the database on the same machine, but we have already decoupled that. So now we’re down to the GUI and the API running on the same machine, just on different ports. Sounds OK, but where’s the catch?

Neither of the apps exposes a health endpoint that could be easily called by Ingress when deployed to Kubernetes. The apps are:

  1. Frontend written in Nette Framework.
  2. API Backend written in an in-house PHP framework.

I personally haven’t ever written a single line of code for these and am definitely not confident changing too much of the codebase before getting closely acquainted with it. I’d like to deploy the code as-is and then return to it for changes. For this reason, Kubernetes might not be the most suitable option, as its prerequisite is having liveness and readiness probes hooked up to a health endpoint, so that Ingress can mark its backends as HEALTHY.
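
Just to make the requirement concrete, here is a minimal sketch of what such a health endpoint could look like in plain PHP - a hypothetical health.php dropped next to either app, not the actual codebase:

```php
<?php
// health.php - hypothetical sketch, not the actual Pixeesoft code.
// A standalone endpoint that Kubernetes liveness/readiness probes
// (and the GKE Ingress health check) can call without touching
// the rest of the application.

header('Content-Type: application/json');
http_response_code(200);

echo json_encode([
    'status' => 'ok',
    'time'   => date('c'),   // ISO 8601 timestamp for quick eyeballing
]);
```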

As GCE now supports a direct deployment of your Docker container images, it seems like the most reasonable thing to do. Apparently, they’ll let you put an SSL certificate in front of it as well!

Tasks

The plan is simple - Kubernetes didn’t work out, so let’s try to get both the Authentication Service and the main API Server deployed on GCE, give them public IP addresses and put SSL certificates in front of them. This time, hopefully, without any hiccups.
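
For context, deploying a container image straight to a GCE VM is essentially a one-liner; a sketch with placeholder instance, project and image names (not our actual ones) could look like this:

```sh
# Sketch only - instance name, project and image are placeholders.
gcloud compute instances create-with-container auth-service-vm \
  --container-image=gcr.io/my-project/auth-service:latest \
  --tags=http-server,https-server   # open the default HTTP/HTTPS firewall rules
```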

Standup 6

Technology is sometimes completely unpredictable - like English weather. Sunny in the morning, rainy at noon, cloudy in the afternoon and stormy in the evening. That’s how I feel about my deployment adventures - last time I outlined why not to use Kubernetes, and now it looks like I need to justify why we should. So what’s the deal?

It appears that if you want to put SSL in front of your GCE deployment, you do so by creating a load balancer and defining a backend for it. But the idea behind load balancing is that the balancer has to know, based on some value, whether it needs to spin instances up or down. You guessed it - a health endpoint. That means there is (again) no free lunch - we need to get our hands dirty with some PHP code.

If this is the case, then there is no business value in deploying the service to GCE and then defining the load balancer for the SSL - that is straight-up vendor lock-in. So the idea now is to go back to the previous scenario - infrastructure as code on GKE.

Tasks

So in order to have the Ingress running, we need to create two health endpoints - one for the frontend and one for the backend. And in order to have the Ingress pick up the correct health checks, we counterintuitively need to deploy the service twice. This is an important aspect, as it will let us decouple the two apps more easily in the future.

  1. Port 80 on '/*' with health check on '/health/check/'.
  2. API port on '/api/*' with health check on '/'.

The asterisks are an important part of the path, otherwise the pattern matching doesn’t work and Ingress marks the backends as unhealthy! ⚠️
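
Put together, a minimal sketch of such an Ingress could look like this (service names and ports are placeholders, assuming a hypothetical auth-frontend and auth-api, written against the current networking.k8s.io/v1 API):

```yaml
# Sketch only - names and ports are placeholders. On GKE the '/*'
# wildcard syntax requires pathType: ImplementationSpecific.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-ingress
spec:
  rules:
    - http:
        paths:
          - path: /api/*               # API backend, health-checked on '/'
            pathType: ImplementationSpecific
            backend:
              service:
                name: auth-api
                port:
                  number: 8080
          - path: /*                   # frontend, health-checked on '/health/check/'
            pathType: ImplementationSpecific
            backend:
              service:
                name: auth-frontend
                port:
                  number: 80
```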

This results in the following YAML setup:

  1. 2 Deployments
  2. 2 Services (NodePort)
  3. 1 Ingress

If the deployments are not separated and you implement two ports on a single deployment, Ingress will assume that the health check for the other port is on '/'.
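
A sketch of one of the two Deployment/Service pairs (placeholder names and image) shows where the health check comes from - the GKE Ingress can derive its health check path from the container’s readinessProbe:

```yaml
# Sketch of the frontend pair only - names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-frontend
  template:
    metadata:
      labels:
        app: auth-frontend
    spec:
      containers:
        - name: auth-frontend
          image: gcr.io/my-project/auth-service:latest
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /health/check/   # picked up by the Ingress health check
              port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: auth-frontend
spec:
  type: NodePort                     # required for GKE Ingress backends
  selector:
    app: auth-frontend
  ports:
    - port: 80
      targetPort: 80
```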

That way, the Google-managed SSL will provision a certificate for the whole cluster, and everything is therefore hidden behind the same URL over HTTPS. The only vendor lock-in from here is the managed certificate; otherwise we’re free to take the application wherever Kubernetes is offered.

Standup 7

This time no more changes - we are sailing towards the haven of Kubernetes deployment. As the Auth Service is up and running, I copy+pasted the YAML for the API Server as well (with success). The service is reachable on a URL, has SSL provisioned and is ready for testing - but what kind of testing? Unfortunately, integration testing. And that takes time.

Tasks

The main obstacle when one doesn’t have MAMP/LAMP installed is the path to each test case:

  1. Make a change in the code.
  2. Build a container.
  3. Give the container GCP tags.
  4. Push the container to the cloud repository.
  5. Restart the cluster.
  6. Test. Rinse and repeat.

As you can imagine, it’s fun for the first three runs, but it becomes annoying work by the fourth time. So we need to optimize the workflow - find the bottlenecks and speed it up. Aside from that, we need to make sure that all the actors in the system are talking to each other (auth - api - billing).

Standup 8

Last time, the problem outlined was the bottleneck of slow deployment - a long and inefficient process of getting the code up and running for testing purposes. Thank goodness we have shell scripts - the Swiss Army knife of every developer who likes to automate things.

The scripts I wrote are:

  1. build.sh - builds the latest image.
  2. local.sh - calls build.sh and runs the resulting image.
  3. redeploy.sh - pushes the latest image to GCP and redeploys it in the cluster.
  4. stage.sh - runs build.sh and redeploy.sh.
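
In spirit, the whole stage.sh flow boils down to something like this sketch (image and deployment names are placeholders, not the actual scripts):

```sh
#!/bin/sh
# Sketch of the build-and-redeploy flow; not the actual Pixeesoft scripts.
set -e

IMAGE=gcr.io/my-project/auth-service:latest

docker build -t "$IMAGE" .                        # build.sh
docker push "$IMAGE"                              # redeploy.sh: push to GCP...
kubectl rollout restart deployment/auth-frontend  # ...and restart the pods
```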

One lesson learned - the HARD WAY:

kubectl replace -f config.yaml is NOT the same thing as calling kubectl delete and then kubectl apply - it all depends on your imagePullPolicy ‼️

I made some changes and didn’t see them in production, and of course I questioned myself first rather than the platform, only to find myself two hours later realizing that the running image was indeed not the one I had pushed last.
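
In other words (a sketch of the gotcha, not a universal rule):

```sh
# 'replace' swaps the API object; if the pod spec is unchanged, no pods
# are recreated and the old image keeps running - even with a freshly
# pushed ':latest' tag.
kubectl replace -f config.yaml

# 'delete' + 'apply' recreates the pods; with imagePullPolicy: Always
# every new pod pulls the image again, so the pushed changes show up.
kubectl delete -f config.yaml
kubectl apply -f config.yaml
```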

Tasks

There seem to be some peculiarities between the DB and the Auth Service’s behavior - some parts are not clicking together, and it’s important to check the production database to see what kind of magic is happening behind closed doors.

Also, as talked about in the sixth standup, we are mapping the API endpoint to '/api/*' on the Ingress. But this is not the desired behavior, because the Ingress takes the WHOLE path and sends the request to the backend behind it. The backend, however, is not expecting '/api/*' in the request and returns either a 404 or nothing at all. This is not a big issue, but it has to be changed to '/call/*' so that it works well.
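
A hypothetical illustration of why the prefix matters (example.com stands in for the real domain): the GCE Ingress does no path rewriting, so the backend must actually route whatever prefix it is mapped to:

```sh
curl https://example.com/api/status    # backend receives '/api/status' -> 404
curl https://example.com/call/status   # backend routes '/call/...'     -> 200
```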

Other than that - continuing with debugging and testing communication between the services.

Friday

At first I thought I’d be filming the Sprint 2 Review. But when counting back the days spent on work, I realized some hours had gone to admin and other work besides the migration. So I added the last day to the mix to polish and finalize the tasks at hand. There were no new tasks, just putting everything together and wrapping it up for the next sprint iteration. It turned out to be exactly what the doctor ordered:

  1. The reason the requests were timing out was obsolete TLS. This issue was mitigated simply by updating TLS to 1.2 in php.ini.
  2. It seems like the API Service is persisting some sessions on the disk. How can this be solved in a read-only CentOS container? By using Volumes, namely emptyDir. This allows for storing ephemeral data without any problems.
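
A sketch of the emptyDir approach (placeholder names; the session path assumes a stock CentOS PHP install):

```yaml
# Sketch only - the volume gives an otherwise read-only container a
# writable, ephemeral scratch space for PHP session files. Its
# contents vanish when the pod goes away.
apiVersion: v1
kind: Pod
metadata:
  name: api-service
spec:
  containers:
    - name: api
      image: gcr.io/my-project/api-service:latest
      volumeMounts:
        - name: sessions
          mountPath: /var/lib/php/session   # assumed default session.save_path on CentOS
  volumes:
    - name: sessions
      emptyDir: {}
```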

At the end of the day - commit, push, close the issue and enjoy the weekend. Diametrically different from last weekend!

Conclusion

That’s it for the weekly changelog, thank you for being here with us on this incredible journey. Don’t forget to subscribe on YouTube, and follow us on Instagram, Twitter, Facebook or LinkedIn. And keep an eye on our open-source endeavors on GitHub as well!

See you out there! 👋