Please support us by subscribing to our channel! Thanks a lot! 👏
Hello and welcome to our Weekly Changelog mapping out everything that happened last week. Here we summarize everything and link all useful content that you might have missed.
Sometimes, life punches you in the face. And sometimes it’s admin. A LOT of admin. Having a tech startup isn’t just cool hacking around and re-inventing the world. It’s also a lot of paperwork. While we do have a partner for a lot of the paperwork, there are simply tasks that can only be done in-house. Such tasks include, but aren’t limited to:
- Replying to a letter from the tax office to confirm that we REALLY did declare and pay everything.
- Collecting all invoices and receipts from the previous quarter so that we can send them to the accounting company.
- Cleaning the e-mail inbox of various non-urgent questions and queries.
Yeah, it’s a lot of non-tech tasks. But they are important and have to be done; otherwise the company couldn’t exist.
On Wednesday we started YET ANOTHER chapter in our migration to the cloud! We went through the standard ritual of planning and I fully encourage watching it. How does it differ from yours? Let us know!
Post is here, video below:
One thing you will notice in the following video (and the ones after it as well) is the quality of the picture. I have managed to switch the streaming camera in OBS to the M50, which outputs a far superior picture to the FaceTime HD camera built into my computer. I used a program from this repository.
Once a sprint has started, standups follow! As there is nothing to report from the day before, we jump right into what needs to be done: deploying the notification service proxy.
This service is a proxy for all events in the system. It was written before Firebase Cloud Messaging was around and serves a similar purpose: channeling events from the backend to the frontend. There is some level of the unknown here. Without a closer look into the system, we aren’t sure where the notifications are stored. Whether it’s the Memcached service we deployed last sprint or simply the MySQL database, we have to find out by tinkering with the service.
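One quick way to tinker is to watch Memcached’s item counters while triggering a notification: if the numbers move, events are probably cached there rather than only in MySQL. Here’s a minimal sketch using the standard memcached text-protocol `stats` command; the hostname `memcached-host` is a placeholder, not our real service address.

```python
# Probe a Memcached instance to see whether the notification service
# appears to store events in it. "memcached-host" is a placeholder.
import socket


def fetch_memcached_stats(host: str, port: int = 11211, timeout: float = 2.0) -> str:
    """Send the text-protocol `stats` command and return the raw reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"stats\r\n")
        chunks = []
        while True:
            chunk = sock.recv(4096)
            chunks.append(chunk)
            # The stats reply is terminated by an END line.
            if not chunk or b"END\r\n" in chunk:
                break
        return b"".join(chunks).decode()


def parse_stats(raw: str) -> dict:
    """Turn `STAT <name> <value>` lines into a plain dict."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats


if __name__ == "__main__":
    stats = parse_stats(fetch_memcached_stats("memcached-host"))
    # A curr_items count that rises after triggering a notification hints
    # that events are cached here rather than only persisted to MySQL.
    print("items in cache:", stats.get("curr_items", "0"))
```

Running it before and after poking the notification endpoint (or just watching `curr_items` and `cmd_set`) gives a cheap answer without reading the whole codebase.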
SPOILER ALERT! I hope you’re not jumping between headings, are you? Read the Standup section above first, otherwise where is the fun? Anyway…
The notification service was easy to package and deploy, without any speed bumps. Hopefully we can move as fast with the next task, or even faster: the URL tester. This service tests a newly added edge device, or rather, it runs PRIOR to the edge device being added to the system. It verifies whether or not the given device yields results in the expected format.
There is nothing special about this service, and it should use the very same dependencies as the other services. We still need to verify which cluster it will run in (eater vs. feeder). We were wrong about which cluster the notification service belongs to, so maybe it will be the opposite this time? 😅
If it takes less time than anticipated, we will pick up another task and move forward quicker.
Other than that, that’s it! Thanks for reading, we hope you had a lovely weekend.
Thank you for being here with us on this incredible journey. Don’t forget to subscribe on YouTube, follow us on Instagram, Twitter, Facebook or LinkedIn. And keep an eye out on our open-source endeavors on GitHub as well!
See you out there! 👋