Learning ROP: Resiliency, Observability and Performance

Tech stack breakdown: Reach | Building my SaaS

In the dynamic landscape of digital content creation, where innovation is the key to staying ahead, a robust and tailored tech stack is the lifeblood of success. For content-first teams navigating the ever-evolving challenges of collaboration, scalability, and seamless workflow integration, the right software can make all the difference. In this blog series, we embark on a journey to demystify the technological marvel behind a SaaS platform meticulously crafted for content-first teams.

Ideologies behind the choices

When making decisions about the technology for this project, I kept in mind three key factors or goals that I personally aim to achieve, beyond the obvious monetary benefits. With four years of engineering college and one year of industry experience, I've been actively involved in building various projects both personally and professionally. My projects have functioned well for smaller user bases, ranging from tens to hundreds—maybe a thousand people at most. Therefore, my overarching goals for this project include:

  1. Resilience

  2. Observability

  3. Performance

Exploring these ideologies

  • Resilience

    Definition: Resilience in software engineering refers to a system's ability to maintain functionality and recover quickly from failures or disruptions. It involves designing software architectures and components to gracefully handle errors, faults, or unexpected conditions.

    As noted above, my projects have delivered their intended outcomes, but usually only for a specific instance or a limited set of cases. I have never attempted, let alone achieved, a complete user flow in which every conceivable path works seamlessly.

  • Observability

    Definition: Observability is the ability to understand and monitor a software system's internal state and behavior using various tools and techniques. It involves collecting and analyzing data, logs, and metrics to gain insights into the system's performance, health, and potential issues.

    Observability is a major prerequisite for attaining comprehensive resilience, or any form of resilience at all. It is therefore an area I intend to emphasize and concentrate on.

  • Performance

    Definition: Performance in software engineering refers to the efficiency and speed at which a software system or application executes tasks and processes data. It involves optimizing code, algorithms, and infrastructure to achieve optimal response times and resource utilization.

    In my previous endeavors, my focus was primarily on creating products that fulfilled their intended purpose, without paying much attention to how long anything took—whether a millisecond or a few minutes. With Reach, I aim to rectify that. I want every aspect not merely to fall within the acceptable range but to land firmly in the "hot damn" category.

Basic Architecture

The following diagram provides a concise overview of the cloud architecture employed in this project. We will delve into the rationale behind these choices and discuss the utilized technologies immediately following this illustration.

[Architecture diagram]

Remix

Choosing Remix over Next.js comes down to several key factors:

  1. Simpler APIs:

    Remix offers simpler APIs, streamlining the development process. Its API design is intuitive and user-friendly, making it easier for developers to understand and work with.

  2. Pure MPA Experience:

    Remix provides a pure Multi-Page Application (MPA) experience. This approach aligns with specific project requirements that benefit from a multi-page structure without the need for a client-side routing paradigm, offering a straightforward and efficient solution.

  3. Uses Stable and Reliable Web APIs:

    Remix relies on stable and reliable web APIs, contributing to a robust foundation for web development. By leveraging well-established standards, Remix ensures compatibility and stability, reducing the likelihood of compatibility issues and enhancing the overall reliability of the application.

In summary, Remix stands out over Next.js due to its simpler APIs, commitment to a pure MPA experience, and reliance on stable and reliable web APIs. These factors collectively contribute to a more streamlined and dependable development process.
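To make the "stable web APIs" point concrete: a Remix loader is just a function that receives a standard `Request` and returns a standard `Response`, both from the Web Fetch API. The sketch below uses no Remix imports at all; the route shape and data are hypothetical stand-ins.

```typescript
// A Remix-style loader: standard Request in, standard Response out.
// The route and payload are hypothetical; in a real Remix app this would
// live in a file under app/routes/ and could use @remix-run/node helpers.
async function loader({ request }: { request: Request }): Promise<Response> {
  const url = new URL(request.url);
  const page = Number(url.searchParams.get("page") ?? "1");

  // Normally this would query a database; here we fake the data.
  const posts = [{ id: 1, title: "Hello" }].slice(0, page);

  return new Response(JSON.stringify({ page, posts }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
```

Because it is built on the Fetch standard, the function is trivial to unit-test: construct a `Request`, call the loader, and inspect the `Response`.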

Microservices

Microservices architecture offers several advantages over a monolithic approach, especially in scenarios where project features are complex and require a high level of separation. Here are some key reasons why microservices might be preferable:

  1. Great Separation of Concerns for Complex Features:

    Microservices excel in providing a high degree of separation of concerns. In projects with complex features, breaking down the functionality into individual services allows for better isolation and management. Each microservice can focus on a specific feature or business capability, making the system more modular and maintainable.

  2. Variable Traffic and Horizontal Scaling:

    Microservices are well-suited for scenarios where traffic across different features can vary significantly. Since individual microservices can be scaled independently, this architecture provides a more efficient and flexible solution for handling varying workloads. It enables horizontal scaling, allowing you to allocate resources dynamically to the services that need them most.

  3. Granular Feature Control:

    Microservices enable fine-grained control over features. In projects where features can be turned on or off based on user requirements, microservices provide the granularity needed for managing feature flags. This allows for better control over the functionality offered to users and simplifies the process of rolling out new features or changes incrementally.

  4. Technology Stack Diversity:

    Microservices afford the flexibility to use different technology stacks for different services. This is particularly advantageous when certain technologies are better suited to specific tasks. With a microservices architecture, you can choose the most appropriate technology for each service, optimizing for performance, scalability, and developer expertise.

In summary, for projects with complex features, variable traffic patterns, and a need for technology diversity, microservices architecture provides a scalable and flexible solution. The ability to separate concerns, scale horizontally, and choose diverse technology stacks makes microservices well-suited for addressing the challenges posed by intricate and dynamic project requirements.

Microservices complicate resilience and observability by introducing distributed complexities. Decentralized logs, numerous network dependencies, and diverse technology stacks hinder seamless error handling and real-time monitoring. Identifying and addressing issues promptly becomes challenging, impacting the overall resilience and observability of the microservices architecture.
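One small, concrete piece of that resilience story is retrying transient failures between services. Here is a minimal retry-with-backoff wrapper as I would sketch it in TypeScript; the attempt count and delays are illustrative defaults, not tuned values.

```typescript
// Retry an async operation with exponential backoff.
// attempts and baseDelayMs are illustrative defaults, not tuned values.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i === attempts - 1) break; // out of attempts, don't sleep again
      // Wait 100ms, 200ms, 400ms, ... before the next attempt.
      const delay = baseDelayMs * 2 ** i;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

In a real setup this would wrap the gRPC client call, ideally combined with a circuit breaker so a persistently failing service isn't hammered with retries.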

Using a proxy

Employing a proxy between your frontend and various microservices offers several advantages:

  1. Only One Service Exposed to the Internet:

    By using a proxy, you can expose only the proxy service to the internet. This adds an additional layer of security, reducing the attack surface and potential vulnerabilities. The internal microservices can remain hidden behind the proxy, enhancing the overall security posture of your application.

  2. Separation of Concerns:

    A proxy allows for a clear separation of concerns between the frontend and microservices. This separation facilitates a modular and scalable architecture, making it easier to manage and update individual components without affecting the entire system.

  3. Single Point for Caching:

    Having a proxy enables you to centralize caching logic. Instead of implementing caching mechanisms in each microservice, the proxy can handle caching at a single point. This not only simplifies cache management but also improves efficiency and reduces redundancy in caching efforts.

  4. Browser Support for gRPC Isn't Great:

    Utilizing a proxy becomes particularly valuable when dealing with microservices that communicate using gRPC and Protobufs. As browser support for gRPC isn't as widespread, the proxy acts as an intermediary, allowing you to communicate with microservices seamlessly while ensuring compatibility with browsers that may not fully support gRPC.

In summary, employing a proxy between your frontend and microservices offers benefits such as enhanced security, clear separation of concerns, centralized caching, and improved compatibility with browser technologies like gRPC and Protobufs.
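The routing half of such a proxy can be boiled down to a pure function that maps an incoming path to an internal service. The service names and ports below are hypothetical stand-ins, not Reach's actual topology.

```typescript
// Hypothetical internal services behind the proxy; names and ports are
// illustrative only.
const routes: Record<string, string> = {
  "/api/posts": "http://posts-service:5001",
  "/api/users": "http://users-service:5002",
};

// Pick the upstream for an incoming path using longest-prefix matching.
function upstreamFor(path: string): string | null {
  const match = Object.keys(routes)
    .filter((prefix) => path === prefix || path.startsWith(prefix + "/"))
    .sort((a, b) => b.length - a.length)[0];
  return match ? routes[match] : null;
}
```

The real proxy would then forward the request (translating HTTP to gRPC where needed); paths that fall through to `null` become a 404 at the edge without ever touching an internal service.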

gRPC

gRPC emerges as a superior choice for microservices communication due to several key factors:

  1. Performance with Protobufs and HTTP/2:

    Leveraging Protocol Buffers (Protobufs) and HTTP/2, gRPC offers superior performance. Protobufs, being more compact than JSON, reduce payload size, and HTTP/2 enables multiplexing, resulting in faster, more efficient communication between microservices.

  2. Protobuf Over JSON:

    Protobuf's binary serialization is more efficient than JSON, reducing bandwidth and enhancing serialization/deserialization speed. This efficiency is crucial in microservices environments where minimizing data transfer overhead is paramount.

  3. HTTP/2:

    gRPC utilizes HTTP/2 as its transport protocol, providing features like multiplexing, header compression, and flow control. These features contribute to improved resource utilization, reduced latency, and better handling of concurrent requests—essential for microservices communication.

  4. End-to-End Type Safety:

    gRPC provides strong typing through Protobuf's schema definition. This end-to-end type safety ensures that communication between microservices is well-defined, reducing the likelihood of errors related to data inconsistencies and enhancing overall system reliability.

In summary, gRPC's performance optimizations with Protobufs and HTTP/2, efficient serialization with Protobufs, utilization of HTTP/2 features, and end-to-end type safety make it a superior option for seamless and efficient communication between microservices in distributed architectures.
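To make the "end-to-end type safety" point concrete, this is roughly what a minimal service contract looks like in Protobuf. The service and fields here are hypothetical, not Reach's actual schema; the key idea is that both the server stubs and every client are generated from this one file.

```protobuf
syntax = "proto3";

package posts.v1;

// Hypothetical example schema. The field numbers identify fields on the
// wire, which is what makes the binary encoding so compact.
message GetPostRequest {
  string post_id = 1;
}

message GetPostResponse {
  string post_id = 1;
  string title = 2;
  string body = 3;
}

service PostService {
  rpc GetPost(GetPostRequest) returns (GetPostResponse);
}
```

Running this through `protoc` (or Buf) produces typed clients and server interfaces, so a mismatch between what one service sends and another expects shows up at compile time rather than in production.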

💡
While I had the option to employ RESTful APIs instead of gRPC, a crucial aspect of this project involves exploring technologies that genuinely intrigue me. gRPC, a technology that has captivated my interest for some time, aligns with this learning objective, prompting my decision to incorporate it into the project.

Postgres (with Redis)

Honestly, there's no clear winner, but on a serious note, my aim is to enhance my SQL skills, and PostgreSQL encompasses all the features necessary for this project at a moderate scale. Given my limited experience in evaluating the performance and scalability of various databases, I've decided to opt for PostgreSQL as my primary database choice.
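The usual way Redis slots in next to Postgres is the cache-aside pattern: check the cache, fall back to the database, then populate the cache with the result. The sketch below fakes both stores with in-memory stand-ins (a `Map` for Redis, a lookup function for Postgres) purely to show the control flow.

```typescript
// Cache-aside lookup: try the cache first, fall back to the source of
// truth, and write the result back into the cache.
// `cache` stands in for Redis and `loadFromDb` for a Postgres query;
// both are hypothetical stand-ins to keep the sketch self-contained.
function getWithCache<T>(
  key: string,
  cache: Map<string, T>,
  loadFromDb: (key: string) => T,
): T {
  const cached = cache.get(key);
  if (cached !== undefined) return cached;

  const fresh = loadFromDb(key);
  cache.set(key, fresh); // with real Redis this set would carry a TTL
  return fresh;
}
```

With real clients this becomes async (a Redis `GET`/`SET` and a parameterized SQL query), and cache invalidation on writes is the part that actually takes care.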

R2 over S3

Opting for Cloudflare R2 with AWS Glacier backup over S3 offers several advantages:

  1. Pricing:

    Cloudflare R2, coupled with AWS Glacier, can often present a more cost-effective storage solution compared to S3, especially for long-term archival needs. R2 notably charges no egress fees, and Glacier's pricing model, designed for infrequently accessed data, keeps the cost of the backup copy low.

  2. Data Duplication Safety with Two Vendors:

    Utilizing Cloudflare and AWS as separate vendors introduces redundancy and data duplication safeguards. Storing data in multiple locations enhances data resilience and provides an extra layer of protection against potential data loss or service disruptions.

  3. Great CDN Support with Cloudflare:

    Cloudflare excels in providing robust Content Delivery Network (CDN) support. This can significantly enhance the performance and availability of your data, ensuring faster and more reliable access for users across the globe. S3 also supports CDN, but Cloudflare's specialized focus on CDN services can offer additional benefits.

It's important to note that the choice between Cloudflare R2 with AWS Glacier and S3 depends on specific project requirements, including the frequency of data access, budget considerations, and the need for CDN support. Each solution has its strengths, and the decision should align with the unique demands of your storage and access patterns.
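Because R2 exposes an S3-compatible API, wiring it up is mostly configuration: the standard AWS SDK v3 client works against it with a custom endpoint. This is a configuration sketch only; the account ID, bucket name, and environment variable names are placeholders, and the Glacier backup would be a separate replication job rather than part of this client.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// R2 speaks the S3 API, so the AWS SDK works with a custom endpoint.
// Account ID, bucket, and env var names below are placeholders.
const r2 = new S3Client({
  region: "auto", // R2 expects "auto" instead of a real AWS region
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
  },
});

// Upload a single object; replication to Glacier happens out of band.
async function uploadAsset(key: string, body: Buffer): Promise<void> {
  await r2.send(
    new PutObjectCommand({
      Bucket: "reach-assets", // hypothetical bucket name
      Key: key,
      Body: body,
    }),
  );
}
```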

Other small stuff

Apart from the technologies mentioned above, I also use the following languages, libraries, and tools:

  • Resend

  • Railway (for a temporary dev setup)

  • shadcn/ui

  • Tailwind

  • GitHub

  • Postman

  • SCSS

  • @mantine/hooks

💡
This is the first version of the tech stack reveal

Conclusion

In wrapping up this exploration of Reach's tech stack choices and the ideologies behind them, it becomes evident that the decisions made today are not just about the tools we employ but about crafting a foundation for the future. This marks the beginning of a journey where performance, resilience, and observability converge. The choices we've examined are pivotal, setting the stage for a deeper dive into optimizing performance and hinting at the integral role that resilience will play in upcoming discussions. Stay tuned for a later post, where I bring all three ideologies together, forging a path toward a SaaS tech stack that transcends expectations.

Did you find this article valuable?

Support Iresh Sharma by becoming a sponsor. Any amount is appreciated!