I enjoy philosophy. Stoic philosophy in particular.
Philosophy, I think, helps us revalidate our purpose. It acts as a yardstick and makes sure that we are not drifting away from our first principles.
Applying the same to Software Engineering: in my opinion, every team should have a “Design Philosophy”, that one yardstick teams can use to make better decisions.
In fact, this is already done in some form in a few cases. Some call it Guiding Principles. Some call it MVPs. I call it “Design Philosophy“.
The core idea is: whenever a decision has to be made, passing it through this “Design Philosophy” should produce the same result, irrespective of who is making that decision.
As Engineers, we like equations, formulas and unambiguous ways of thinking. A written form of these Design Philosophies does a lot of good in helping teams make the right decisions at a great pace. It would be unfair for all Engineering teams to use the same yardstick, so each team should write its own.
Below is my (opinionated) version for an Observability Engineering team.
- Low latency is an important feature for Observability signals. Ingested observability data should be available to users as soon as possible.
- The Observability tooling system is the torch in the dark. High reliability is a must; it cannot fail when the Platform / Application fails.
    - which means the Observability stack CANNOT fail when applications fail
    - which means, ideally, the Observability stack shouldn’t run entirely on the same platform as the Applications it observes.
    - which means Observability Vendors (buy decisions) are not a bad choice. The choice of a vendor should be cost-effective for the Org.
    - for the O11y solutions that we decide to build in-house, isolation is key.
- Our O11y stack should support availability, reliability and performance “cost-effectively“ at scale.
- All the tools that we build and maintain should be vendor agnostic (SDKs, collectors, refinery, etc.).
- The rate of decay of data is fast in O11y. People care far more about the last 1 hour / 1 day of O11y data than about last month’s data.
- Optimising for cost in observability usually results in having more than one tool. While we can have different tools, we shouldn’t have multiple tools that do the same thing. Example:
    - Metrics → Prometheus, Traces → Jaeger (Fine)
    - Logs → ELK, Logs → Splunk (NOT Fine)
- Tools change. The tools we have today for a specific function might change to something else in a year or two. The Observability team should strive to make such changes as minimally disruptive as possible.
- There is a clear viewpoint on “what kind of observability signal has to go where”. (Details on this here) Example:
    - count → metrics
    - time → traces
    - high cardinality → logs
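The routing rules above can be made mechanical. Here is a minimal sketch in Python; the function name and inputs are illustrative, not from any real library:

```python
def route_signal(kind: str, high_cardinality: bool = False) -> str:
    """Encode the routing rules: counts go to metrics, timings go to
    traces, and anything high-cardinality belongs in logs."""
    if high_cardinality:
        # High-cardinality data would explode a metrics backend,
        # so it always goes to logs regardless of kind.
        return "log"
    if kind == "count":
        return "metric"
    if kind == "time":
        return "trace"
    # Anything else is safest as a log by default.
    return "log"

print(route_signal("count"))                          # metric
print(route_signal("time"))                           # trace
print(route_signal("count", high_cardinality=True))   # log
```

The point is not this particular function but that the rule is written down once, so every engineer routing a new signal gets the same answer.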
These are the elements I use when making an Observability decision. They might vary for a different team in a different situation. But the point I am really trying to make is: have a design philosophy that makes decision-making easier.
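To ground one of the bullets above in code: for the tools we own, “vendor agnostic” can be as simple as depending on a narrow interface, with the vendor-specific exporter as one swappable implementation behind it. This is only a sketch; all class names here are hypothetical, not a real SDK.

```python
from abc import ABC, abstractmethod

class MetricExporter(ABC):
    """The narrow interface our in-house tooling codes against."""
    @abstractmethod
    def export(self, name: str, value: float) -> None: ...

class VendorAExporter(MetricExporter):
    """Hypothetical vendor backend; changing vendors means
    swapping this one class, not touching the instrumentation."""
    def __init__(self):
        self.sent = []
    def export(self, name: str, value: float) -> None:
        self.sent.append((name, value))  # stand-in for a real vendor API call

class Counter:
    """Our instrumentation depends only on the interface, never the vendor."""
    def __init__(self, exporter: MetricExporter):
        self._exporter = exporter
    def incr(self, name: str, by: float = 1.0) -> None:
        self._exporter.export(name, by)

exporter = VendorAExporter()
Counter(exporter).incr("http.requests")
print(exporter.sent)  # [('http.requests', 1.0)]
```

This is also what makes the earlier bullet about tools changing tractable: when the vendor behind the interface changes, the blast radius is one class.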

