I lead the performance engineering practice at my current employer. We use Honeycomb Enterprise for tracing; in short, it is our application performance management (APM) tool. Our client has other APM tools such as Datadog, but with Datadog alone we only have counter-level monitoring. We do not have the agent-level monitoring that Honeycomb Enterprise provides, where we can see traces for each call the software makes and pinpoint where it is spending time. To fill that gap, they run Honeycomb Enterprise alongside Datadog. We use it for the same purpose: Honeycomb hooks into our applications and shows us the traces where a request is spending its time. With Datadog, we often run into cardinality restrictions because of its billing model; that limitation is the main point of comparison between the two. I should add that the team here had tried Honeycomb Enterprise for tracing earlier and ran into issues; they could not get proper tracing at that time. That is the feedback I was given. My main focus is the tracing part. We have a microservices architecture, and we want to see where time is spent as a request flows across multiple services. Honeycomb Enterprise captures a trace and its spans, and from there we look at the milliseconds or seconds spent on a particular request. That is what we look at and what we are interested in.
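The trace-and-span model described above can be sketched in a few lines: a trace groups timed spans, one per service call, so you can see where a request spends its time. This is an illustrative stand-alone sketch only; in practice Honeycomb ingests spans via OpenTelemetry instrumentation rather than hand-rolled timers, and the service names below are made up.

```python
import time
import uuid


class Span:
    """One timed unit of work inside a trace (hypothetical minimal model)."""

    def __init__(self, trace_id, name):
        self.trace_id = trace_id
        self.name = name
        self.start = time.perf_counter()
        self.duration_ms = None

    def finish(self):
        self.duration_ms = (time.perf_counter() - self.start) * 1000.0
        return self


def traced_request(service_calls):
    """Run (name, fn) calls under one shared trace id; return finished spans."""
    trace_id = uuid.uuid4().hex
    spans = []
    for name, fn in service_calls:
        span = Span(trace_id, name)
        fn()  # stand-in for the downstream service call
        spans.append(span.finish())
    return spans


# Simulate a request that fans out across two services.
spans = traced_request([
    ("auth-service", lambda: time.sleep(0.01)),
    ("pricing-service", lambda: time.sleep(0.03)),
])

# The question the review describes: which span ate the time?
slowest = max(spans, key=lambda s: s.duration_ms)
print(slowest.name)  # prints "pricing-service"
```

The shared `trace_id` is what lets a backend stitch spans from different services back into one request timeline.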
Software Engineer at a financial services firm with 11-50 employees
Real User
Top 20
Feb 6, 2026
We were building a product for one of the biggest wealth management platforms in the world, an American wealth management platform. For them, it was really important for the product to be reliable and to set up KPIs, especially for vendors like us who worked for them. The debugging process usually involved Splunk Cloud or Honeycomb Enterprise traces. Whenever I was looking at an issue, I usually went through the traces, because it was a microservices architecture. It often helped me understand the call chain: for example, if there were 10 microservices calling each other in some order, being able to visualize and step through that was pretty useful.
Although Grit is a tool for code migration and management of technical debt across large chunks of work, we reviewed Grit for the use case of assisting in faster remediation of vulnerable libraries. We examined three areas where the synergy of Grit.io with Snyk.io helps overcome Snyk's limitations: 1. Deep scanning and reachability analysis, 2. Management of auto-generated pull requests (PRs), 3. Reduction of false positives. I am connected with and have interacted with the founder, Morgante Pell; I designed a comprehensive synergistic solution and wrote a 35+ page technical paper on this topic.
The solution is mainly used for stack observability: observing service behavior and any kind of failure that may be happening. The tool is also used for research. My company is expanding its use, and I have been working on defining SLOs with it for the last seven months.
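The SLO work mentioned above usually reduces to error-budget arithmetic: an availability target implies a budget of allowed failures, and you track how much of it a window's failures have consumed. The sketch below is generic, not tied to any particular tool's API, and the 99.9% target and request counts are made-up numbers.

```python
def error_budget_consumed(total_requests, failed_requests, slo_target):
    """Fraction of the error budget used up (1.0 means the budget is exhausted)."""
    allowed_failures = total_requests * (1.0 - slo_target)
    return failed_requests / allowed_failures


# Hypothetical 30-day window against a 99.9% availability SLO:
# 1,000,000 requests allow 1,000 failures; 250 were observed.
used = error_budget_consumed(
    total_requests=1_000_000,
    failed_requests=250,
    slo_target=0.999,
)
print(f"{used:.0%} of the error budget consumed")  # prints "25% of the error budget consumed"
```

Tracking this fraction over time is what turns an SLO from a target into an operational signal, e.g. for deciding when to freeze risky releases.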
Honeycomb Enterprise is designed to optimize performance visibility, offering a robust platform for distributed system observability. It provides insights into complex data and aids in faster issue resolution, making it a valuable tool for IT professionals. This tool is tailored for real-time data tracking and improving system performance efficiency. Enterprises benefit from its capacity to handle large-scale data, ensuring seamless operations and continuity. Honeycomb Enterprise helps teams to...
There aren't any specific use cases for the solution as such. In our company, we use it for SLA- and SLO-related work.