Share your experience using Interface Masters NPB

The easiest route - we'll conduct a 15-minute phone interview and write up the review for you.

Use our online form to submit your review. It's quick and you can post anonymously.

Your review helps others learn about this solution
The PeerSpot community is built upon trust and sharing with peers.
It's good for your career
In today's digital world, your review shows you have valuable expertise.
You can influence the market
Vendors read their reviews and make improvements based on your feedback.
Examples of the 84,000+ reviews on PeerSpot:

Cyber Security Expert at MOF
Real User
Simplifies troubleshooting and helps to avoid encryption at multiple layers
Pros and Cons
  • "The tool's most valuable feature is the encryption feature. From a security perspective, the solution hasn't significantly strengthened our security posture. However, it has greatly improved performance by streamlining encryption processes and avoiding encryption at multiple layers. This has also simplified troubleshooting, as we can whitelist certain processes."
  • "The Gigamon Deep Observability Pipeline should have a feature showing the traffic flow within its platform. Currently, customers have to use separate tools for monitoring, which is inconvenient. If it had its visibility feature, it would make monitoring easier and more complete without needing extra tools."

What is our primary use case?

We use the solution for traffic grouping and SSL detection. 

What is most valuable?

The tool's most valuable feature is its encryption capability. From a security perspective, the solution hasn't significantly strengthened our security posture, but it has greatly improved performance by streamlining encryption processes and avoiding encryption at multiple layers. This has also simplified troubleshooting, as we can whitelist certain processes.


The traffic aggregation and transformation feature has significantly impacted our analysis process. The tool helps us investigate our network packet capture. Data aggregation occurs at the network packet capture level, enabling thorough investigation. However, the tool lacks intelligence in providing visibility or traffic flow analysis. Instead, we use other tools to enhance our visibility and analysis based on the captured data.
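The reviewer describes a workflow in which the pipeline aggregates traffic at the packet-capture level and separate tools provide the actual visibility and analysis. As a purely illustrative sketch of that downstream step (the capture filename and the use of Python with scapy are assumptions for the example, not part of the Gigamon product), a capture exported from the broker could be summarized like this before deeper investigation:

```python
# Illustrative sketch only: summarize an aggregated capture before handing it
# to dedicated analysis tools. Assumes scapy is installed and that
# "aggregated_capture.pcap" is a hypothetical file exported from the broker.
from collections import Counter
from scapy.all import rdpcap, IP

packets = rdpcap("aggregated_capture.pcap")

# Count packets per (source, destination, protocol) tuple as a quick flow overview.
flows = Counter()
for pkt in packets:
    if IP in pkt:
        flows[(pkt[IP].src, pkt[IP].dst, pkt[IP].proto)] += 1

for (src, dst, proto), count in flows.most_common(10):
    print(f"{src} -> {dst} (proto {proto}): {count} packets")
```

This is roughly the point at which, as the review notes, dedicated visibility and analysis tools take over from the packet broker.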

What needs improvement?

The Gigamon Deep Observability Pipeline should have a feature that shows traffic flow within its platform. Currently, customers have to use separate tools for monitoring, which is inconvenient. If it had its own visibility feature, monitoring would be easier and more complete without needing extra tools.

For how long have I used the solution?

I have been using the product for three years. 

What do I think about the stability of the solution?

The tool is stable, and I haven't encountered any issues. I rate it a nine out of ten. 

What do I think about the scalability of the solution?

Scalability depends on the specific hardware model deployed. In our case, we didn't encounter any scalability issues, and for virtualization, scalability was not a problem. Overall, I would rate the scalability of the Gigamon Deep Observability Pipeline at around eight or nine out of ten, as it's straightforward to scale up in cloud environments by adding virtual machines.

On-premise deployments can have scalability challenges if the hardware is outdated or at the end of its lifecycle. Adding more capacity isn't always possible—you may need to replace or upgrade the hardware.

How are customer service and support?

Support from the product has not been very good. They outsource their support to third-party vendors, which makes it difficult to receive direct assistance. Instead, we have to go through intermediaries, such as partners or vendors, which can be challenging and may not always result in satisfactory support.

How would you rate customer service and support?

Neutral

How was the initial setup?

The tool's deployment is difficult. There are multiple dependencies, especially with certificates; it didn't support some of our certificates, so we had to upgrade them. From a design perspective, the physical setup also changed significantly: we needed more cables and connections, and it wasn't a simple plug-and-play process. Implementing the product required downtime, usually around four to eight hours, which needed careful planning. Overall, the setup was on the tough side rather than straightforward. Understanding the current design, planning, and implementation took us almost two months.

What about the implementation team?

Two resources from our side were involved in deploying the product, and two resources from the third-party vendor were working on the deployment. The entire process, from planning to implementation, took two to three months. This duration included planning, designing, obtaining change approvals, and making necessary network changes.

What's my experience with pricing, setup cost, and licensing?

I would rate the solution as expensive, around an eight or nine out of ten. There are other competitive solutions available.

What other advice do I have?

Gigamon Deep Observability Pipeline has not significantly improved network visibility because it functions primarily as a packet broker. It does not provide visibility directly; instead, it requires integration with third-party tools to gain that visibility.

Overall, it's a good solution, but there's room for improvement, particularly around configuration and data visibility. Currently, there's a lack of data visibility directly from the appliance itself, which needs to be addressed. I rate it an eight out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Jeroen-Dubbelman
Director at Bitrate
Reseller
Top 5 | Leaderboard
Aids significantly in the threat-hunting process and provides a score-based evaluation of user experience
Pros and Cons
  • "There are many valuable features, but understanding end-user response times stands out. It provides a score-based evaluation of user experience, helping customers quickly pinpoint whether issues originate from the network, server, client, or application. Additionally, it facilitates in-depth analysis of application dependencies."
  • "GigaStor feeds into Apex. So, the area where there could be improvement would be in artificial intelligence. For example, the incorporation of more advanced machine learning or AI capabilities could enhance its functionality."

How has it helped my organization?

Observer GigaStor has aided our clients in enhancing their network performance monitoring efforts.

It has significantly helped them understand their network environment and the delivery of business applications to their end-users. It's particularly effective in managing voice or unified communications environments. 

Depending on sizing and configuration, it can record network traffic over extensive periods, allowing users to retrospectively identify and analyze issues, which, in turn, helps them improve their network environment and reduces their mean time to repair.

While it's not directly related to enhancing security posture, it aids significantly in the threat-hunting process. If clients are concerned about specific network traffic, they can analyze it further to identify and address issues.

Moreover, Observer GigaStor's storage capacity meets our clients' data retention requirements. The storage capacity varies based on the model, ranging from 24 terabytes to 512 terabytes. 

Its scalability allows clients to tailor the solution to their specific needs, whether that's deploying units across different sections of a data centre or multiple data centres, ensuring it can meet any customer's storage requirements within their budget.
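To put those capacity figures in perspective, here is a rough, back-of-the-envelope sketch. The 24 TB and 512 TB sizes come from the review above; the sustained capture rates of 1 Gbps and 10 Gbps are illustrative assumptions only, not figures from the reviewer:

```python
# Illustrative retention estimate: how many days of full packet capture a
# given storage size could hold at an assumed sustained capture rate.

def retention_days(storage_tb: float, capture_rate_gbps: float) -> float:
    """Days of capture that fit in storage_tb at capture_rate_gbps sustained."""
    bytes_per_day = capture_rate_gbps / 8 * 1e9 * 86_400  # Gbps -> bytes/day
    return storage_tb * 1e12 / bytes_per_day

for storage in (24, 512):        # model range cited in the review, in TB
    for rate in (1, 10):         # assumed sustained capture rates, in Gbps
        print(f"{storage} TB at {rate} Gbps ~ {retention_days(storage, rate):.1f} days")
```

Under these assumptions, 24 TB holds roughly two days of 1 Gbps traffic, while 512 TB holds about a month and a half, which is why sizing against the expected capture rate matters when matching a model to a retention requirement.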

What is most valuable?

There are many valuable features, but understanding end-user response times stands out. It provides a score-based evaluation of user experience, helping customers quickly pinpoint whether issues originate from the network, server, client, or application. Additionally, it facilitates in-depth analysis of application dependencies.

The retrospective network analysis capability of Observer GigaStor has been beneficial. It's a pivotal feature for network performance solutions. However, I prefer to consider the entire solution suite - GigaStor, along with GigaFlow and Apex - for comprehensive network visibility and monitoring.  

So, I prefer to integrate all three solutions to provide the best visibility. They complement each other well, with Apex aggregating and correlating data from both GigaStor and GigaFlow to enhance environmental understanding and troubleshooting efficiency.

What needs improvement?

GigaStor feeds into Apex, so the area where there could be improvement is artificial intelligence. For example, incorporating more advanced machine learning or AI capabilities could enhance its functionality. This improvement would further refine the system's ability to understand and interpret network behaviours and endpoint interactions.

For how long have I used the solution?

We've been utilizing the solution for approximately three to four years.

What do I think about the stability of the solution?

Stability is very high, for sure. I would rate it a ten out of ten. 

What do I think about the scalability of the solution?

I would rate the scalability a ten out of ten; it's no problem. The solution is aimed only at enterprise businesses.

How are customer service and support?

Support can get complicated if the issue is complicated. But then again, what you lose in support, you gain in getting the job done in the end, because you get extreme visibility.

How would you rate customer service and support?

Positive

How was the initial setup?

The complexity of the setup depends on what you want to achieve. I would rate my experience with the initial setup an eight out of ten, with ten being easy to set up. It's quite easy to configure, but it does require knowledge; you need to be a reasonably capable engineer to configure it properly.

We mainly do on-premises deployments at the moment, but it doesn't matter; the solution can run in the cloud as well.

What about the implementation team?

The deployment effort depends on what you're asking the system to do or what you want out of it. Because your environment is constantly changing, you may find yourself continuously configuring the device, so configuration takes a certain amount of time. The sales team likes to say it can be configured in hours, and that's true to a certain extent, but if you want the best out of the tool, you'll be continuously configuring it.

The same logic applies to the integration capabilities of this product: it depends on what you want and how much visibility you need. The configuration itself is actually quite easy, but it does require good skills. I rate the integration capabilities a ten out of ten.

What's my experience with pricing, setup cost, and licensing?

I would rate the pricing a ten out of ten. It's quite expensive. 

What other advice do I have?

I would recommend Observer GigaStor to others for network visibility and security. It's pretty much the best system available in this space.

Overall, I would rate the solution a ten out of ten. It is like the saying, "If you are going to be expensive, you are going to be the best." 

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller