We use SCOM for monitoring our servers in one place.
The most valuable feature in SCOM is its integration with Azure Monitor, which lets us monitor Azure-hosted servers from our on-premises SCOM deployment.
In terms of improvement, SCOM lacks direct integration with third-party tools such as ticketing systems; adding that would be beneficial.
I have been working with SCOM for a year.
SCOM is generally stable, with occasional glitches, particularly downtime during patching. We provide DR services from a single data center, which helps with scale but leaves no redundancy. Overall, I would rate the stability at around nine out of ten.
I would rate the scalability of SCOM as an eight out of ten. In our company, we manage 500 servers using SCOM.
The technical support is good.
Setting up SCOM is straightforward. We decide on the number of servers to monitor, configure the management servers and gateway services, and install SCOM using the setup executable. Onboarding servers is done either by script or through the SCOM console; the process is simple, with no real complexity.
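Scripted onboarding typically means running an unattended install of the SCOM agent (MOMAgent.msi) on each server. As a minimal sketch, the helper below builds such a command line using Microsoft's documented MSI properties; the management group and server names are placeholders, not values from this review.

```python
# Hypothetical helper: builds an unattended SCOM agent install command.
# Property names follow the documented MOMAgent.msi command-line options;
# all names and values here are illustrative placeholders.
def build_agent_install_cmd(management_group, management_server,
                            msi_path="MOMAgent.msi"):
    props = {
        "USE_SETTINGS_FROM_AD": "0",            # specify settings manually
        "USE_MANUALLY_SPECIFIED_SETTINGS": "1",
        "MANAGEMENT_GROUP": management_group,
        "MANAGEMENT_SERVER_DNS": management_server,
        "SECURE_PORT": "5723",                  # default SCOM agent port
        "ACTIONS_USE_COMPUTER_ACCOUNT": "1",    # run actions as Local System
        "AcceptEndUserLicenseAgreement": "1",
    }
    prop_str = " ".join(f"{k}={v}" for k, v in props.items())
    return f"msiexec /i {msi_path} /qn {prop_str}"

cmd = build_agent_install_cmd("CONTOSO_MG", "scom-ms01.contoso.local")
print(cmd)
```

A script like this can be pushed to many servers at once by whatever deployment tooling is already in place, which is what makes bulk onboarding easy.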
Compared to competitors like AppDynamics, SCOM is better for monitoring Microsoft services and roles. Pricing is agent-based, with no bulk licensing option, but it is generally reasonable.
SCOM improves our operations by centralizing server monitoring in one place.
The alerting capabilities in SCOM are helpful for our organization. The notification feature works well and is beneficial for keeping us informed.
We use SCOM's default reporting, which is built on SQL Server Reporting Services. It is user-friendly and integrates well with SCOM.
Integrating SCOM into our current IT environment was easy.
Overall, I would rate SCOM as a nine out of ten. I would recommend it to others.
We use the solution for traffic grouping and SSL detection.
The tool's most valuable feature is its encryption handling. From a security perspective, the solution hasn't significantly strengthened our security posture. However, it has greatly improved performance by streamlining encryption and avoiding encrypting traffic at multiple layers. It has also simplified troubleshooting, since we can whitelist certain processes.
The traffic aggregation and transformation feature has significantly impacted our analysis process: data is aggregated at the network packet-capture level, which enables thorough investigation. However, the tool lacks built-in intelligence for visibility or traffic-flow analysis, so we rely on other tools on top of the captured data.
The Gigamon Deep Observability Pipeline should show traffic flow within its own platform. Currently, customers have to use separate tools for monitoring, which is inconvenient. If it had its own visibility feature, monitoring would be easier and more complete without needing extra tools.
I have been using the product for three years.
The tool is stable, and I haven't encountered any issues. I rate it a nine out of ten.
Scalability depends on the specific hardware model deployed. In our case, we didn't encounter any scalability issues, and for virtualization, scalability was not a problem. Overall, I would rate the scalability of the Gigamon Deep Observability pipeline at around eight or nine out of ten, as it's straightforward to scale up in cloud environments by adding virtual machines.
On-premise deployments can have scalability challenges if the hardware is outdated or at the end of its lifecycle. Adding more capacity isn't always possible—you may need to replace or upgrade the hardware.
Support has not been very good. Gigamon outsources support to third-party vendors, which makes it difficult to get direct assistance. Instead, we have to go through intermediaries, such as partners or vendors, which can be challenging and does not always result in satisfactory support.
The tool's deployment is difficult. There are multiple dependencies, especially with certificates. It didn't support some certificates, so we had to upgrade them. Also, from a design perspective, the physical setup changed significantly. We needed more cables and connections, and it wasn't a simple plug-and-play process. Implementing the product required downtime, usually around four to eight hours, which needed careful planning. Overall, it wasn't straightforward but more on the tough side. Understanding the current design, planning, and implementation took almost two months for us.
Two resources from our side were involved in deploying the product, and two resources from the third-party vendor were working on the deployment. The entire process, from planning to implementation, took two to three months. This duration included planning, designing, obtaining change approvals, and making necessary network changes.
I would rate the solution as expensive, around an eight or nine out of ten. There are other competitive solutions available.
Gigamon Deep Observability Pipeline has not significantly improved network visibility because it functions primarily as a packet broker. It does not provide visibility directly; it requires integration with third-party tools to deliver it.
Overall, it's a good solution, but there's room for improvement, particularly in configuration competency and data visibility. Currently, there's a lack of data visibility directly from the appliance itself, which needs to be addressed. I rate it an eight out of ten.