What is our primary use case?
The use cases are based on transformation projects. For example, when clients are trying to implement new deployments in SCCM, they use this solution and we help them with the migration. We are currently helping with a Windows 10 migration. Clients also use it for application and system crashes, in order to maintain the compliance status of their business applications.
On individual machines, we can see the performance, what is going wrong, and what can be improved to enhance the machine's performance. This is very useful when we get queries from a VIP user or someone from our project's top management, because if they get into any trouble, we are able to fix it as soon as possible. If an agent is unable to work for two hours, or even two days, that won't make much of an impact. Whereas, if someone from higher management is unable to work on their system for about two hours, they would lose a lot.
Initially, we were just using it to maintain compliance, and nothing more. The person managing this tool before me concentrated more or less on device compliance. Later on, when I started, we got into application and system crashes as well.
In our environment, we need to put a proxy server in place. Currently, devices are connected via the corporate network or the VPN. We don't have machines reporting from the open network. This is something that we are working on. Once that is in place, we will get all the machines reporting in our environment.
How has it helped my organization?
In the past two years, the biggest benefit is that we have been able to identify 57,000-plus defects. These are possible tickets that we are preventing by using Nexthink.
On a weekly basis, we monitor the trend, and wherever we see a dip on the performance grid, we deep dive into it. We look into what is wrong, then proceed with remediation. We access the output weekly, not in real time.
There have been some system crashes. We have a dashboard for blue screen of death (BSOD) errors. The graph on that dashboard is usually wave-shaped, but I saw a dip in it during the week. We investigated and found that the DLP client was crashing, which was causing the BSODs on the machines. We were able to act on it within three days, and within seven days, the machines that had already crashed were all fixed. Nexthink also helped us prevent the rest of the machines from using the same version of the DLP client, since that version was the cause of the BSODs. The deployment teams went ahead and downgraded the version of the DLP client, then upgraded it once an upgrade became available. This helped us avoid 30,000-plus tickets or contacts from end users.
It has helped us with proactive prevention. There was an investigation into device login times, which were quite high; users were taking a lot of time to log in. In total, 6,567 machines were affected, and we were able to fix the problem for approximately 5,900-plus of them. Those machines do not need to be replaced now, since their performance is better. However, the rest need to be refreshed or replaced. Overall, this provides cost savings.
It is being used by our asset management team. We gave them access to the dashboards and also created some custom dashboards where they can look at the performance of the machines. Based on that, they do a comparison with the list they have available for the hardware refresh. If a machine on their refresh list is, according to our data, performing quite well, it is best that they don't refresh that machine at that particular time. Instead, they refresh another machine that is on our list but not on theirs. This way, it helps both sides.
What is most valuable?
We use the API with PowerShell to automate some reports, which has saved us a lot of time. This used to be a manual effort of approximately four hours twice a week (a total of eight hours a week). We were able to bring that down to one hour a week just to manage the outputs that we get through the APIs.
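The review mentions PowerShell, but the same automation pattern can be sketched in Python. This is a minimal sketch assuming Nexthink's documented NXQL Web API on an Engine appliance; the hostname, port, query, and CSV column names below are placeholders/assumptions, not the reviewer's actual setup.

```python
# Sketch: automating a weekly report from a Nexthink Engine's NXQL API.
# The endpoint path/port and field names are assumptions; adjust to your
# environment. The network call is isolated in fetch_report() so the
# pure helpers can be reused and tested offline.
import csv
import io

ENGINE_HOST = "engine.example.com"   # hypothetical Engine address
NXQL_PORT = 1671                     # commonly documented NXQL API port

def build_query_url(host: str = ENGINE_HOST, port: int = NXQL_PORT) -> str:
    """URL of the NXQL query endpoint on an Engine appliance (assumed path)."""
    return f"https://{host}:{port}/2/query"

def summarize_crashes(csv_text: str) -> dict:
    """Aggregate crash counts per device from a CSV export of a query."""
    totals: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        name = row["name"]
        totals[name] = totals.get(name, 0) + int(row["number_of_system_crashes"])
    return totals

def fetch_report(session, nxql: str) -> dict:
    """Run an NXQL query via an authenticated HTTP session and summarize it."""
    resp = session.get(
        build_query_url(),
        params={"query": nxql, "format": "csv", "platform": "windows"},
    )
    resp.raise_for_status()
    return summarize_crashes(resp.text)
```

In practice, `session` would be something like a `requests.Session` carrying basic-auth credentials for an account with API access; scheduling the script weekly is what replaces the manual report effort the review describes.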
Nexthink provides analytics for detailed event data, giving us the ability to really drill down. We put dashboards in place to find defects, so we can be a little more proactive. We have created some dashboards, like site dashboards, that monitor performance on an individual site, region, or country basis. Wherever we see a dip, we look at the device performance score; a checklist provides us the scores. Along with that, we now have Digital Experience Scores available, which also help us find the reasons why performance could be going down.
The major difference that I see from the Digital Experience Score is when monitoring business applications. It enables us to monitor 17 business applications at the same time. Previously, we were only focusing on some fixed services, like Office 365 services, and we also manually added C2C application services; we weren't actively looking at other business applications beyond that. Now we know which applications perform well and which ones need an upgrade. This also helped the client free up some network bandwidth, since upgrading the application version brought down bandwidth usage.
I don't think that I have ever gotten stuck. Whenever I am trying to look at something related to device performance or system information, I have always been able to get it.
What needs improvement?
I would like it if they could put in some patch deployment and compliance features. I think this is there with Act, but I am not sure how deep it goes.
A module dedicated to monitoring the performance of applications specifically was missing, but I recently got to know that one is already in progress, which is good.
For how long have I used the solution?
I have been managing this tool for my project for over two years.
What do I think about the stability of the solution?
It is quite stable. We don't have any trouble with it. It has never stopped us from finding anything nor impacted anything in production. Even if it is down and we are working on something that requires a reboot, we don't lose anything, because everything is being collected on the endpoints by Nexthink Collector. It collects information continuously, even while we are making changes on the servers. Once the servers are up, everything is reported back into the system. Thus, we never lose any data, nor is there any impact on the information front.
I manage Nexthink. My colleague helps me with the reporting part, but I do the investigations.
What do I think about the scalability of the solution?
The scalability is huge.
We have approximately 700 users with access, and 150 who actively use it on a regular basis. They find it useful when fixing issues for end users.
How are customer service and technical support?
It is the best support that I have received from any application vendor. I am the admin for three applications within our environment, and if I compare them all, Nexthink support is the best. Even if I forget which ticket I have logged, what the issue is, or what the progress is, they don't forget. They come back with a solution, providing continuous follow-ups and continuous improvements. I have never had to skip or cancel any tickets, and I have never had to raise a complaint to anybody. They are available whenever I want; even if somebody has to leave for the day, they make sure that I am contacted at the right time, i.e., when I asked for it.
Which solution did I use previously and why did I switch?
This was the first solution that we used for analytics.
How was the initial setup?
The initial setup is a bit complex for our environment specifically, because we have 50,000-plus endpoints. One engine can only support 10,000 machines, so we needed at least five engines. However, we use seven engines so we can hold more data, plus one appliance for the portal. These are the eight appliances that we are currently managing. Other than that, we have two appliances configured just for the UAT environment, so we can test solutions before putting them into production. Although the hardware part is a bit complex, the architecture is quite simple.
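The sizing arithmetic above (one Engine per 10,000 endpoints, so at least five for 50,000-plus) can be checked with a one-line capacity calculation; the per-engine figure is the one quoted in the review, not an official specification.

```python
# Minimum Engine count for a given endpoint population, using the
# 10,000-endpoints-per-Engine capacity figure cited in the review.
import math

def engines_needed(endpoints: int, per_engine: int = 10_000) -> int:
    """Smallest number of Engines that can cover all endpoints."""
    return math.ceil(endpoints / per_engine)
```

For 50,000 endpoints this gives five Engines as the floor; the deployment described runs seven Engines plus a portal appliance for headroom and extra data retention.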
While I wasn't part of the initial deployment, I know that it didn't take more than seven days. It took approximately a week to configure the appliances and complete the setup. Since then, there have been continuous improvements, and they have kept making changes to the configuration.
We are using the analytics part along with the integration. With the help of the analysis, we currently take action manually by involving the deployment teams and remote desktop support teams for manual remediation. Once a dashboard is in place, it also helps us act very quickly. For example, if I investigate something today, then within two days the deployment teams will be acting on it.
What about the implementation team?
There was one person from my team and two people from the Nexthink side. Because we are managing virtual appliances, we also needed help from HCL's Unix team (one person) to provide the hardware and support vCenter for the configuration. Everything else was done by my colleague and Nexthink support.
What was our ROI?
There have been over 57,000 reported issues, of which approximately 54,000 are fixed; the rest are in progress. We use Nexthink to proactively identify these defects. No other medium is used to investigate these things.
We had biweekly calls with the client. One week we would identify the problem, and the next week we would try to fix it. After two weeks, we could then deliver a report saying, "In the first week, we found these issues. After the second week, we were able to remediate this many and have only this many left."
When we implemented this tool, the user experience index score was somewhere around 5.8. Currently, we are sitting at 7.3. Due to the COVID situation, there has been a little bit of fluctuation in the scores due to network parameters, but the lowest that we have touched has been 7.1 so far.
I have seen it help improve the user experience scores. This helped clients save costs on another tool that they were trying to deploy to improve the user experience; by just deploying some scripts, we were able to identify the issues and help them remediate those. The process does not take much time and provides background on end-user performance. Therefore, they were able to save a lot, e.g., for approximately 50,000 endpoints at two dollars per machine per year, that is about $100,000 in savings.
On the agent side, it has been helping us reduce the resources required, because we are able to automate the reporting. It is saving a lot of costs for us.
What's my experience with pricing, setup cost, and licensing?
With this implementation of the Act and Engage module, I believe that there is a one-time fee for a two-month period, which is for the support team to make additional customizations.
Which other solutions did I evaluate?
Compared to other tools in the market, there are functionalities that only Nexthink delivers. I don't think other tools in the market can do everything that Nexthink can. For example, the device timeline view lets you go as deep as you can go into device performance. I didn't see that with any of the other tools available.
Since I am part of the HCL Technologies Tools team, I am being trained on other tools too. We haven't explored any other options for our project. However, other projects have other solutions in place, based on the customer's needs, budget, or preference; whichever one they want to move forward with, they choose. Other solutions available include Lakeside SysTrack and Tachyon.
Tachyon can't be compared to Nexthink. There are a lot of things that Nexthink has which Tachyon doesn't, e.g., Nexthink has customizable dashboards that help in tracking investigations and building trends.
Nexthink has the device timeline and gives you more data for investigations. It collects more data from the endpoint system than any other tool.
What other advice do I have?
Whatever you investigate, don't keep it only in Nexthink Finder; put it on a dashboard so it is always available whenever you want it. If you want to save time and reduce effort, then whenever you investigate something or create a tag/category, put it on a dashboard so you can just fetch the data and run with it.
Compliance management is currently mandatory for us. If you don't manage compliance by keeping your compliance baseline thresholds updated, you might lag behind, and this might have an impact on production. After managing the applications, we knew that one application needed to be upgraded. Once it was upgraded, bandwidth usage improved.
We are not using the Act or Engage modules yet. That is still in discussion with the client.
I have been working on it for quite some time now, so I know which thresholds the scores are based on, e.g., which parameters a particular score is derived from. So, I quickly look at those parameters and their performance only. We are doing this on a weekly basis and only have analytics. Therefore, we are missing a lot due to the absence of Act and Engage. The drill-down is something we have to do manually, which takes a lot of time.
With the help of Act and Engage, it is possible to send surveys to users. You can create some parameters and put in conditions. For example, if a user's device reaches overall usage of more than 80 percent, you can push a survey to that end user asking, "Your device has reached 80 percent of its overall system usage. Would you like a system cleanup?" There, you can put in "Yes" or "No". If the user says, "Yes," then Nexthink can implement that action. If "No," then you can put in further questions asking, "Would you like us to do it a little later?" or "Will you do it on your own?"
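The conditional flow described above can be sketched generically. To be clear, this is illustrative only: Act and Engage configure this behavior inside the product, not in code, and the threshold, states, and action names here are made up for the example.

```python
# Generic sketch of a threshold-triggered survey decision, mirroring the
# flow described in the review: below threshold do nothing; over the
# threshold ask the user; branch on the answer. All names are illustrative.
from typing import Optional

def choose_action(usage_percent: float, wants_cleanup: Optional[bool]) -> str:
    """Decide what to do for one device, given usage and the survey answer
    (None means the user has not been asked yet)."""
    if usage_percent <= 80:
        return "no_survey"        # below threshold: no action needed
    if wants_cleanup is None:
        return "send_survey"      # over 80%: push the survey first
    if wants_cleanup:
        return "run_cleanup"      # user said "Yes": trigger the remote action
    return "ask_again_later"      # user said "No": schedule a follow-up
```

The design point is that the remediation is gated on user consent rather than run silently, which is what distinguishes Engage-style workflows from plain scripted remediation.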
The analytics are as good as they can be, and they keep improving them with every version upgrade. They keep adding new fields, and if they want to retire something, they do. I don't think the analytics need any improvement, because they have improved a lot with the implementation of Act and Engage, which take it to the next level.
Our longer-term strategic vision for Nexthink is in sync with where our IT department is headed. Apart from me, the IT team uses it on a daily basis. Our vision is that we wouldn't need anybody for tech support; we could reduce the strength of the OSS folks, who wouldn't have to worry about troubleshooting on the endpoint unless it is a hardware problem. Everything on the software and system side, we should be able to fix remotely. So far, we have not been doing this because not everybody is on the system, due to the open-network problem. Once that has been fixed, anything related to software would come to us only. The OSS folks have a lot of other things to deal with, such as logistics, asset management, and allocations, but they should only deal with hardware problems, not software.
I would rate Nexthink as 10 out of 10.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner