If you want to configure QoS policies right now, you need to be somewhat technical as well; if you are not familiar with these devices and technologies, it is hard to follow. We start by assessing the existing infrastructure. If the client wants to improve their existing storage, and not just NetApp storage, the plan is based on what is currently available and in use. The existing IOPS may not keep up with the current users, depending on how many are concurrently connected to the storage. So we assess the existing infrastructure, and if the client is ready to buy new hardware or a new solution, we add solid-state drives to improve performance, increase the number of drives, and improve overall utilization. An end user usually cannot tell exactly how much throughput and how many IOPS they are consuming, so we measure it. If there is a bottleneck in the network as well, we find exactly where it is during the assessment, and we make decisions based on that assessment. Based on the assessment, we make recommendations for improvement and provide guidelines for exactly what needs to be done.

In terms of storage, once you have implemented the storage, provisioned the volumes, and configured QoS policies, the client and users no longer need to keep asking for extensions and so on. If you configure quotas, or if some users repeatedly request more space for a particular folder, we can bring automation in there. For example, until automatic resize is working properly, automation can act when a volume reaches a threshold such as 90 percent, or when an aggregate reaches 90 percent. That is the kind of work the automation process can handle, and that is the fundamental automation in our NetApp storage right now.
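The threshold-driven resize automation described above can be sketched as follows. The 90 percent threshold comes from the text; the 20 percent growth step, the helper names, and the sample volume figures are my own illustrative assumptions, not NetApp defaults.

```python
# Minimal sketch of threshold-based volume resize automation.
# Assumptions: 20% growth step and the sample volumes are illustrative only.

def needs_resize(used_bytes: int, total_bytes: int, threshold_pct: float = 90.0) -> bool:
    """Return True when a volume (or aggregate) has crossed the usage threshold."""
    return (used_bytes / total_bytes) * 100 >= threshold_pct

def grown_size(total_bytes: int, growth_pct: float = 20.0) -> int:
    """Compute the new size after growing the volume by growth_pct percent."""
    return int(total_bytes * (1 + growth_pct / 100))

# Illustrative poll over volumes as (name, used_bytes, total_bytes) tuples.
volumes = [
    ("vol_home", 92 * 1024**3, 100 * 1024**3),  # 92% used -> resize
    ("vol_apps", 40 * 1024**3, 100 * 1024**3),  # 40% used -> leave alone
]

for name, used, total in volumes:
    if needs_resize(used, total):
        print(f"{name}: at {used / total:.0%}, growing to {grown_size(total)} bytes")
```

In a real deployment this check would run against usage numbers pulled from the cluster's management API, and the resize call (or an alert) would replace the `print`.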
One of the things happening now is that new requirements come from the end user, such as creating new volumes or new LUNs and presenting them to a specific physical host, virtual host, Windows host, or Linux host. We can capture this information in a script and just fill in their format. NetApp also provides monitoring tools. Suppose you have multiple clusters in your organization, with a large number of controllers, across many locations. There is no need to log in to each cluster and storage system separately; there is a centralized monitoring and management tool where you simply log in once, with single sign-on. It is as simple as a single click. If you want to manage jobs and run a script, you do not have to log in to each and every cluster and storage system; you just need to log in once. With a single click, you can access the information and manage the clusters over the network through the management tools NetApp has developed, which makes access much easier. Every cluster and storage system must be set up manually the first time, during the implementation and deployment phases, but once deployment is complete, you do not have to do anything manually; everything runs through this automation and the centralized, single point of control. Organizations need that centralized control. Not every organization is small; many operate across multiple locations and use disaster-recovery (DRaaS) solutions. If your primary data center goes down, whether due to a hardware failure, a natural disaster, or human error, you do not have to worry in these cases. Simply click the activate button on the DR site, and your data will be served from the DR storage. There is nothing to be concerned about; it is simple, and you can migrate your existing data right away.
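The "new volume for a host" request above is the kind of thing that gets scripted. Below is a sketch that assembles a request body in the shape used by the ONTAP REST API's `POST /api/storage/volumes` endpoint as I understand it; the helper name and the sample SVM, aggregate, and volume names are assumptions to verify against your ONTAP version's API reference.

```python
# Sketch: build the JSON body for creating a volume via the ONTAP REST API.
# Field names follow my reading of POST /api/storage/volumes; verify against
# the API reference for your ONTAP release before using.

def build_volume_payload(name: str, svm: str, aggregate: str, size_bytes: int) -> dict:
    """Assemble a volume-creation request for a given SVM and aggregate."""
    return {
        "name": name,
        "svm": {"name": svm},                  # the SVM that will own the volume
        "aggregates": [{"name": aggregate}],   # where the volume is placed
        "size": size_bytes,
    }

# Hypothetical example: a 500 GiB volume for a Linux host.
payload = build_volume_payload("vol_linux01", "svm_prod", "aggr1", 500 * 1024**3)
# This body would be POSTed (with cluster credentials) to
# https://<cluster-mgmt-ip>/api/storage/volumes using any HTTPS client.
print(payload["name"], payload["size"])
```

Keeping the payload builder separate from the HTTP call makes the script easy to test without touching a live cluster.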
As a result, the feature that is now being used everywhere, and that everyone is looking for, is the cloud. For example, ONTAP System Manager now includes features that integrate with the cloud. If you open the NetApp ONTAP System Manager console, it brings in cloud volumes, and you can simply enter your S3 bucket location and S3 bucket credentials and tier your data to the cloud. You can either keep a second copy on the cloud, or use Cloud Volumes ONTAP. Once your data is tied to ONTAP in the cloud, on AWS, Azure, or Google, it can simply maintain a duplicate copy on the cloud provider as well.
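The "map your S3 bucket location and credentials" step above can be sketched as a small config builder. The field names here are illustrative assumptions, not an exact NetApp schema; check the cloud-target section of the ONTAP REST API reference for the real one.

```python
# Sketch: collect the S3 endpoint, bucket, and credential fields needed to
# register a cloud tier. Field names and sample values are assumptions.

def build_s3_target(name: str, server: str, bucket: str, access_key: str) -> dict:
    """Bundle the S3 details ONTAP needs to tier data to a bucket."""
    return {
        "name": name,
        "server": server,          # S3 endpoint, e.g. s3.amazonaws.com
        "container": bucket,       # the bucket that receives tiered data
        "access_key": access_key,  # the secret key would be supplied separately
    }

# Hypothetical example target for tiering to AWS S3.
target = build_s3_target("cloud-tier-1", "s3.amazonaws.com",
                         "ontap-tier-bucket", "AKIAEXAMPLE")
print(target["container"])
```

Once a target like this is registered and attached, the second copy in the cloud described above follows from the tiering policy rather than from any manual copying.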