The primary use case is to collect data from different source systems. This includes different types of files, such as text files, binary files, and CSV files, to name a few. It can also ingest data from APIs.
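As a rough illustration of what ingesting those three file types involves, here is a minimal Python sketch. The payloads and the binary layout (two little-endian 32-bit integers) are assumptions for the example, not part of any real source system.

```python
import csv
import io
import struct

def parse_text(data: bytes) -> list:
    """Split a plain-text payload into lines."""
    return data.decode("utf-8").splitlines()

def parse_binary(data: bytes) -> tuple:
    """Unpack a binary payload; here we assume two little-endian int32s."""
    return struct.unpack("<ii", data)

def parse_csv(data: bytes) -> list:
    """Read a CSV payload into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(data.decode("utf-8"))))

# Simulated payloads standing in for three different source systems.
text_rows = parse_text(b"line one\nline two")
bin_values = parse_binary(struct.pack("<ii", 7, 42))
csv_rows = parse_csv(b"id,name\n1,alpha\n2,beta")
```

In a flow-based tool the routing to the right parser would be handled by the pipeline itself; this only shows the per-format decoding step.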
It works with large amounts of data. It is oriented toward endpoint solutions and toward both high- and low-frequency delivery of data in small packets, for example files. It also works well when integrated with Spark; the two are complementary in some use cases. Sometimes we work only with NiFi, sometimes only with Spark, and other times with the two integrated.
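The idea of moving data in small packets can be sketched in a few lines of Python. This is only an analogy for how a flow-based tool hands data downstream batch by batch; the packet size and the record stream are made up for the example.

```python
from itertools import islice

def to_packets(records, packet_size):
    """Group an incoming record stream into small fixed-size packets,
    yielding each packet as soon as it is full (last one may be short)."""
    it = iter(records)
    while True:
        packet = list(islice(it, packet_size))
        if not packet:
            break
        yield packet

stream = range(10)  # stand-in for events arriving from a source system
packets = list(to_packets(stream, 3))
# packets -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Whether packets arrive at high or low frequency, each one is processed independently, which is what makes this pattern complementary to batch-oriented engines like Spark.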