We pull data from various sources and employ a buzzword to process it for reporting purposes, utilizing a prominent visual analytics tool.
Our experience using Spark for machine learning and big data analytics lets us consume data from virtually any source, including freely available public datasets. Spark's processing power is remarkable, which makes it our top choice for file-processing tasks.
Apache Spark's in-memory processing significantly improves our computational efficiency. Unlike Oracle, where customization is limited, we can tailor Spark to our needs: we pull data, run tests, and conserve processing power. We maintain a historical record by persisting intermediate results and retrieving data from previous iterations, so our applications run seamlessly. With Spark, we parallelize our operations and efficiently access both historical and real-time data.
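The pattern the reviewer describes, keeping intermediate results around so later iterations reuse them instead of recomputing, can be sketched in plain Python. This is only an analogy to calling `.cache()` or `.persist()` on a Spark RDD or DataFrame; the cache dictionary and function names below are hypothetical.

```python
# Minimal sketch: memoize an intermediate result so repeated iterations
# reuse it, analogous to persisting an RDD/DataFrame in Spark.
_cache = {}

def expensive_transform(key, rows):
    """Stand-in for a costly transformation; cached by key."""
    if key not in _cache:
        _cache[key] = [r * 2 for r in rows]  # placeholder for real work
    return _cache[key]

current = expensive_transform("batch-1", [1, 2, 3])  # computed once
again = expensive_transform("batch-1", [1, 2, 3])    # served from cache
```

In Spark the same idea avoids re-reading and re-shuffling data on each pass over a lineage; here the dictionary plays the role of the in-memory storage level.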
We use Apache Spark for our data analysis tasks. Our processing pipeline starts with receiving data in the RAV format, and we employ a data factory to create pipelines for processing it. This ensures the data is prepared and ready for various purposes, such as supporting applications or analysis.
In some instances we also perform data-cleansing operations and manage the database, including indexing. We have implemented automated tasks that analyze data and optimize performance, focused specifically on database operations. These efforts are independent of the Spark platform but improve overall performance.
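As a rough illustration of a cleanse-then-prepare stage like the one described, here is a minimal plain-Python sketch; the field names and cleansing rules are assumptions for illustration, not details from the reviewer's system.

```python
# Hypothetical two-stage pipeline: drop incomplete rows and strip strings,
# then normalize a numeric field for downstream use.

def cleanse(records):
    """Drop rows with missing values and strip whitespace from strings."""
    return [
        {k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
        for r in records
        if all(v is not None for v in r.values())
    ]

def prepare(records):
    """Convert the (assumed) 'amount' field to a float."""
    return [{**r, "amount": float(r["amount"])} for r in records]

raw = [
    {"id": 1, "amount": " 10.5 "},
    {"id": 2, "amount": None},  # dropped during cleansing
]
ready = prepare(cleanse(raw))
```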
It would be beneficial to enhance Spark's capabilities by incorporating models that utilize features not traditionally present in its framework.
I've been engaged with Apache Spark for about a year now, but my company has been utilizing it for over a decade.
It offers a high level of stability. I would rate it nine out of ten.
It serves as a data node, which makes it highly scalable. It caters to a user base of around five thousand.
The initial setup isn't complicated, though experiences vary from person to person. For me it wasn't particularly complex; it was straightforward.
Once the solution is prepared, we deploy it onto both the staging server and the production server. Previously, we had a dedicated individual responsible for deploying the solution across multiple machines. We manage three environments: development, staging, and production. The deployment process varies, sometimes utilizing a tenant model and other times employing blue-green deployment, depending on the situation. This ensures the seamless setup of servers and facilitates smooth operations.
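The blue-green approach mentioned above can be sketched in a few lines of Python: two identical environments exist, traffic points at one, a release goes to the idle one, and then the pointer flips. The environment names, versions, and router dictionary here are assumptions, not the reviewer's actual setup.

```python
# Hypothetical blue-green switch: deploy to the idle environment,
# then cut traffic over to it, keeping the old one ready for rollback.
environments = {"blue": "v1.0", "green": "v1.0"}
live = "blue"

def deploy(version):
    """Stage a release on the idle environment, then make it live."""
    global live
    idle = "green" if live == "blue" else "blue"
    environments[idle] = version  # stage the new release
    live = idle                   # the cutover is a single pointer flip
    return live

deploy("v1.1")
# live traffic now hits "green" on v1.1; "blue" keeps v1.0 for rollback
```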
Given our extensive experience with it and its ability to meet all our requirements over time, I highly recommend it. Overall, I would rate it nine out of ten.
I use the solution in my company for one of the cases where we have to deal with areas like topology engines and big topology chains.
Overall, my company likes the product since it is a good tool.
Finding good Apache Spark developers can be challenging; candidates with the right skill set are scarce in the market. That is an area where the product could improve.
At times the tool goes down during deployment, which makes it look less robust, and users occasionally need to intervene manually to resolve the issues. I suspect large datasets are a cause of these problems during the deployment phase, so this is an area where improvement is required.
I have been using Apache Spark for seven to eight years.
Stability-wise, I rate the solution an eight and a half out of ten.
It is a very scalable solution.
In our company, there are users of Apache Spark, and then there are users of the applications that were developed with it.
Currently, my company does not plan to increase the use of the product.
The product's deployment phase is easy.
The product's deployment phase involved the CI/CD pipeline and Jenkins pipeline.
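The CI/CD flow described can be sketched generically: each stage must succeed before the next runs, mirroring the build, test, and deploy stages of a Jenkins pipeline. The stage names and the runner function below are illustrative assumptions, not the company's actual pipeline.

```python
# Hypothetical CI/CD-style stage runner: run stages in order and stop
# at the first failure, like sequential stages in a Jenkins pipeline.

def run_pipeline(stages):
    """Run (name, step) pairs; return completed names and the failed stage."""
    completed = []
    for name, step in stages:
        if not step():  # a step returning False halts the pipeline
            return completed, name
        completed.append(name)
    return completed, None

stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
]
done, failed_at = run_pipeline(stages)
```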
Earlier, the solution was deployed on an on-premises model. Later on, the solution was deployed on a cloud model.
Initially, the product's deployment took four to five hours or more. Over time, the deployment process became easier.
Around 50 to 100 people in my company are involved in the product's deployment process.
Considering the product version used in my company, the tool is not costly, since it is available for free.
The tool offers functionality that helps my company deal with data processing in projects on a near real-time basis.
The impact of in-memory processing capabilities on the improvement of computational efficiency is one of the reasons why my company chose Apache Spark.
At the moment, my company plans to explore data analysis with Apache Spark; so far we have used the product primarily for data processing rather than data analysis.
If you adopt the product alongside the capabilities of Azure DevOps and use the tool's dashboard, you will find the solution to be good. The tool has a built-in UI and other useful capabilities.
I feel the product is fine and easy to use for those who plan to adopt it. I recommend the tool to others based on the performance and scalability features it offers.
I managed data partitioning and distribution with Apache Spark once in my company.
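Partitioning of this kind usually means assigning each record to one of N partitions by hashing its key, which is the idea behind Spark's HashPartitioner. Here is a plain-Python sketch; the record shape and key name are assumptions.

```python
# Hypothetical hash partitioning: records with the same key always land
# in the same partition, so work can be distributed deterministically.

def partition(records, key, num_partitions):
    """Bucket records into num_partitions lists by hash of their key."""
    parts = [[] for _ in range(num_partitions)]
    for r in records:
        parts[hash(r[key]) % num_partitions].append(r)
    return parts

rows = [{"user": u} for u in ("a", "b", "c", "a")]
parts = partition(rows, "user", 4)
# both "a" records end up in the same partition
```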
The main benefit of the product is that its in-memory processing and performance made it easy to get data processing done in the quickest possible way.
I rate the solution an eight and a half to nine out of ten.