I am currently working on Amazon Bedrock Agent Core. We have created a data pipeline in which Amazon Bedrock Agent Core is used primarily for transformation. We use the agent for custom rules, transformations, and data quality checks. We are trying to create an agentic AI data pattern in which the agent makes decisions. We are using the AWS SDK. We have used Amazon Bedrock to create custom rules by passing prompts to the agent and to an LLM. The entire project is deployed on AWS Cloud, and we are currently running a POC in an account provided by AWS.
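The custom-rule data quality checks described above can be sketched as declarative predicates applied to each record. This is a minimal illustration, not the reviewer's actual pipeline; the rule names and record fields are made up, and in practice the rules would come from prompts handled by the agent.

```python
# Hypothetical sketch of rule-based data quality checks; the rules and
# record fields are illustrative stand-ins, not the reviewer's pipeline.

def check_record(record, rules):
    """Return the names of the rules that the record violates."""
    violations = []
    for name, predicate in rules.items():
        if not predicate(record):
            violations.append(name)
    return violations

rules = {
    "amount_positive": lambda r: r.get("amount", 0) > 0,
    "currency_present": lambda r: bool(r.get("currency")),
}

record = {"amount": -5, "currency": "USD"}
print(check_record(record, rules))  # ['amount_positive']
```

In an agentic setup, the agent would decide which rules to apply and what to do with a failing record, rather than hard-coding the table of predicates.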
We have mainly been using Amazon Bedrock for LLM invocation. We explored it for RAG pipelines as well, but we did not explore agentic AI extensively; when we started, that side was not very developed. There may have been progress since, but we had already moved ahead with our own solution, so it was not really needed once the implementation was underway. Our main focus has been invoking the LLM models and tracking logging and cost. We mainly use Amazon Bedrock for analysis and report generation. We do some processing on the data before providing it for the use case in order to generate specific content; the data is not provided in raw format.
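The logging and cost tracking around LLM invocation can be sketched as a thin wrapper that records token usage per call. The price figures and the `fake_invoke` stand-in are assumptions for illustration only, not Bedrock's actual pricing or API.

```python
# Illustrative per-call cost tracking around an LLM invocation. The rates
# and the invoke stand-in are assumed for the sketch, not real pricing.

PRICE_PER_1K_INPUT = 0.003   # assumed USD per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed USD per 1K output tokens

call_log = []

def tracked_invoke(invoke_fn, prompt):
    """Call the model and record token usage and estimated cost."""
    response = invoke_fn(prompt)
    cost = (response["input_tokens"] / 1000 * PRICE_PER_1K_INPUT
            + response["output_tokens"] / 1000 * PRICE_PER_1K_OUTPUT)
    call_log.append({
        "prompt": prompt,
        "input_tokens": response["input_tokens"],
        "output_tokens": response["output_tokens"],
        "cost_usd": round(cost, 6),
    })
    return response["text"]

# Stand-in for a real Bedrock call, so the sketch runs offline.
def fake_invoke(prompt):
    return {"text": "ok", "input_tokens": 100, "output_tokens": 50}

tracked_invoke(fake_invoke, "Summarize the report.")
print(call_log[0]["cost_usd"])
```

Against a real deployment, the same log would typically be emitted to CloudWatch or a billing dashboard rather than kept in memory.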
Amazon Bedrock is one of the services that is similar to ChatGPT. From my perspective, I was creating a version of ChatGPT that would answer my customers' questions. The significant difference between them is that ChatGPT answers all questions, whereas the Bedrock instance I created was specifically designed to answer questions related to a particular business. My website is a wine-selling application, and it was configured to answer only wine-related questions, such as health benefits, consumption guidelines, and product recommendations. That customization is important for my use cases because, unlike other GPTs or APIs that answer questions on any topic, Amazon Bedrock answers questions based on the knowledge base we attach to it. I usually provide a PDF file containing the knowledge. After that, the vectors are created, and Amazon Bedrock is able to answer specific questions. If it does not have knowledge relevant to a question, it returns a fallback intent. For instance, I added a restriction rule so that if someone asks about the disadvantages of a particular thing, it should stop that question and fall back. When someone asked how wine could be used for entertainment purposes, that question was blocked by the system.
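The fallback and restriction behavior described above can be sketched as a filter in front of the knowledge-base lookup. The keyword lists here are invented for illustration; the reviewer's actual setup uses Bedrock's knowledge base and guardrail configuration, not hand-written Python.

```python
# Minimal sketch of the fallback/restriction behavior; the term lists are
# assumptions for illustration, not the actual Bedrock guardrail config.

BLOCKED_TERMS = {"entertainment", "intoxication"}
ALLOWED_TOPICS = {"wine", "health", "pairing", "recommendation"}

def answer_or_fallback(question, retrieve_fn):
    """Block restricted questions, fall back on off-topic ones,
    otherwise answer from the knowledge base via retrieve_fn."""
    q = question.lower()
    if any(term in q for term in BLOCKED_TERMS):
        return "I'm sorry, I can't help with that question."
    if not any(topic in q for topic in ALLOWED_TOPICS):
        return "I can only answer wine-related questions."
    return retrieve_fn(question)

print(answer_or_fallback("How can wine be used for entertainment?",
                         lambda q: "..."))
```

In Bedrock itself, the blocking step is configured declaratively rather than coded, but the control flow is the same: restricted topics are stopped before retrieval, and off-topic questions get the fallback intent.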
The principal use case for Amazon Bedrock that we are working on is for a logistics company that receives emails containing incoming invoices. Our architecture sends these invoices to queues and identifies the partners involved in each invoice; for each queue, a specific agent processes the information and analyzes what the sender is looking for in the invoice. For example, if the email has to go to the invoice team, we read it and, based on the request in the email, extract the information from the email body and then the details of the invoice. We then determine what to deliver to the target system, or how to create a new ticket so the invoice team can pick up the invoice and register it in the legacy system. It involves this integration and using specific agents to read, understand, and process the incoming invoices.
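The routing step in that flow, classifying an incoming email to a queue so the right agent handles it, can be sketched as below. The queue names and the keyword classifier are illustrative stand-ins; the reviewer's system uses LLM-backed agents, not keyword matching, to understand the email.

```python
# Hypothetical sketch of routing incoming emails to partner-specific
# queues. Queue names and keywords are assumptions for illustration.

QUEUES = {
    "invoice_team": ["invoice", "payment due"],
    "support_team": ["complaint", "issue"],
}

def route_email(body):
    """Return the queue an email body should be sent to."""
    text = body.lower()
    for queue, keywords in QUEUES.items():
        if any(keyword in text for keyword in keywords):
            return queue
    return "manual_review"

print(route_email("Please find attached invoice #123"))  # invoice_team
```

In the architecture described, each queue's agent then performs the deeper step of extracting the invoice details and deciding whether to push them to a target system or open a ticket.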
We are using Amazon Bedrock for generative AI-related tasks. We utilize the Anthropic Claude LLM to obtain appropriate answers to user questions.
We have a fairly extensive use case library that includes everything from document and report generation on the simpler side to full project management on either the hardware or software side. We build use cases to enhance processes within standard business practices, like automating call centers, customer experience, or customer success functions. We work on financial reporting within that side as well. The AI platform and supportive machine learning models are designed to rapidly prototype any kind of enterprise business use case. Once we prototype a use case, the platform becomes increasingly smarter, faster, and stronger, making that use case more robust and helping it deliver greater cost savings and efficiency. We cannot solve every problem; however, we have a fairly extensible capability. The advantage of Bedrock is not that it is an amazing enabler of AI platforms on its own; rather, we use it to deploy application services and microservices within the Bedrock ecosystem and to leverage prequalified foundation models like Claude and others.
I work with an AWS partner, and we offer cloud managed services to our clients as well as reselling services. I've worked with Amazon Bedrock to create solutions, including an image generation solution and a chatbot for an ERP application for schools.
AWS cloud AI & data scientist at a tech services company with 51-200 employees
Nov 25, 2024
I used Amazon Bedrock for an application for a flight company where users could ask questions about flights, including dates and times. I utilized Amazon Bedrock to generate SQL queries that retrieve data from the SQL database to answer users' questions.
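When model-generated SQL is run against a live database, as in the flight use case above, it is common to add a guard so only read-only statements get executed. The check below is a simplified illustration; the reviewer's application and schema are not shown, and the actual SQL would come from the Bedrock call this sketch omits.

```python
# Sketch of a safety check on model-generated SQL: accept only a single
# read-only SELECT statement. A simplified illustration, not the
# reviewer's actual validation logic.

import re

def is_safe_select(sql):
    """Accept one read-only SELECT statement; reject anything else."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # more than one statement
        return False
    if not re.match(r"(?i)^select\b", stripped):
        return False
    return not re.search(r"(?i)\b(insert|update|delete|drop|alter)\b",
                         stripped)

print(is_safe_select("SELECT departure FROM flights WHERE id = 7"))  # True
print(is_safe_select("DROP TABLE flights"))                          # False
```

A real deployment would add further protections (read-only database credentials, parameterized execution), since keyword filtering alone is not a complete defense.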
The primary use case for Bedrock was vector embeddings: building a data store for my RAG application. Bedrock was used on a project where vectorized data was needed for one of the products.
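The retrieval step in such a RAG store can be sketched as ranking documents by cosine similarity between embeddings. The tiny hand-made vectors below stand in for real embeddings (for example, from a Bedrock embedding model); real vectors have hundreds or thousands of dimensions.

```python
# Minimal sketch of similarity-based retrieval over a vector store.
# The two-dimensional vectors are toy stand-ins for real embeddings.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_match(query_vec, store):
    """Return the document id whose embedding is closest to the query."""
    return max(store, key=lambda doc_id: cosine(query_vec, store[doc_id]))

store = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0]}
print(top_match([0.9, 0.1], store))  # doc_a
```

In production this scan would be replaced by an indexed vector database; the similarity ranking itself is the same idea.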
Amazon Bedrock offers comprehensive model customization and integration with AWS services, making AI development more flexible for users. It streamlines content generation and model fine-tuning with a focus on security and cost efficiency. Amazon Bedrock is engineered to provide a seamless AI integration experience with a strong emphasis on security and user-friendliness. It simplifies AI development by offering foundational models and managed scaling, enhancing both trust and operational...
I am using Amazon Bedrock mainly to do some analysis of customer support calls.
I have used Amazon Bedrock to create knowledge bases for a machine learning project. This is my primary use case.