Hello Everyone!
So for this task, I chose:
Walkthrough 3: “Chatting with your own documents”
Reason: To get a deeper understanding of how Retrieval Augmented Generation (RAG) is used, and of how the issue of context mismatch can be handled.
How well I think this approach addresses the concerns about context awareness: This question is well answered in the "RAG" short video on the IBM page: if the RAG system doesn't have the specified data, or the LLM doesn't understand the question, the result the LLM provides will be confusing for the end user.
Opportunities: This is the first time I have tried anything like this, and I need to research it more before I can offer deeper insights.
Insights about the whole idea of RAG: I didn't complete the whole task, since sharing my Google Drive access with a third-party tool again raised data privacy concerns for me.
Also, when I tried to perform the same steps on my local machine, I got a 'syntax error' on the very first step, so I also experimented with the pre-written code to some extent.
I would like to research this topic further and will rework this task.
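To make the context-mismatch point above more concrete, here is a minimal sketch of the retrieval step in RAG. This is a hypothetical example, not the walkthrough's actual code: it uses plain keyword overlap instead of real embeddings, and the function names (`chunk`, `retrieve`, `build_prompt`) are my own. The idea is that if `retrieve` finds no relevant chunk, the LLM is left answering from weak context, which is exactly where confusing results come from.

```python
# Minimal RAG retrieval sketch (hypothetical, keyword-overlap based).

def chunk(text, size=40):
    """Split a document into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, passage):
    """Score a passage by how many question words it shares."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def retrieve(question, documents, top_k=1):
    """Return the top_k chunks most relevant to the question."""
    chunks = [c for doc in documents for c in chunk(doc)]
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]

def build_prompt(question, documents):
    """Prepend the retrieved context so the LLM answers from our documents."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In a real setup the keyword score would be replaced by embedding similarity from a vector store, but the overall flow (chunk, retrieve, stuff into the prompt) is the same.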
That looks so interesting! Did you need to make many adjustments to the LLM so that it was able to understand your provided screenshot and create this really cool-looking test case?