I’m wondering if anyone is interested in this idea. My friend works in LLM research, and his recent work has inspired me.
I believe an LLM-based RAG service (Retrieval-Augmented Generation, a technique that combines LLMs with information retrieval) could be a powerful addition to Safe because it can:
- Lower the entry barrier for developers and new users.
- Make the Safe ecosystem smarter and more user-friendly.
- Provide fast and accurate answers to user queries.
The documentation for Safe is growing rapidly and will continue to do so. The same goes for FAQs.
Adding an LLM-based RAG service on top of the Safe documentation and FAQs seems like a great option: it could significantly lower the barrier to entry for developers and new Safe users, and make the Safe ecosystem more intelligent.
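Just to make the idea concrete, here is a rough sketch of what such a pipeline could look like. This is not a proposal for a specific stack: it assumes a generic open-source embedding model (sentence-transformers) and an in-memory index, the doc chunks are invented examples, and `llm_answer()` is a placeholder for whatever LLM the Safe team would actually choose.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Generic embedding model (an assumption, not Safe's actual stack).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical corpus: chunks taken from the Safe docs / FAQ pages.
doc_chunks = [
    "Safe is a smart account standard for managing digital assets ...",
    "To add an owner to a Safe, call addOwnerWithThreshold ...",
]
doc_vectors = model.encode(doc_chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k doc chunks most similar to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [doc_chunks[i] for i in top]

def llm_answer(prompt: str) -> str:
    """Placeholder: plug in a real LLM client (hosted API or local model) here."""
    raise NotImplementedError("swap in the chosen LLM")

def answer(question: str) -> str:
    """Retrieve relevant doc chunks, then ask the LLM to answer from them only."""
    context = "\n\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_answer(prompt)
```

In practice this would be backed by a proper vector database and a chunking pipeline over the docs repo, but the core loop (embed, retrieve, prompt) stays this simple, which is why I think it is a low-effort, high-value addition.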
Is anyone interested in exploring this further?
Or does the Safe team already have similar ideas or plans?
Taking it further, we could even build a RAG service for the Safe forum itself: the current forum search is weak enough that it is hard to find information on anything beyond the simplest questions.