📚 Local PDF-Integrated Chat Bot: Secure Conversations and Document Assistance with LLM-Powered Privacy
Check out another LLM project: 🔐 LLQuery: Your Conversational SQL Bridge
This chatbot project pairs Chainlit's user-friendly interface with locally hosted language models 🌐. Because everything runs on your own machine, it is well suited to sensitive data, for organizations and individuals alike. From deciphering intricate user guides to extracting key insights from lengthy PDF reports, it streamlines access to the information locked inside your documents.
The result is an engaging conversational interface to your documents that keeps you in full control of your data.
Make sure the following is set in the `config.toml` file:

```toml
database = "local"
```

Clone the project repository using Git.
Download the necessary models from HuggingFace by visiting the following link: Download Llama Model. Once downloaded, move the model files into the "models" directory:

- `llama-2-7b-chat.ggmlv3.q2_K.bin`
- `mistral-7b-openorca.Q4_K_M.gguf`
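As a rough illustration of how such quantized model files can be consumed (the project's actual loading code may differ), here is a minimal sketch using LangChain's CTransformers wrapper; the path, `model_type`, and generation settings are assumptions:

```python
# A minimal sketch (assumed settings, not the project's exact code) for loading
# a quantized model file from the "models" directory with ctransformers.
from langchain.llms import CTransformers

llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q2_K.bin",  # downloaded model file
    model_type="llama",  # use "mistral" for mistral-7b-openorca.Q4_K_M.gguf
    config={"max_new_tokens": 256, "temperature": 0.5},
)
print(llm("Explain what a vector database is in one sentence."))
```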
Install the required Python packages by running the following command; this ensures all the necessary dependencies are in place:

```bash
pip install -r requirements.txt
```
Place your PDF document in the "data" directory. You can choose the document loader that best matches your requirements from the available options; refer to Document Loaders for more information. Note that the current implementation is designed for PDF documents.
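For instance, a PDF in the "data" directory could be loaded and chunked along these lines; this is a generic LangChain sketch, and the file name, chunk size, and overlap are illustrative assumptions rather than the project's actual values:

```python
# A hedged sketch of the document-loading step; the file name, chunk size,
# and overlap below are illustrative assumptions.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = PyPDFLoader("data/example.pdf")  # hypothetical PDF placed in "data"
pages = loader.load()                     # one Document per PDF page

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(pages)
print(f"Split {len(pages)} pages into {len(chunks)} chunks")
```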
Launch the application using the following command (the -w flag reloads the app when source files change):

```bash
chainlit run main.py -w
```
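If you are curious what a minimal Chainlit entry point looks like, the skeleton below is a generic illustration, not this project's main.py; note that recent Chainlit versions pass a `cl.Message` object to the on_message callback, while older ones passed a plain string:

```python
# A minimal, self-contained Chainlit app skeleton (illustrative only).
import chainlit as cl

@cl.on_chat_start
async def start():
    # Runs once when a new chat session begins.
    await cl.Message(content="Hi! Ask me anything about your PDF.").send()

@cl.on_message
async def respond(message: cl.Message):
    # The real app would route the question through a retrieval chain;
    # this skeleton just echoes it back.
    await cl.Message(content=f"You asked: {message.content}").send()
```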
For the initial setup, it's essential to build the vector database. Click the "Rebuild vector" button to initiate this process.
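Under the hood, building the vector database amounts to embedding the document chunks and persisting a searchable index. The sketch below shows one plausible implementation with sentence-transformers embeddings and a FAISS index; the embedding model, file name, chunk sizes, and index path are all illustrative assumptions:

```python
# One plausible "Rebuild vector" implementation (assumed model and paths):
# embed the PDF chunks and persist a FAISS index for later retrieval.
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS  # requires the faiss-cpu package

pages = PyPDFLoader("data/example.pdf").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_documents(pages)

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
FAISS.from_documents(chunks, embeddings).save_local("vectorstore")  # assumed path
```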
With the setup complete, you can ask questions about your PDF document and receive informative responses.
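Conceptually, answering a question means retrieving the chunks most relevant to the query and letting the local LLM synthesize an answer from them. Here is a hedged end-to-end sketch using LangChain's RetrievalQA chain; the model names, index path, and retriever settings are assumptions, not necessarily the project's choices:

```python
# A hedged end-to-end sketch: retrieve relevant chunks from the saved index
# and let the local model answer. Paths and model names are assumptions.
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import CTransformers
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vectorstore = FAISS.load_local("vectorstore", embeddings)  # index built above

llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q2_K.bin", model_type="llama"
)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # stuff the retrieved chunks directly into the prompt
    retriever=vectorstore.as_retriever(search_kwargs={"k": 2}),
)
print(qa.run("What is the main topic of the document?"))
```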
To modify the welcome screen, edit the `chainlit.md` file at the root of your project. If you do not want a welcome screen, just leave this file empty.
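For example, a minimal welcome screen could look like this (illustrative text only, not the file that ships with the project):

```markdown
# 📚 Local PDF Chat Bot

Ask questions about the PDF in the `data` directory.
Everything runs locally, so your documents never leave your machine.
```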
The individual packages used by the project can also be installed or upgraded directly:

```bash
pip install langchain[all]
pip install --upgrade langchain
pip install --upgrade python-box
pip install pypdf
pip install -U sentence-transformers
python -m pip install --upgrade pip
pip install ctransformers
pip install chainlit
```
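After installing, a quick sanity check (an optional snippet, not part of the project) confirms that the key packages import cleanly:

```python
# Optional sanity check: verify the core packages installed above are importable.
import chainlit
import ctransformers
import langchain
import pypdf
import sentence_transformers

print("langchain", langchain.__version__)
```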
Inference speed depends on the number of CPU cores and the available RAM. A multi-core CPU (laptop or PC) with at least 16 GB of RAM is recommended. You can deploy the app on a server for better performance.
Tested on: