🌟 DataTonic: A Data-Capable AGI-style Agent Builder of Agents that creates swarms, runs commands, and securely processes and creates datasets, databases, visualizations, and analyses.
You can use image or audio input in your native language: DataTonic is AGI for all.
DataTonic produces finished business intelligence assets based on autonomous multimedia data processing.
Based on those, it can produce:
DataTonic provides junior executives with an extremely effective solution for basic, time-consuming data processing, document creation, and business intelligence tasks.
Now anyone can :
Do not wait for accounting, legal, or business intelligence reporting with uncertain quality and long review cycles. DataTonic accelerates the slowest parts of analysis: data processing and project planning execution.
DataTonic is unique for many reasons:
Yes, DataTonic accepts both audio and image input.
Yes. DataTonic will look for the data it needs, but you can also add your .db files or any other types of files to DataTonic.
Yes. DataTonic produces rich, full-length content.
Yes. DataTonic is tailored more toward business intelligence, but it can also produce functioning applications inside generated repositories.
Yes, DataTonic can automate many junior positions, and more enterprise connectors are coming soon!
You can use DataTonic however you want; here's how we're using it:
DataTonic is the first multi-nested agent-builder-of-agents!
The DataTonic team started by evaluating multiple models against the new google/gemini models, testing all functions. Based on our evaluation results, we optimized the default prompts and created new prompts and prompt pipeline configurations.
Learn more about using TruLens and our scientific method in the evaluation folder. We share our results in the evaluation/results folder.
You can also replicate our evaluation by following the instructions in the Easy Deploy section below.
DataTonic is the first application to use a doubly nested, multi-environment, multi-agent builder-of-agents configuration. Here's how it works!
DataTonic uses a novel combination of three orchestration libraries.
Please try the methods below to use and deploy DataTonic.
The easiest way to use DataTonic is to deploy it on GitHub Spaces and use the notebooks in the evaluation/results folder.
Click here for easy_deploy [COMING SOON!]
In the meantime, please follow the instructions below:
Please follow the instructions in this readme exactly.
Please use a command line with administrator privileges for the steps below.
pip install google-cloud-aiplatform
Visit https://console.cloud.google.com/vertex-ai
Click 'Create new project', create your project, and add a payment method.
Click 'Enable all recommended APIs'.
Click 'Multimodal' on the left, then 'My Prompts' at the top:
Click 'Create Prompt', then 'GET CODE' at the top right of the next screen:
Then click 'curl' at the top right to find your endpoint, project ID, and other information, e.g.:
```shell
cat << EOF > request.json
{
  "contents": [
    {
      "role": "user",
      "parts": []
    }
  ],
  "generation_config": {
    "maxOutputTokens": 2048,
    "temperature": 0.4,
    "topP": 1,
    "topK": 32
  }
}
EOF

API_ENDPOINT="us-central1-aiplatform.googleapis.com"
PROJECT_ID="focused-album-408018"
MODEL_ID="gemini-pro-vision"
LOCATION_ID="us-central1"

curl \
  -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${API_ENDPOINT}/v1/projects/${PROJECT_ID}/locations/${LOCATION_ID}/publishers/google/models/${MODEL_ID}:streamGenerateContent" -d '@request.json'
```
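For reference, the same payload and endpoint URL can be assembled in Python. This is an illustrative sketch, not part of DataTonic; the project, model, and endpoint values are the placeholders from the curl example above and should be replaced with your own.

```python
import json

# Generation parameters mirroring request.json above
payload = {
    "contents": [
        {"role": "user", "parts": []}
    ],
    "generation_config": {
        "maxOutputTokens": 2048,
        "temperature": 0.4,
        "topP": 1,
        "topK": 32,
    },
}

# Endpoint pieces from the curl example (replace with your own values)
api_endpoint = "us-central1-aiplatform.googleapis.com"
project_id = "focused-album-408018"
model_id = "gemini-pro-vision"
location_id = "us-central1"

url = (
    f"https://{api_endpoint}/v1/projects/{project_id}"
    f"/locations/{location_id}/publishers/google/models/{model_id}:streamGenerateContent"
)

# Write the same request.json that the heredoc above produces
with open("request.json", "w") as f:
    json.dump(payload, f, indent=2)

print(url)
```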
Run the following command to obtain your access token after following the instructions above:

```shell
gcloud auth print-access-token
```

IMPORTANT: this token expires in under 30 minutes, so please refresh it regularly.
https://platform.openai.com/api-keys
https://oai.azure.com/portal
Deploy your models, go to the Playground, and click 'View code'. Make a note of your endpoint, API key, and model name to use later.
Visit this URL, then download and install the required packages:
https://www.sqlite.org/download.html
For Windows:
You need the SQLite source files, including the sqlite3.h header file, for the pysqlite3 installation.
Go to the SQLite Download Page.
Download the sqlite-amalgamation-*.zip file under the "Source Code" section.
Extract the contents of this zip file to a known directory (e.g., C:\sqlite).
Set Environment Variables:
Ensure that the directory where you extracted the SQLite source files is visible to the build, e.g. by setting the SQLITE_INC environment variable:

```shell
setx SQLITE_INC "C:\sqlite"
```
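To sanity-check the variable, a small stdlib-only snippet (an illustrative helper, not part of DataTonic) can confirm that sqlite3.h is reachable from SQLITE_INC:

```python
import os

def sqlite_headers_present(env_var: str = "SQLITE_INC") -> bool:
    """Return True if the directory named by env_var contains sqlite3.h."""
    inc_dir = os.environ.get(env_var)
    if not inc_dir:
        return False
    return os.path.isfile(os.path.join(inc_dir, "sqlite3.h"))

print(sqlite_headers_present())
```

If this prints `False`, re-check the extraction directory and the `setx` command above (note that `setx` only affects newly opened terminals).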
Then proceed with the rest of the setup below.
TaskWeaver requires Python >= 3.10 and can be installed by running the following commands:

```shell
# [optional] create and activate a conda environment
# conda create -n taskweaver python=3.10
# conda activate taskweaver

# clone the repository
git clone https://github.com/microsoft/TaskWeaver.git
cd TaskWeaver

# install the requirements
pip install -r requirements.txt
```
Command Prompt: download and install WSL:

```shell
wsl --install
```

then run:

```shell
wsl
```

then install SQLite:
```shell
sudo apt-get update
sudo apt-get install libsqlite3-dev   # on RPM-based distros: sqlite-devel
sudo pip install pysqlite3
```
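After installation, a quick stdlib check confirms that Python's SQLite bindings work. This sketch uses the built-in `sqlite3` module; `pysqlite3` exposes the same DB-API interface:

```python
import sqlite3

# Print the version of the bundled SQLite library
print(sqlite3.sqlite_version)

# Round-trip a row through an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")
value = conn.execute("SELECT x FROM t").fetchone()[0]
conn.close()
print(value)  # 42
```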
This section provides instructions on setting up the project. Please turn off your firewall and use administrator privileges on the command line.
Clone this repository using the command line:

```shell
git clone https://github.com/Tonic-AI/DataTonic.git
```
Edit 'OAI_CONFIG_LIST', filling in:

"api_key": "your OpenAI key goes here",

and

"api_key": "your Google GenAI key goes here",
1. Modify line 135 in autogen_module.py:
```python
os.environ['OPENAI_API_KEY'] = 'Your key here'
```
2. Modify .env.example:
```shell
OPENAI_API_KEY="your_key_here"
```
Save it as '.env' (this creates a new file)
**or**
rename the existing file to '.env'.
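If you prefer not to add a dependency such as python-dotenv, a minimal stdlib parser can load such a file into the environment. This is an illustrative sketch of the .env format above, not how DataTonic itself loads it:

```python
import os

def load_env(path: str = ".env") -> None:
    """Load simple KEY=VALUE lines from a dotenv-style file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')
```

Libraries like python-dotenv handle more edge cases (export prefixes, multiline values); this only covers the simple format shown above.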
3. Modify src\tonicweaver\taskweaver_config.json:
```json
{
"llm.api_base": "https://api.openai.com/v1",
"llm.api_key": "",
"llm.model": "gpt-4-1106-preview"
}
```
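After editing, a quick stdlib-only check (an illustrative helper, not part of DataTonic) confirms that the config still parses and that the API key was filled in:

```python
import json

def check_taskweaver_config(path: str) -> bool:
    """Return True if the config parses and llm.api_key is non-empty."""
    with open(path) as f:
        cfg = json.load(f)
    required = {"llm.api_base", "llm.api_key", "llm.model"}
    return required <= cfg.keys() and bool(cfg["llm.api_key"])
```

A trailing comma or missing quote in the JSON will raise `json.JSONDecodeError` here rather than failing later at runtime.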
4. Edit ./src/semantic_kernel/semantic_kernel_module.py, replacing the placeholders with your Google API key and Search Engine ID created above:
line 64: semantic_kernel_data_module = SemanticKernelDataModule('<google_api_key>', '<google_search_engine_id>')
and
line 158: semantic_kernel_data_module = SemanticKernelDataModule('<google_api_key>', '<google_search_engine_id>')
src/semantic_kernel/googleconnector.py
From the project directory:

```shell
cd ./src/tonicweaver
git clone https://github.com/microsoft/TaskWeaver.git
cd TaskWeaver
# install the requirements
pip install -r requirements.txt
```
Then, from the project directory:

```shell
pip install -r requirements.txt
python app.py
```
We welcome contributions from the community! Whether you're opening a bug report, suggesting a new feature, or submitting a pull request, every contribution is valuable to us. Please follow these guidelines to contribute to DataTonic.
Before you begin, ensure you have the latest version of the main branch:
```shell
git checkout main
git pull origin main
```
Then, create a new branch for your contribution:
```shell
git checkout -b <your-branch-name>
```
If you encounter any bugs, please file an issue on our GitHub repository. Include as much detail as possible:
We are always looking for suggestions to improve DataTonic. If you have an idea, please open an issue with the tag 'enhancement'. Provide:
If you'd like to contribute code, please follow these steps:
Follow the setup instructions in the README to get DataTonic running on your local machine.
Ensure that your changes adhere to the existing code structure and standards. Add or update tests as necessary.
Write clear and meaningful commit messages. This helps reviewers understand the purpose of your changes and speeds up the review process.
```shell
git commit -m "A brief description of the commit"
```
Push your changes to your remote branch:
```shell
git push origin <your-branch-name>
```
Go to the repository on GitHub and open a new pull request against the main branch. Provide a clear description of the problem you're solving. Link any relevant issues.
Maintainers will review your pull request. Be responsive to feedback to ensure a smooth process.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project, you agree to abide by its terms.
By contributing to DataTonic, you agree that your contributions will be licensed under its LICENSE.
Thank you for contributing to DataTonic!🚀