# Async AI Chat Application Powered by gpt4free on top of FastAPI
The application contains two generic entity models, `Interaction` and `Message`. All data is stored in a PostgreSQL database; database access is handled with the SQLAlchemy library, and simple GET and POST endpoints are exposed via an API written with the FastAPI framework.
To manage dependencies, we use Poetry.
To launch an API instance, run:

```shell
poetry install
```

You can also run the project via Docker Compose (i.e. `docker compose up -d`) on port 80. For that you need a `.docker.env` file containing the following variable to create the database:

```
SQLALCHEMY_DATABASE_URI=postgresql+asyncpg://<username>:<password>@ifsguid_db/<db-name>
```
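For concreteness, a filled-in `.docker.env` might look like this; the username, password, and database name are placeholder values, not real credentials:

```
# Hypothetical example values -- substitute your own credentials and db name
SQLALCHEMY_DATABASE_URI=postgresql+asyncpg://chat_user:chat_password@ifsguid_db/chat_db
```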
Here is a benchmark of the API using wrk, demonstrating the performance of the service in different configurations:
| Service | Loading Strategy | WRK Configuration | Throughput (reqs/sec) |
|---|---|---|---|
| Async | Joined | 4 threads, 10 conns | 132 |
| Async | Selectin | 4 threads, 10 conns | 112 |
| Sync | Lazy | 4 threads, 10 conns | 36 |
| Sync | Joined | 4 threads, 10 conns | 132 |
| Sync | Selectin | 4 threads, 10 conns | 114 |
| Async | Joined | 4 threads, 50 conns | 159 |
| Sync | Joined | 4 threads, 50 conns | 1 |
| Async | Joined | 4 threads, 15 conns | 126 |
| Sync | Joined | 4 threads, 15 conns | 69 |