ChatdollKit enables you to make your 3D model into a chatbot
3D virtual assistant SDK that enables you to make your 3D model into a voice-enabled chatbot. 🇯🇵 The Japanese README is here.
- 3D Model
- Generative AI
- Dialog
- I/O
- Platforms

... and more! See the ChatdollKit Documentation for details.
You can learn how to set up ChatdollKit by watching this video, which runs the demo scene (including chat with ChatGPT): https://www.youtube.com/watch?v=rRtm18QSJtc
Download the latest version of ChatdollKit.unitypackage and import it into your Unity project after importing the dependencies:

- Burst

from the Unity Package Manager (Window > Package Manager).

Add a 3D model to the scene and adjust it as you like. Also install the resources required by the 3D model, such as shaders. In this README, I use Cygnet-chan, which you can purchase at Booth: https://booth.pm/ja/items/1870320
Then import animation clips. In this README, I use Anime Girls Idle Animations Free. I believe the pro edition is worth purchasing.
Put `ChatdollKit/Prefabs/ChatdollKit` or `ChatdollKit/Prefabs/ChatdollKitVRM` into the scene. Also create an EventSystem to use the UI components.
Select `Setup ModelController` in the context menu of ModelController. If the model is NOT VRM, make sure that the shape key for blinking is set to `Blink Blend Shape Name` after setup. If it is wrong or blank, set it manually.
Select `Setup Animator` in the context menu of ModelController and select the folder that contains the animation clips, or their parent folder. In this case, put the animation clips in `01_Idles` and `03_Others` onto the `Base Layer` for override blending, and those in `02_Layers` onto the `Additive Layer` for additive blending.
Next, open the `Base Layer` of the newly created AnimatorController in the folder you selected. Confirm the condition value of the transition to the state you want to use as the idle animation. Lastly, set that value to `Idle Animation Value` on the inspector of ModelController.
On the inspector of DialogController, set `Wake Word` to start a conversation (e.g. hello / こんにちは 🇯🇵), `Cancel Word` to stop a conversation (e.g. end / おしまい 🇯🇵), and `Prompt Voice` to ask the user for a voice request (e.g. what's up? / どうしたの？🇯🇵).
Select the speech service (OpenAI/Azure/Google/Watson) you want to use, and set the API key and some properties such as Region and BaseUrl on the inspector of ChatdollKit.
Attach `Examples/Echo/Skills/EchoSkill` to `ChatdollKit`. This skill simply echoes what you say. Or, if you want to enjoy conversation with an AI, attach the `ChatGPTService` component and set your OpenAI API key on its inspector.
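Conceptually, an echo skill just turns the recognized request text into the response text unchanged. A minimal, language-agnostic sketch of that idea (plain Python here for illustration, not ChatdollKit's actual C# Skill API):

```python
# Minimal sketch of an "echo" skill: the response is just the request text.
# This only illustrates the concept; real ChatdollKit skills are C# components.

def echo_skill(request_text: str) -> str:
    """Return the user's utterance unchanged as the assistant's reply."""
    return request_text

print(echo_skill("hello"))  # -> hello
```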
Select `Setup VRC FaceExpression Proxy` in the context menu of VRCFaceExpressionProxy. Neutral, Joy, Angry, Sorrow, and Fun face expressions with all values set to zero, and a Blink face with the blink blend shape set to 100.0f, are created automatically.
You can edit the shape keys by editing the Face Clip Configuration directly, or by capturing them on the inspector of VRCFaceExpressionProxy.
Press the Play button in the Unity editor. You will see the model start with the idle animation and blinking.
- `Wake Word` on the inspector (e.g. hello / こんにちは 🇯🇵)
- `Prompt Voice` on the inspector (e.g. what's up? / どうしたの？🇯🇵)

To use the Azure OpenAI Service, set the following on the inspector of the ChatGPTService component:
- `Chat Completion Url` format: `https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}`
- API key to `Api Key`
- Set `Is Azure` to true

NOTE: `Model` on the inspector is ignored; the deployment (engine) in the URL is used.
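As an illustration, the URL template above can be assembled from its parts like this (the resource name, deployment id, and API version below are hypothetical placeholders, not values from this project):

```python
# Build an Azure OpenAI chat completion URL from its parts.
# Resource name, deployment id, and api version here are hypothetical examples.

def azure_chat_completion_url(resource: str, deployment: str, api_version: str) -> str:
    return (
        f"https://{resource}.openai.azure.com"
        f"/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

print(azure_chat_completion_url("my-resource", "my-gpt-deployment", "2024-02-01"))
```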
See the `MultiSkills` example. It is a richer application that includes:

- `Router`: an example of how to decide which topic the user wants to talk about
- `TranslateDialog`: an example that shows how to process dialog

We are now preparing content for building richer virtual assistants with ChatdollKit.
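To illustrate the routing idea in general terms (this is a generic sketch of topic routing, not ChatdollKit's actual `Router` API): a router inspects the request text and decides which topic, and therefore which skill, should handle it.

```python
# Generic sketch of topic routing: match keywords in the request text
# to decide which topic/skill handles it. Not ChatdollKit's Router API;
# the topics and keywords below are made-up examples.

TOPIC_KEYWORDS = {
    "translate": ["translate", "translation"],
    "weather": ["weather", "forecast"],
}

def route(request_text: str, default: str = "chat") -> str:
    """Return the first topic whose keyword appears in the request."""
    text = request_text.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return topic
    return default  # fall back to free chat when nothing matches

print(route("Please translate this into Japanese"))  # -> translate
print(route("How are you?"))                         # -> chat
```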
For now, refer to the following tips. We are preparing a WebGL demo.
To use dlopen, you need to use Emscripten's linking support; see https://github.com/kripken/emscripten/wiki/Linking
- Don't use thread-based asynchronous operations (`await`) because JavaScript doesn't support threading; use UniTask instead.
- Use `ChatdollMicrophone`, which is compatible with WebGL.