Inochi2D is a free and open-source 2D animation framework similar to Live2D, currently in beta. It can already be used for VTuber activities through its official streaming app, and a discussion about implementing the framework in Ren'Py has already been started by both the Inochi2D and Ren'Py developers. You can learn more about the project here, follow their Twitter account here, or join their Discord community here.
Please note that Mannequin is not affiliated with Inochi2D. If you want to support Inochi2D for the betterment of both the game development and VTuber industries as a whole, we strongly recommend donating via their Patreon or GitHub Sponsors, or contributing code and translations to the GitHub repository.
For VTuber livestreaming, use their official livestreaming app, Inochi Session, which can be downloaded from itch.io or GitHub. We strongly recommend using the nightly builds of Inochi Session if you plan to use characters exported by Mannequin. These nightly builds can be downloaded from here.
Both Inochi2D and the Inochi2D export feature in Mannequin are still in beta. Back up your character often (especially your exported .inx file) and expect a lot of changes to happen quickly as both Inochi2D and Mannequin develop further!
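Since backups matter this much during the beta, it can help to script them. Below is a minimal sketch (not part of Mannequin; the function name and backup folder are our own choices) that copies an .inx file into a backup folder under a timestamped name:

```python
import shutil
import time
from pathlib import Path

def backup_inx(path, backup_dir="inx-backups"):
    """Copy an exported .inx file into backup_dir under a timestamped name."""
    src = Path(path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 also preserves the original's timestamps
    return dest
```

Run it right after each export, e.g. `backup_inx("mycharacter.inx")`, so every export session leaves a dated copy behind.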
To use the Inochi2D export feature in Mannequin, simply choose the Inochi Creator (.inx) format option when exporting your character. This will create an Inochi Creator file, which contains layered images ready for rigging. Below is an example of an Inochi2D export result when opened in Inochi Creator.
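If you want to sanity-check an exported file programmatically, the Inochi2D container format is documented to begin with the 8-byte magic TRNSRTS\0. The helper below assumes that magic also applies to the .inx files Mannequin exports; verify this against the format spec version you are targeting before relying on it:

```python
# Assumption: both .inp and .inx use the Inochi2D container magic
# "TRNSRTS\0" as documented in the format spec.
INOCHI2D_MAGIC = b"TRNSRTS\x00"

def looks_like_inochi2d_file(path):
    """Return True if the file starts with the Inochi2D container magic."""
    with open(path, "rb") as f:
        return f.read(len(INOCHI2D_MAGIC)) == INOCHI2D_MAGIC
```

A check like this can catch truncated or half-written exports before you overwrite a good backup with them.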
As you can see, not only are the various elements separated into different layers (nodes), they are also structured optimally for easier rigging. For example, limbs are positioned as child nodes of the torso, and the eyebrows/eyes/mouth/nose are positioned as child nodes of the head.
Instead of just exporting a static character with layered images, you can also export pre-rigged characters with head turn, eye gaze, blinking, and lip sync already set up. Templates which are pre-rigged for Inochi2D are marked with the Inochi Creator icon.
You need to use these marked templates for the following aspects of your character in order to generate a fully rigged output:
- Hair (every hair part, from Primary Hairstyle and Bangs to Additional Hair Parts)
- Face Colors
- Clothing (optional; required for clothing templates that are worn on the head)
After making sure you have chosen proper templates for the aspects mentioned above, simply choose the Inochi Creator (.inx) format option when exporting your character. Your exported character will now have its parameters already generated when opened in Inochi Creator.
In order to use your pre-rigged character for VTuber livestreaming, first you'll need:
- A laptop webcam or USB webcam
- The latest nightly build of Inochi Session, which can be downloaded from here (this is temporary; once Inochi Session 0.5.5 is out, you can use that version instead).
- The latest version of OpenSeeFace which can be downloaded from here.
- The latest version of Open Broadcaster Software (OBS) which can be downloaded from here.
- The latest version of OBS Spout2 Plugin which can be downloaded from here.
To set up OpenSeeFace tracking in Inochi Session, configure the virtual space by opening View -> Virtual Space. In the Virtual Space window that shows up, enter your desired virtual space name in the input field ("Default", for example), then click the + button to create a new virtual space.
After creating the virtual space, the next step is to add OpenSeeFace as a tracker. To do this, select the virtual space that you've just created; on the right side of the Virtual Space window you will see another + button. Click this button and choose OpenSeeFace in the drop-down menu that shows up. After that, click Save Changes and close the window.
After setting up OpenSeeFace in Inochi Session, run the OpenSeeFace tracker so it can start capturing expressions from your webcam and sending the data to Inochi Session. To do this, first extract the ZIP file that you got from the GitHub link above. Open the extracted folder, then open the Binary subfolder. Inside, you'll find run.bat. If you want to do some set-up first (choosing framerate or resolution, or if you have multiple webcams connected to your system), open run.bat. Otherwise, you can open facetracker.exe for instant use.
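The prompts in run.bat correspond to facetracker's command-line flags, so you can also launch the tracker directly with explicit options. The sketch below builds such a command line; the flag names match recent OpenSeeFace releases but may change, so confirm them with facetracker.exe --help (or python facetracker.py --help) for the build you downloaded. 11573 is the default port Inochi Session expects tracking data on:

```python
def facetracker_command(camera=0, width=1280, height=720, fps=30,
                        ip="127.0.0.1", port=11573):
    """Build an OpenSeeFace facetracker command line.

    Flag names are taken from recent OpenSeeFace releases; verify them
    against --help output for your build before use.
    """
    return ["facetracker", "-c", str(camera), "-W", str(width),
            "-H", str(height), "-F", str(fps),
            "--ip", ip, "--port", str(port)]

# Example: pass the resulting list to subprocess.run(...) to start the tracker.
```

This mirrors what run.bat asks interactively: camera index, resolution, and framerate, plus where to send the tracking data.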
For Linux, you can follow this guide.
To load your previously exported character into Inochi Session, just drag and drop the corresponding .inx file from your file manager/explorer onto the Inochi Session window. If OpenSeeFace has already been set up by following the steps above, your character should immediately move according to the face tracker.
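If the character does not move, one thing worth checking is whether OpenSeeFace is actually emitting tracking data. OpenSeeFace sends UDP packets to port 11573 by default; the sketch below (our own diagnostic, not part of either tool) listens on that port briefly. Close Inochi Session while running it, since only one program can bind the port at a time:

```python
import socket

def tracker_is_sending(port=11573, timeout=2.0):
    """Return True if any UDP packet arrives on the port within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("127.0.0.1", port))
        try:
            sock.recvfrom(65535)  # we only care that *something* arrived
            return True
        except socket.timeout:
            return False
```

If this returns False while facetracker is running, check the tracker's IP/port settings before debugging Inochi Session itself.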
To reposition your character, just click and drag across the Inochi Session window. To zoom in/out, click and hold your character, and use mouse scroll.
If you are experiencing a trailing-image glitch in Inochi Session, turning off Post Processing might help.
Now that your character is moving in Inochi Session according to the face tracker, the next step is displaying the output in OBS. First, make sure you have already installed the OBS Spout2 Plugin from the GitHub link above. Then, add a new source in OBS and choose the Spout2 capture source type. In the Properties window that shows up, use these values in the drop-down menu fields:
- In Spout Senders, choose Inochi Session (or, if you're not running another app which also uses Spout2, you can choose Use first available sender).
- In Composite Mode, choose the mode that preserves the alpha channel.
After that, click OK. Your character will now appear in the preview, with a transparent background, ready to be composited into your stream!
In order to improve performance in Inochi Session, consider doing these:
- Disable post-processing effects: open the Scene Settings panel via View -> Scene Settings and uncheck Post Processing.
- Resize the Inochi Session window to a smaller size.
- Load an .inp file instead of the .inx file generated by Mannequin. To generate an .inp file from your existing .inx file, open the corresponding file in Inochi Creator, then open File -> Export -> Inochi2D Puppet. In the Export Options window that shows up, click the Resolution drop-down menu and choose 4096x4096 for optimal results.
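To put the resolution choice in perspective, here is a back-of-the-envelope calculation (our own illustration, not an official figure): one uncompressed RGBA texture atlas at 4096x4096 occupies 64 MiB in memory, and the footprint shrinks quadratically at lower resolutions.

```python
def atlas_mib(width, height, bytes_per_pixel=4):
    """Approximate memory use of one uncompressed RGBA texture atlas, in MiB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

print(atlas_mib(4096, 4096))  # 64.0
print(atlas_mib(2048, 2048))  # 16.0
```

This is why a smaller export resolution can help on low-VRAM machines, at the cost of visible texture detail.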
If you want to customize Mannequin's exported .inx file further using Inochi Creator, you can learn more about how to use it here.