Exporting for Inochi2D

Inochi2D is a free and open source 2D animation framework similar to Live2D, currently in its beta phase. It can already be used for VTuber activities through their official streaming app, and a discussion about implementing the framework in Ren'Py has already been started by both the Inochi2D and Ren'Py developers. You can learn more about the project here, follow their Twitter account here, or join their Discord community here.

Please note that Mannequin is not affiliated with Inochi2D. If you want to support Inochi2D for the betterment of both the game development and VTuber industries as a whole, we strongly recommend donating via their Patreon or GitHub Sponsors, or contributing code and translations to the GitHub repository.

To edit Inochi2D animations, use their official animation editor app (Inochi Creator) which can be downloaded from itch.io or GitHub.

For VTuber livestreaming, use their official livestreaming app (Inochi Session), which can be downloaded from itch.io or GitHub. We strongly recommend using the nightly builds of Inochi Session if you plan to use characters exported by Mannequin; these nightly builds can be downloaded from here.

warning

Both Inochi2D and the Inochi2D export feature in Mannequin are still in beta. Back up your character often (especially your exported .inx file) and expect a lot of changes to happen quickly as both Inochi2D and Mannequin develop further!

Basic Inochi2D Export#

To use the Inochi2D export feature in Mannequin, simply choose the Inochi Creator (.inx) format option when exporting your character. This will create an Inochi Creator file, which contains layered images ready for rigging. Below is an example of an Inochi2D export opened in Inochi Creator.

Basic Inochi2D export opened with Inochi Creator.

As you can see, not only are the various elements separated into different layers (nodes), but they are also structured for easier rigging. For example, the limbs are positioned as child nodes of the torso, and the eyebrows/eyes/mouth/nose are positioned as child nodes of the head.
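If you're curious about what actually gets written, the node structure lives in a JSON payload inside the exported file. Below is a minimal Python sketch that prints the node tree. It assumes the container layout described in the Inochi2D file format specification (an 8-byte TRNSRTS\0 magic, a big-endian uint32 giving the JSON payload length, then the JSON itself), and the MyCharacter.inx file name is just a placeholder, so double-check against the spec version you're targeting.

```python
import json
import struct

# A minimal sketch for peeking at the node tree inside an exported file.
# The container layout (magic, length field, JSON payload) is an assumption
# based on the Inochi2D file format specification.
def read_puppet_json(path):
    with open(path, "rb") as f:
        if f.read(8) != b"TRNSRTS\0":
            raise ValueError("not an Inochi2D puppet/project file")
        (length,) = struct.unpack(">I", f.read(4))  # big-endian uint32
        return json.loads(f.read(length))

def print_tree(node, depth=0):
    # The "name"/"children" keys are assumptions based on the puppet JSON.
    print("  " * depth + node.get("name", "<unnamed>"))
    for child in node.get("children", []):
        print_tree(child, depth + 1)

puppet = read_puppet_json("MyCharacter.inx")  # hypothetical file name
print_tree(puppet["nodes"])  # root of the puppet's node tree
```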

Pre-rigged Inochi2D Export#

Instead of just exporting a static character with layered images, you can also export a pre-rigged character with head turn, eye gaze, blinking, and lip sync already set up. Templates which are pre-rigged for Inochi2D are marked with the Inochi Creator icon.

Eye template marked with the Inochi Creator icon.

Mouth template marked with the Inochi Creator icon.

You need to use these marked templates for the following aspects of your character in order to generate a fully rigged output:

  • Pose
  • Head
  • Ear
  • Hair (every hair part, from Primary Hairstyle and Bangs to Additional Hair Parts)
  • Nose
  • Brow
  • Eye
  • Mouth
  • Face Colors
  • Clothing (optional; required for clothing templates that are worn on the head)

After making sure you have chosen the proper templates for the aspects mentioned above, simply choose the Inochi Creator (.inx) format option when exporting your character. Your exported character will now have its parameters already generated when opened with Inochi Creator.

Pre-rigged Inochi2D export opened with Inochi Creator.
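As a quick, hedged sanity check, you can list those generated parameters straight from the exported file. The sketch below makes the same container-layout assumptions as the earlier one; the param key and its name field are assumptions based on the Inochi2D puppet JSON, and the file name is again a placeholder.

```python
import json
import struct

# Quick check that the pre-rigged parameters made it into the export.
# Same container-layout assumptions as the earlier sketch.
with open("MyCharacter.inx", "rb") as f:  # hypothetical file name
    if f.read(8) != b"TRNSRTS\0":
        raise ValueError("not an Inochi2D puppet/project file")
    (length,) = struct.unpack(">I", f.read(4))
    puppet = json.loads(f.read(length))

# "param" and "name" are assumed keys; a pre-rigged export should have
# entries covering head turn, eye gaze, blinking, and lip sync.
for param in puppet.get("param", []):
    print(param.get("name"))
```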

Using Pre-rigged Inochi2D Export for Livestreaming#

In order to use your pre-rigged character for VTuber livestreaming, you'll first need:

  • A webcam (built-in laptop camera or USB).
  • The latest nightly build of Inochi Session, which can be downloaded from here (this is temporary; once Inochi Session 0.5.5 is out, you can use that version instead).
  • The latest version of OpenSeeFace, which can be downloaded from here.
  • The latest version of Open Broadcaster Software (OBS), which can be downloaded from here.
  • The latest version of the OBS Spout2 Plugin, which can be downloaded from here.

Setting Up OpenSeeFace Tracking for Inochi Session#

To set up OpenSeeFace tracking in Inochi Session, first configure a virtual space by opening View -> Virtual Space. In the Virtual Space window that shows up, enter your desired virtual space name in the input field ("Default", for example), then click the + button to create the new virtual space.

After creating the virtual space, the next step is to add OpenSeeFace as a tracker. To do this, select the virtual space that you've just created; on the right side of the Virtual Space window you will see another + button. Click this button, choose OpenSeeFace in the drop-down menu that shows up, and fill in the following fields:

  • For osf_bind_port, enter 11573
  • For osf_bind_ip, enter 0.0.0.0

OSF setup in Inochi Session.

After that, click Save Changes and close the window.

Running OpenSeeFace#

Windows#

After setting up OpenSeeFace in Inochi Session, you should run the OpenSeeFace tracker so it can start capturing expressions from your webcam and sending the data to Inochi Session. To do this, first extract the ZIP file that you got from the GitHub link above. Open the extracted folder, then open the Binary subfolder. Inside, you'll find facetracker.exe and run.bat. If you want to do some setup first (choosing the framerate or resolution, or picking among multiple webcams connected to your system), open run.bat. Otherwise, you can open facetracker.exe for instant use.
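If you'd rather launch the tracker with explicit settings than answer run.bat's prompts each time, a small launcher script is one option. This is only a sketch: the path is a placeholder, and the flag names are assumptions you should verify against facetracker.exe --help for your OpenSeeFace version.

```python
import subprocess

# Sketch of launching OpenSeeFace with explicit settings. All flag names
# below are assumptions; confirm them with `facetracker.exe --help`.
subprocess.run([
    r"C:\path\to\OpenSeeFace\Binary\facetracker.exe",  # placeholder path
    "--capture", "0",     # webcam index; try 1, 2, ... if you have several
    "--width", "640",     # capture resolution
    "--height", "480",
    "--fps", "30",        # capture framerate
    "--ip", "127.0.0.1",  # where tracking data is sent (Inochi Session)
    "--port", "11573",    # must match the osf_bind_port set earlier
])
```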

Linux#

For Linux, you can follow this guide.
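Whichever OS you're on, you can confirm that tracking data is actually flowing before involving Inochi Session. The minimal sketch below binds the same UDP endpoint you configured for the virtual space (0.0.0.0:11573); run it while the tracker is running but before launching Inochi Session, since only one program can bind the port at a time.

```python
import socket

# Listen on the endpoint configured as osf_bind_ip/osf_bind_port and
# report incoming packets. A steady stream of non-empty packets means
# OpenSeeFace is tracking and sending data.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 11573))
print("Listening on 0.0.0.0:11573, press Ctrl+C to stop...")
while True:
    data, addr = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {addr}")
```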

Loading Your Character to Inochi Session#

To load your previously exported character into Inochi Session, just drag and drop the corresponding .inx file from your file manager/explorer onto the Inochi Session window. If OpenSeeFace has already been set up by following the steps above, your character should immediately move according to the face tracker.

Inochi Session ready to stream.

To reposition your character, just click and drag across the Inochi Session window. To zoom in/out, click and hold your character, then use the mouse scroll wheel.

warning

If you are experiencing a trailing-image glitch in Inochi Session, turning off Post Processing might help.

Sending Video Data from Inochi Session to OBS#

Now that your character is moving in Inochi Session according to the face tracker, the next step is getting the output into OBS. First, make sure you have already installed the OBS Spout2 Plugin from the GitHub link above. Then, add a new source in OBS and choose Spout2 Capture.

Spout2 Capture source Properties in OBS.

In the Properties window that shows up, use these values in the drop-down menu fields:

  • For Spout Senders, choose Inochi Session (or, if you're not running another app that also uses Spout2, you can choose Use first available sender)
  • For Composite Mode, choose Premultiplied Alpha

After that, click OK. Your character will now appear in the preview with a transparent background, ready to be composited into your stream!

Inochi Session and OBS ready to stream.

Additional Tips#

In order to improve performance in Inochi Session, consider the following:

  • Disable post-processing effects by opening the Scene Settings panel (View -> Scene Settings) and unchecking Post Processing.
  • Resize the Inochi Session window to a smaller size.
  • Use an .inp file instead of the .inx file generated by Mannequin. To generate an .inp file from your existing .inx file, open the file in Inochi Creator, then choose File -> Export -> Inochi2D Puppet. In the Export Options window that shows up, click the Resolution drop-down menu and choose 4096x4096 for optimal results.

If you want to customize Mannequin's exported .inx file further using Inochi Creator, you can learn more about how to use it here.