LMI Houdini Tools | Unlocking Octane & Render Network Capabilities with Andrey Lebrov
Tutorial and Q&A with Andrey Lebrov about Using LMI Houdini Tools on Render
Key Advantages:
Massive‑scene baking shrinks TB‑scale files into tens‑of‑GB ORBXs
Solaris/LOP integration means everything stays live and editable
Single‑click ORBX assembly replaces days of manual splitting and re‑merging
Tag‑based isolation supercharges lookdev with zero node juggling
We maintain the integrity of the scene — nothing is reduced, compressed, or compromised. The key to achieving smaller file sizes lies in the automatic frame-splitting system. Instead of baking entire animation sequences as one continuous file, we divide them into smaller segments.
This results in manageable bakeouts that align with the optimal file size guidelines set by the Render Network. Currently, the most efficient rendering performance is achieved when uploaded scenes are kept within the 20-30 GB range.
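As a rough sketch of the idea (segment size and file naming here are illustrative, not the tool's actual defaults), splitting a shot's frame range could look like this in Python:

```python
# Minimal sketch: divide a shot's frame range into fixed-size splits before baking.
# Segment size and naming below are illustrative, not the tool's actual defaults.

def split_frame_range(start, end, frames_per_split=20):
    """Return (split_start, split_end) pairs covering the full range."""
    splits = []
    f = start
    while f <= end:
        splits.append((f, min(f + frames_per_split - 1, end)))
        f += frames_per_split
    return splits

# A 240-frame shot baked in 20-frame chunks yields 12 smaller ORBX files.
for i, (a, b) in enumerate(split_frame_range(1, 240)):
    print(f"shotA_seg{i:03d}_f{a:04d}-{b:04d}.orbx")
```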
The new Octane Shot Manager for Solaris functions as a standalone utility designed to streamline the ORBX workflow. At its core, it leverages a continuously evolving Lua script developed by Padi Frigg. This tool empowers users with a clean, intuitive interface that simplifies the ORBX merging process—just select the desired files, hit "Assemble ORBX", and a Render Network–ready, fully merged ORBX will be generated in your destination folder. Frame-splits are recognised automatically by naming patterns assigned at the baking stage.
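To illustrate how splits can be grouped back together by name, here is a small Python sketch using a hypothetical naming pattern; the actual convention assigned at the baking stage may differ:

```python
import re
from collections import defaultdict

# Hypothetical pattern such as "shotA_seg003_f0041-0060.orbx"; the real naming
# convention assigned at the baking stage may differ.
SPLIT_RE = re.compile(r"^(?P<shot>.+)_seg(?P<seg>\d+)_f(?P<start>\d+)-(?P<end>\d+)\.orbx$")

def group_splits(filenames):
    """Group frame-split ORBX files by shot, ordered by segment index."""
    shots = defaultdict(list)
    for name in filenames:
        m = SPLIT_RE.match(name)
        if m:
            shots[m["shot"]].append((int(m["seg"]), name))
    return {shot: [n for _, n in sorted(parts)] for shot, parts in shots.items()}

print(group_splits(["shotA_seg001_f0021-0040.orbx", "shotA_seg000_f0001-0020.orbx"]))
```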
Beyond merging, we're actively expanding the Shot Manager's capabilities. Upcoming features include more granular control over render-target assignments, letting users manage them from within the tool, change settings on the fly, and change or add cameras. We're also preparing to roll out support for Cinema 4D and Blender, extending the Shot Manager's utility across multiple DCCs.
Anywhere from 10 to 25 frames works well. It's important to maintain balance, and the right split size will depend on the complexity of your scene and animations. Currently, the tool enforces a minimum split of 5 frames, so users won't be able to divide scenes into smaller segments than that.
Solaris runs on a Hydra-based architecture, which is a core part of Pixar’s USD workflow. It works with any render delegate that supports Hydra, making it super flexible. Houdini isn’t the only player in this space—Maya, Blender, NVIDIA Omniverse, and Unreal Engine all tap into Hydra—but Houdini definitely stands out in terms of performance and execution. It’s just next level. And we can code and build incredible things in it.
Since Solaris is built on USD, it gets all the benefits that come with it. USD was designed from the ground up to handle massive, complex 3D scenes, so you get insane control over instancing, variations, and referencing. We’re constantly working to bring all that power into our tools—and that’s precisely what makes these new, more complex Octane scenes possible.
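For readers newer to USD, the snippet below is a minimal pxr-Python sketch of the referencing and instancing mentioned above; the asset path and prim names are placeholders:

```python
from pxr import Usd, UsdGeom, Gf

# Sketch of USD referencing + instancing; the referenced asset path is a placeholder.
stage = Usd.Stage.CreateInMemory()

for i in range(3):
    prim = stage.DefinePrim(f"/World/Building_{i}", "Xform")
    prim.GetReferences().AddReference("assets/building.usd")   # shared source asset
    prim.SetInstanceable(True)                                  # rendered as instances
    UsdGeom.XformCommonAPI(prim).SetTranslate(Gf.Vec3d(i * 25.0, 0.0, 0.0))

print(stage.GetRootLayer().ExportToString())
```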
Right now, the tool bakes out ORBX only—but we’re already rolling out a new mode called “USD + ORBX Sampling.” This lets you bake everything into USD, with all the flexibility and power that comes with it, and automatically generate a 1-frame ORBX that includes all your Octane materials. The tool then assembles the USD files, samples the Octane materials from the ORBX, and assigns them correctly to the USD—seamlessly bridging the two.
Once native support for MaterialX (MtlX) lands, this workflow will still stick around. We want to keep the option open for anyone who prefers Octane Materials over MtlX. In that case, MtlX materials will bake directly into the USD, while Octane users can still enjoy the same sampled-material pipeline.
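To make the "assigns them correctly to the USD" step a bit more tangible, here is a schematic pxr-Python sketch of authoring a material binding in USD; the material here stands in for a sampled Octane material and this is not the tool's actual sampling code:

```python
from pxr import Usd, UsdGeom, UsdShade

# Schematic only: bind a (sampled) material to a mesh in USD.
stage = Usd.Stage.CreateInMemory()
mesh = UsdGeom.Mesh.Define(stage, "/Asset/Geo/Body")
material = UsdShade.Material.Define(stage, "/Asset/Materials/Skin")  # stand-in for a sampled Octane material

UsdShade.MaterialBindingAPI.Apply(mesh.GetPrim()).Bind(material)
print(stage.GetRootLayer().ExportToString())
```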
Think of it like this—SOPs (Surface Operators) are where you build everything. That includes modeling, simulations, animations, layout—you name it. It's the creative sandbox.
LOPs (Lighting Operators), a.k.a. Solaris, are where you assemble, light, and lookdev your scenes. It's the stage where all your assets come together. Most of what you do in LOPs references stuff built in SOPs—unless you're working directly with USD files from disk, in which case you can skip SOPs entirely.
That’s really it. SOPs build it, LOPs lookdev it.
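In Houdini Python terms, that handoff looks roughly like the sketch below; node and parameter names follow stock Houdini and may vary slightly between versions:

```python
# Runs inside a Houdini session; a rough sketch of the SOPs -> LOPs handoff.
# Node/parameter names may vary slightly between Houdini versions.
import hou

# SOPs: the creative sandbox where the asset is built.
geo = hou.node("/obj").createNode("geo", "my_asset")
sphere = geo.createNode("sphere")

# LOPs (Solaris): reference that SOP geometry onto the USD stage for lookdev.
stage = hou.node("/stage")
sop_import = stage.createNode("sopimport", "import_my_asset")
sop_import.parm("soppath").set(sphere.path())
```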
Yes, all the tools are packaged as HDAs. They’ll ship with a pre-configured .json package, and likely a small prerequisite script that adds a few lines to the user’s environment—nothing complicated. We’re aiming for a smooth setup experience: just a couple of clicks, and you’re ready to go. No hurdles, no mess—just plug and play.
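For context, a Houdini package is just a small JSON file; the sketch below writes a hypothetical one with placeholder names, paths, and Houdini version, and is not the actual installer script:

```python
import json
import os

# Hypothetical example of a Houdini "packages" .json a plug-and-play setup
# might ship with. "env" and "path" are standard Houdini package keywords;
# the variable name, paths, and Houdini version are placeholders.
package = {
    "env": [{"LMI_TOOLS": "$HOME/lmi_houdini_tools"}],
    "path": "$LMI_TOOLS",   # appended to HOUDINI_PATH so the HDAs are found
}

packages_dir = os.path.expanduser("~/houdini20.5/packages")  # version-specific placeholder
os.makedirs(packages_dir, exist_ok=True)
with open(os.path.join(packages_dir, "lmi_tools.json"), "w") as f:
    json.dump(package, f, indent=2)
```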
In Houdini, it’s actually pretty hard to not know when something has changed—but you’re right, edge cases can slip through. At the moment, there’s no built-in change detection in the Octane Shot Manager, since Solaris references data rather than storing it directly. That’s something we’re definitely looking into.
That said, the whole idea behind this workflow is to approach large scenes brick by brick. You’re working on one element at a time—baking things down step-by-step. This gives you clear visibility over each piece, and if something breaks or needs tweaking, you don’t have to redo the entire scene—just the specific tag or chunk that’s affected.
So yeah, no auto-detection yet, but the modular approach already minimizes rework. And that’s a great suggestion—definitely on our radar now.
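Purely to illustrate the suggestion (this is not a shipped feature), change detection could be approximated by fingerprinting each chunk's source files and comparing on the next run:

```python
import hashlib
import json
import os

# Illustration only, not a shipped feature: fingerprint each baked chunk's
# source files so a later run can tell which tags/chunks need re-baking.

def fingerprint(paths):
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(p.encode())
        h.update(str(os.path.getmtime(p)).encode())
        h.update(str(os.path.getsize(p)).encode())
    return h.hexdigest()

def chunks_needing_rebake(chunks, cache_file="bake_cache.json"):
    """chunks: {tag: [source file paths]} -> tags whose inputs changed since last run."""
    cache = {}
    if os.path.exists(cache_file):
        with open(cache_file) as f:
            cache = json.load(f)
    current = {tag: fingerprint(files) for tag, files in chunks.items()}
    with open(cache_file, "w") as f:
        json.dump(current, f, indent=2)
    return [tag for tag, digest in current.items() if cache.get(tag) != digest]
```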
Right now, there’s a basic bake tracker that logs each bakeout directly in the console—including how long it took to process. But currently, that only covers the Houdini side of things. Once it hands off to Octane Standalone, we lose visibility—so no deep integration or detailed logging from that side yet.
Same goes for resource tracking: at the moment, we’re relying on external system monitors for checking RAM/VRAM usage. There’s nothing built into the tool (yet) for that kind of profiling.
With that being said, RenderCon was a fantastic opportunity to connect with the people behind the scenes—and let’s just say, there’s a lot of potential for making this more powerful down the line. No promises yet, but we’re optimistic about where this could go.
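As a trivial stand-in for the console bake tracker mentioned above (not the tool's actual logger), per-bakeout timing can be captured like this:

```python
import time
from contextlib import contextmanager

# Stand-in sketch, not the tool's actual tracker: print start/finish and
# elapsed time for each bakeout to the console.
@contextmanager
def bake_timer(label):
    t0 = time.time()
    print(f"[bake] {label}: started")
    try:
        yield
    finally:
        print(f"[bake] {label}: finished in {time.time() - t0:.1f}s")

with bake_timer("shotA_seg003"):
    time.sleep(0.1)  # placeholder for the actual bakeout call
```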
Do you embed the full UDIM tile set and complete shader definitions in each ORBX, or reference a shared library?
Yes, each ORBX maintains full shader integrity, including the entire UDIM tile set and complete shader definitions. Even if you're only baking a specific part—like just the arm or the head of a creature—the system still samples the full shader. So partial geometry doesn’t mean partial materials. Everything is embedded to ensure consistency and avoid any broken links or missing data down the line.
This approach guarantees that no matter how you split your bakes, the look remains intact—just as it should.
How do you avoid duplicating large texture files or losing tile‑coordinate integrity across ORBX boundaries?
Currently, there’s no deduplication happening at the assembly stage. That’s just a limitation of Octane Standalone—it’s missing some of the smarter handling that Render Network already has. Render Network treats files as hash codes, so there’s no fooling it. If something’s duplicated, it knows, and it only keeps one copy.
But with Standalone, and by extension any ORBX workflow, there's no asset hash filtering—and we don't have a way to force it during the baking process either. So yeah, duplicates can happen. This is being looked at, and I'm sure we'll soon have comprehensive filtering from A to Z.
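For anyone who wants to catch duplicates before upload, the same content-hash idea Render Network relies on can be applied locally with an external helper; this sketch is not part of the baking or assembly process:

```python
import hashlib
import os

# External helper sketch: detect duplicate texture files by content hash,
# similar in spirit to how Render Network identifies files. Not part of the
# current ORBX baking or assembly process.
def file_hash(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

def find_duplicates(texture_dir):
    seen, duplicates = {}, []
    for root, _, files in os.walk(texture_dir):
        for name in files:
            path = os.path.join(root, name)
            digest = file_hash(path)
            if digest in seen:
                duplicates.append((path, seen[digest]))
            else:
                seen[digest] = path
    return duplicates
```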
What metadata or conventions does your stitcher use to reconcile UDIM ranges and MaterialX node trees so the final assembled ORBX renders seamlessly?
Nothing like that is required. At the core of USD, you have to explicitly declare your geometry, its parts, and shader assignments—basically, you're coding it in, unlike DCCs like Blender or C4D, where things are more intuitive and drag-and-drop. We do streamline this to some extent with our complementary SOP tools—like the Scatter Tool and Layout Tool—and we're actively exploring ways to automate these declarations. That's especially important for our 'USD + ORBX Sampling' mode, where proper declarations are key. Without them, shaders won't correctly hook into USD inputs.
That said, once everything is wired up, the shaders and textures bake out cleanly and completely — they are not divided. Our test scenes include multiple UDIM sets, and they've all held up perfectly. So while there's no magic pill for deduplication, we're working on ways to make the shader-assignment process smoother and to automate the optimisation.
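To show what explicitly declaring geometry, its parts, and shader assignments means in practice, here is a schematic pxr-Python example that authors a face subset (a "part") and binds a material to it; the names and face indices are placeholders:

```python
from pxr import Usd, UsdGeom, UsdShade, Vt

# Schematic only: declare a mesh, a face subset ("part"), and a shader
# assignment explicitly, the way USD expects. Names and indices are placeholders.
stage = Usd.Stage.CreateInMemory()
mesh = UsdGeom.Mesh.Define(stage, "/Creature/Geo/Body")

# Declare a "part" of the geometry as a face subset in the material-bind family.
arm = UsdGeom.Subset.CreateGeomSubset(
    mesh, "arm", UsdGeom.Tokens.face, Vt.IntArray([0, 1, 2, 3]),
    UsdShade.Tokens.materialBind, UsdGeom.Tokens.partition)

# Explicit shader assignment for that part.
skin = UsdShade.Material.Define(stage, "/Creature/Materials/ArmSkin")
UsdShade.MaterialBindingAPI.Apply(arm.GetPrim()).Bind(skin)
print(stage.GetRootLayer().ExportToString())
```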