Tuesday, October 23, 2012

Multithreading dependency graphs... (LibEE Paper)

Woah! I've just finished reading through the paper DreamWorks published at SIGGRAPH this year about their fancy new multi-threaded dependency graph.


Some seriously cool stuff here, with some interesting insights that we should take into account when working on our own revamp. Having said that, even setting multi-threading aside (it's still an area I'm not that familiar or comfortable with yet... and it'd certainly be worth having some multithreading gurus around if/when we go down this path), we still have a lot of design issues to tackle to even get a basic system that addresses most of the problems we're currently facing with our old antiquated beast.


One thing that bothers me a bit about their approach is that they had to restrict everything to C++ nodes to get "optimal" performance. Perhaps it's just my unadulterated hatred of C++ and the syntactic abominations it's been inflicting on the world since its release, but surely we should be able to build some kind of efficient down-compiler, or even a VM-type approach, that allows nicer syntax while still getting "reasonable" performance.
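To make that a bit more concrete, here's the sort of thing I have in mind: a purely hypothetical sketch (not from the paper, and all the names and opcodes are made up) where node logic gets compiled down to a handful of opcodes and run by a tiny dispatch loop. The loop itself is plain C++, and since it only touches its own stack and inputs, it should be straightforward to run from multiple threads.

```cpp
// Hypothetical sketch: a tiny stack-based VM for evaluating a node's logic,
// so nodes could be authored in a nicer language and compiled down to a few
// opcodes instead of being hand-written in C++.
#include <cstdio>
#include <vector>

enum class Op { PushConst, PushInput, Add, Mul };

struct Instr { Op op; float value; };  // 'value' doubles as the input index

float eval_node(const std::vector<Instr> &code, const std::vector<float> &inputs)
{
    std::vector<float> stack;
    for (const Instr &i : code) {
        switch (i.op) {
            case Op::PushConst: stack.push_back(i.value); break;
            case Op::PushInput: stack.push_back(inputs[(int)i.value]); break;
            case Op::Add: { float b = stack.back(); stack.pop_back();
                            stack.back() += b; break; }
            case Op::Mul: { float b = stack.back(); stack.pop_back();
                            stack.back() *= b; break; }
        }
    }
    return stack.back();
}

int main()
{
    // result = input[0] * 2.0 + input[1]
    std::vector<Instr> code = {
        {Op::PushInput, 0}, {Op::PushConst, 2.0f}, {Op::Mul},
        {Op::PushInput, 1}, {Op::Add},
    };
    std::printf("%f\n", eval_node(code, {3.0f, 1.0f}));  // prints 7.000000
    return 0;
}
```

Obviously a real system would need far more opcodes (and probably a JIT if we cared about squeezing out the last bit of speed), but it suggests that "not hand-written C++ per node" doesn't automatically have to mean "slow interpreted scripting".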

I also wonder how much leverage we might be able to get out of the "non-editable" assumption they state. What we do now doesn't actually provide that much "editable graph" functionality anyway (i.e. every time we make dependency changes, we just do a full rebuild). So perhaps we'll be able to do some optimisations there...

Anyways, at least it's good to know that someone out there is still working on these problems (AFAIK, there's fairly scant information published about the intricacies of building these sorts of systems). Seeing a modern approach to this problem is certainly quite enlightening, and inspiring too.

9 comments:

  1. Seeing you study these things gives me chills. From me, I can only say: RAW SPEED PLIZ!

  2. Thumbs up for you digging into this!

  3. I've never understood the multithreading issue in the viewport. 3ds Max and Maya said they use it in their latest versions. But I don't see why you couldn't simply assign one object per processor; it wouldn't be fully multithreaded, but it would still be very useful. Another idea would be to give a frame to each processor, so when you hit play it will compute the next frames in advance and playback will be smoother. I'm not a C developer, but I don't see the difficulty there.
    Anyway, it's very good news that the dependency graph will be rewritten, and good news that you're thinking about multithreading.

    Replies
    1. That is because you don't begin to understand the concept of *dependency*.

    2. A frame per processor? Hmm, so the frame-rates now go down with longer scenes?

  4. Really nice paper. One interesting sentence: "This dirty state propagates through the graph based on static dirty rules which define the dependency relationship between attributes within a node."

    I think this is where a lot of design work has to be done for Blender (building a suitable threading engine is not exactly boilerplate coding either, but the right decisions there are easier to find, I guess).

    With our current depgraph we only have object <-> object data dependencies, and accordingly just 4 possible "dirty rules". This overly simplistic model seems to be responsible for a lot of the unresolvable dependency cycles.
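    To illustrate what that could look like on our side, here's a minimal sketch (my own illustration, not code from the paper and not existing Blender code) of per-attribute dirty rules: each node type declares which outputs get dirtied when a particular input changes, rather than everything hanging off a handful of object-level flags.

    ```cpp
    // Minimal sketch: per node type, a static table describing which output
    // attributes are dirtied by a change to a given input attribute.
    #include <map>
    #include <set>
    #include <string>

    struct DirtyRules {
        // input attribute name -> output attributes it dirties
        std::map<std::string, std::set<std::string>> rules;

        std::set<std::string> outputs_dirtied_by(const std::string &input) const
        {
            auto it = rules.find(input);
            return (it != rules.end()) ? it->second : std::set<std::string>{};
        }
    };

    // e.g. an object-transform node: editing "location" only dirties the world
    // matrix, while re-parenting dirties both the matrix and the hierarchy.
    static DirtyRules object_transform_rules()
    {
        DirtyRules r;
        r.rules["location"] = {"world_matrix"};
        r.rules["rotation"] = {"world_matrix"};
        r.rules["parent"]   = {"world_matrix", "hierarchy"};
        return r;
    }
    ```

    With something like that, the four coarse object/object-data rules become just one trivial instance of a much richer rule space, which is hopefully enough to break most of the cycles we currently can't resolve.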

    Replies
    1. Indeed. I kind of alluded to this above ("we still have a lot of design issues to tackle to even get a basic system").

      It was somewhat surprising to see this approach being used, since it's really just a more nuanced version of what we've got now. I guess I've been somewhat hoping that there is a better way than "static dirty rules" (e.g. tag dirty entrypoints, then directly traverse+execute what you need), for greater flexibility.
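      Roughly what I'm picturing (a sketch under my own assumptions, not anything from the paper): tag the entrypoints whose inputs changed, flood-fill downstream to find what's affected, then evaluate just that subset in dependency order, with no per-node dirty tables anywhere.

      ```cpp
      #include <functional>
      #include <map>
      #include <queue>
      #include <set>
      #include <vector>

      struct Node {
          std::vector<Node *> outgoing;    // nodes that depend on this one
          std::function<void()> evaluate;  // recomputes this node's data
      };

      // Evaluate everything downstream of the tagged entrypoints, in dependency
      // order, without consulting any static dirty tables.
      void tag_and_evaluate(const std::vector<Node *> &tagged)
      {
          // Pass 1: flood-fill downstream of the tagged nodes -> the dirty set.
          std::set<Node *> dirty(tagged.begin(), tagged.end());
          std::queue<Node *> flood;
          for (Node *n : tagged) flood.push(n);
          while (!flood.empty()) {
              Node *n = flood.front(); flood.pop();
              for (Node *dep : n->outgoing)
                  if (dirty.insert(dep).second) flood.push(dep);
          }

          // Pass 2: evaluate the dirty subset in dependency order (Kahn's
          // algorithm over edges between dirty nodes; assumes an acyclic graph).
          std::map<Node *, int> pending;
          for (Node *n : dirty)
              for (Node *dep : n->outgoing)
                  if (dirty.count(dep)) pending[dep]++;

          std::queue<Node *> ready;
          for (Node *n : dirty)
              if (pending[n] == 0) ready.push(n);

          while (!ready.empty()) {
              Node *n = ready.front(); ready.pop();
              n->evaluate();
              for (Node *dep : n->outgoing)
                  if (dirty.count(dep) && --pending[dep] == 0) ready.push(dep);
          }
      }
      ```

      Whether that flexibility is worth re-walking the graph on every evaluation is exactly the kind of trade-off we'd need to measure.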

  5. Really happy you're working on the depsgraph, keep it up!

  6. Hi Algorith,

    Just came across your article. I couldn't find a direct contact for you so I'll put this here and hope you see it.

    First, thanks for your kind words on the paper. It was 3+ years of work to get this thing built, basically a new engine written from the ground up. We are finally getting it out into production now.

    To answer a couple of issues you raised in your article and responses:

    - Re: restricting everything to C++. The key point was that in our previous tool we used Python, C++ and our own in-house scripting language to author nodes in the graph. To derive full performance from the new engine we had to stop using the scripting languages. So we were not saying C++ is the only option, just that, from where we were before, it was the only one we could keep using. It would of course be possible to imagine an implementation using another language, as long as it delivered high performance and was scalable and thread-safe.

    - Re: a better way than static dirty rules. We do a lot of work to avoid dealing with dirty state. We cache the list of nodes that need recomputing based on a particular set of dirty inputs and required outputs, and if the next graph evaluation has the same conditions as any one of the previous ones, we just pull the required list of nodes to compute directly from the cache, so no traversal at all for dirty state. Most of the time animators are posing with a relatively small number of controls, so it is feasible to cache all these possible paths through the graph and just pull them out as needed, and we end up almost never walking the graph once the caches are warmed up. For a more generic tool with large numbers of possible controls, it might not be so feasible and you might want to look into other ways of speeding up traversal.
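    A heavily simplified sketch of that caching idea (illustrative only, not the actual LibEE code): key the schedule cache on the pair (dirty inputs, required outputs), and on a hit reuse the stored node list without touching the graph.

    ```cpp
    #include <map>
    #include <set>
    #include <vector>

    struct Node;  // whatever the engine's node type is

    struct EvalKey {
        std::set<const Node *> dirty_inputs;
        std::set<const Node *> required_outputs;
        bool operator<(const EvalKey &other) const
        {
            if (dirty_inputs != other.dirty_inputs)
                return dirty_inputs < other.dirty_inputs;
            return required_outputs < other.required_outputs;
        }
    };

    class ScheduleCache {
    public:
        // Returns the cached evaluation order, or computes and stores it on a miss.
        const std::vector<Node *> &get(const EvalKey &key)
        {
            auto it = cache_.find(key);
            if (it == cache_.end()) {
                // Cache miss: do the (expensive) graph traversal once.
                it = cache_.emplace(key, traverse_graph(key)).first;
            }
            return it->second;
        }

    private:
        std::vector<Node *> traverse_graph(const EvalKey & /*key*/)
        {
            // The slow path: walk the graph from the dirty inputs towards the
            // required outputs and record the nodes that need recomputing.
            // (Details omitted in this sketch.)
            return {};
        }

        std::map<EvalKey, std::vector<Node *>> cache_;
    };
    ```

    (In reality the key would be something cheaper to hash and compare than full sets of pointers, but it shows the shape of the idea.)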
