Lumion Support Center

Support for unlicensed users => Post here if you can't find your License Key => Topic started by: Member5393 on September 05, 2011, 07:28:55 pm

Title: when multi GPU ?
Post by: Member5393 on September 05, 2011, 07:28:55 pm
Dear Lumion staff,

Are you thinking about Lumion with multi-GPU support (like the GeForce GTX 590)? Because it is too slow.
Sorry, my English is not very good.
Title: Re: when multi GPU ?
Post by: Member1739 on September 05, 2011, 08:31:14 pm
It is too slow? LOL
What is slow?
What is faster?
Title: Re: when multi GPU ?
Post by: Member5393 on September 06, 2011, 12:18:45 am
It is too slow? LOL
What is slow?
What is faster?
Lumion works with only one GPU, and when the scene is too big it runs slowly.
The second GPU is not being used...
Title: Re: when multi GPU ?
Post by: Member1739 on September 06, 2011, 02:28:12 am
Lumion currently only works with one GPU on the card.

There is a lot of discussion about this on these forums if you are willing to hunt it down.

 :)
Title: Re: when multi GPU ?
Post by: Member927 on September 06, 2011, 08:41:55 pm
If I have two GPU cards set up in SLI mode, will Lumion be able to access just one card?
Title: Re: when multi GPU ?
Post by: Member1739 on September 07, 2011, 12:25:28 am
That is my understanding. No SLI configuration yet. Maybe with DX12.
Title: Re: when multi GPU ?
Post by: Remko on September 07, 2011, 02:16:54 pm
Lumion does support multi GPU but the performance gain is not that big. We'll see what happens with the new 2.0 render engine.
Title: Re: when multi GPU ?
Post by: Member1229 on January 17, 2012, 05:26:45 pm
Lumion does support multi GPU but the performance gain is not that big. We'll see what happens with the new 2.0 render engine.

So, what happened?
Title: Re: when multi GPU ?
Post by: Remko on January 18, 2012, 09:35:25 am
So, what happened?

We haven't looked at it yet. On the 25th we'll get a special multi-GPU computer so we can start doing some tests. Of course we looked at this before, but I could never find out why it does not work. It looks like we're doing all the things needed to make SLI work.
Title: Re: when multi GPU ?
Post by: Member2990 on January 18, 2012, 07:29:24 pm
Lumion does support multi GPU but the performance gain is not that big. We'll see what happens with the new 2.0 render engine.
Is this new render engine going to be part of a service-pack for Lumion 2? If so: :-9
Title: Re: when multi GPU ?
Post by: Member877 on January 19, 2012, 12:51:57 am
It is already inside Lumion 2.0, if I understood right.
Otherwise indoor lighting wouldn't be possible...
Title: Re: when multi GPU ?
Post by: Remko on January 19, 2012, 10:52:08 am
It is already inside Lumion 2.0, if I understood right.
Otherwise indoor lighting wouldn't be possible...


Indoor lighting has nothing to do with multi-GPU. I'm waiting for the new computer to arrive so we can take a look at it. We looked at it before, but apparently we need to do some additional tests, because it seems like multiple GPUs do not have any positive effect on performance.
Title: Re: when multi GPU ?
Post by: Member877 on January 19, 2012, 10:55:50 am
But doesn't Lumion 2.0 have deferred rendering, which was used for indoor lighting?
Title: Re: when multi GPU ?
Post by: Morten on January 19, 2012, 02:38:23 pm
Yes, the 2.0 deferred render engine that was mentioned back in September is the same one that Ferry and Remko implemented in Lumion 2.
Title: Re: when multi GPU ?
Post by: Member21144 on January 19, 2012, 05:30:46 pm
Things are moving fast. I just stumbled upon this: http://stadia3d.com/

Stadia enables architects to generate interactive renderings straight from Revit model data.
It works like this: the architect sends the Revit model and gets back an .exe file with baked textures within minutes or hours. I think they found a way to automate it with Unity or Quest.

It seems the cloud is the big thing to come on the CPU rendering side. Multi-GPU is monetizable too. People could start Lumion render farms, or how about a Lumion cloud? Geohow and many other users would be very happy and could do some otherwise impossible animations in no time.

My high-quality V-Ray renderings take almost an hour per image. Imagine getting 10 minutes of HD animation in the same timeframe, once the Lumion set-up is done, without going bankrupt. That would be a real breakthrough for the whole industry! In an artist's hands, 10 minutes of Lumion animation would not make less money for them than one V-Ray image, and possibly a lot more ;)

Thanks,
Ming
Title: Re: when multi GPU ?
Post by: Remko on January 20, 2012, 10:06:40 am
Things are moving fast indeed. We believe though that the rendering is not the only problem we need to 'solve'. A huge part of Lumion is providing the tools to make compelling movies.

The benefit of cloud processing is that you can share the processing power among multiple users, with the idea that each person only spends a small amount of time rendering. I wonder how this turns out. I also wonder how it compares to just renting cloud computing as a render farm for V-Ray or something like that. Imagine you have a movie consisting of 1800 frames. Why not have 1800 computers each crunch one image for one hour so you have a full V-Ray movie in 1 hour? If computing is cheap enough it's all possible.

For swift interaction you still need a local client though. Or maybe you need something like OnLive and do everything in the cloud on a fast computer and stream it to a low-end laptop or something like that. Unfortunately GPU rendering does not scale really well. This could be solved in the driver or API. If we, for example, could very precisely control each GPU, we could send each subsample to another GPU and build something that gives you a full AA image at the same speed as a simple 2-star rendered image. I don't think this is possible yet though. Hopefully soon everyone has Windows 7 so we can move to DX11 or even DX12. This would allow for some tricks to improve rendering performance. We could use hardware AA, for example. It would also be easier to render complex lighting models by using the additional capabilities of the new APIs.

This tool you mention uses the cloud to render lightmaps and then provides you with the lightmapped model so you can fly through it. We stepped away from this technology because we think it has many serious limitations. You can forget about trees, vegetation and moving characters, for example. You can also forget about complex or oddly shaped geometry. The lighting model is also very limited. It's still the same tech as used in Quake in 1996!

The biggest drawback in my opinion is that your model is not dynamic anymore. For every little change or modification you'll have to send your model to the cloud again and wait for the solution.

The most promising thing I have seen is cone tracing, which is basically an optimized ray-tracing algorithm that is very fast. It does a decent job at reflections and a very good job on indirect lighting. I hope they make good progress with this technology because it already looks very impressive.
Title: Re: when multi GPU ?
Post by: Member1012 on January 20, 2012, 06:58:48 pm
Thanks for taking the time to do this review. We all need reassurance that we have backed the right horse and that we continue to do the right thing for our firms. While we want you to keep your heads down working, there is so much competition and new development that we all need to be vigilant. So roll on multi-GPU, and keep it in-house I say.
Title: Re: when multi GPU ?
Post by: Member1739 on January 21, 2012, 01:05:18 am
Is there a 64GB graphics card?
All HD Textures.
Better start collecting textures now for that future card.
Title: Re: when multi GPU ?
Post by: Member21144 on January 21, 2012, 12:35:26 pm
Remko,

Thanks for your info and insights. I just wonder: since we can have real-time navigation, why do we still need to wait quite long, compared to that real-time navigation, when exporting to files?

IMO there are many CUDA computing farm projects, though I may be confused about which are applicable; this powerful hardware may require different designs. There is still room for speed improvement when exporting a full animation, since it is not yet near real-time, if we can take advantage of it. I just wish for Lumion to be really BIG; maybe I'm too excited ;).

Also, what do you think about this? It looks impressive and exportable. Check out their videos:
http://www.lumatix.com/atrium-designer/

With almost real-time GI, it is a one-of-a-kind previs tool for 3ds Max, and I don't know why they dropped their price to the floor a few days ago:
Lumatix Atrium Designer.mp4 (http://www.youtube.com/watch?v=7k68F5Ip_3o#)
I suspect they may know something about the 3ds Max Excalibur (XBR) viewport improvements in coming versions. That's just my guess; there are so many good reasons for them to do it.

To me, for non-3ds Max users and for those who don't like tedious work, Lumion's quality is better and it has potential in more fields. But their integration with V-Ray, mental ray and standard lighting in 3ds Max is also really nice. I wish Lumion had some renderer, e.g. Octane, linked to it.

Thanks,
Ming
Title: Re: when multi GPU ?
Post by: Member1012 on January 21, 2012, 10:34:09 pm
Yes, 800 euros down to 49 euros. A great bargain, or is it coming in the next update of Max? I wonder.
Title: Re: when multi GPU ?
Post by: Remko on January 30, 2012, 03:29:03 pm
The real-time export slowdown is caused by MP4 compression and disk access, and in FULL quality it is caused by rendering everything 16 times to get more accuracy. Maybe at some point we will be able to implement hardware AA and it will be a lot faster. Right now we get the best quality with full AA. You can already render at editor quality by simply rendering in 1-star mode. In this mode the rendering itself is just as fast as real-time and you just get the overhead of saving the MP4 file.
Title: Re: when multi GPU ?
Post by: Member1534 on January 30, 2012, 06:05:35 pm

BT introduced AA a few years ago, and it's extremely fast + highest quality too.

But does anyone know what "shader AA" stands for?

(http://i39.tinypic.com/2mxrh5k.jpg)

You obviously don't play computer games })

Antialiasing

In computer graphics, antialiasing is a software technique for diminishing jaggies - stairstep-like lines that should be smooth. Jaggies occur because the output device, the monitor or printer, doesn't have a high enough resolution to represent a smooth line. Antialiasing reduces the prominence of jaggies by surrounding the stairsteps with intermediate shades of gray (for gray-scaling devices) or color (for color devices). Although this reduces the jagged appearance of the lines, it also makes them fuzzier.
Another method for reducing jaggies is called smoothing, in which the printer changes the size and horizontal alignment of dots to make curves smoother.
Antialiasing is sometimes called oversampling.
Title: Re: when multi GPU ?
Post by: peterm on January 30, 2012, 10:06:15 pm
But does anyone know what "shader AA" stands for?

Software AA: AA in a CPU-based algorithm, most accurate (currently) due to the maths and precision available on a CPU (x86). For AA to run on the CPU there's a performance issue if it's also trying to run everything else and do AA at the same time, although it's generally threaded off to its own process.

Hardware AA: lots of GPUs now directly provide AA at varying levels; it can be switched on/off directly in the GPU control panel, or software can use the switch. It's a dedicated process in the GPU hardware. I think it has slightly lower precision, but it offloads the cost of AA from the CPU, so it is a performance tool.

Shader AA: the graphics middleware (DirectX or OpenGL) allows shader code to be included in 3D applications, including real-time and 3D game apps. The shader code provides generalised instructions to the GPU to do tasks such as colouring an object, bump mapping it, etc. So the AA is coded in a shader and runs fast due to the performance of the GPU, but it may not be as accurate as the CPU; it depends on the implementation and the number of passes used to get the final result. DirectX 9 has certain limitations that DirectX 11 provides alternatives to. There are a number of different AA solutions such as FXAA, MLAA, etc.; each has its own pros and cons of performance versus quality, and some are slightly more suitable for certain types of rendering requirements.

Remko and the team there are guru shader coders, and that's how Lumion works (in large part, especially all the effects and rendering).
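
To make that concrete, here is a minimal CPU-side sketch in C++ of what a post-process shader AA pass essentially does: measure local luminance contrast and blend across detected edges. The helper names are made up for illustration, and a real shader version (FXAA, MLAA, etc.) runs per pixel on the GPU and does considerably more work, such as searching along the edge direction and doing sub-pixel blending.

// Minimal sketch of the idea behind a post-process "shader AA" pass:
// find pixels with high local luminance contrast (jagged edges) and
// blend them towards the average of their neighbours.
#include <algorithm>
#include <vector>

struct Rgb { float r, g, b; };

static float luma(const Rgb& c) {                // perceptual luminance
    return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
}

std::vector<Rgb> shaderStyleAA(const std::vector<Rgb>& img, int w, int h,
                               float threshold = 0.1f) {
    std::vector<Rgb> out = img;
    auto at = [&](int x, int y) -> const Rgb& {  // clamped "texture fetch"
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return img[y * w + x];
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const Rgb c = at(x, y);
            const Rgb n[4] = { at(x - 1, y), at(x + 1, y),
                               at(x, y - 1), at(x, y + 1) };
            float lo = luma(c), hi = luma(c);
            Rgb avg = { 0.0f, 0.0f, 0.0f };
            for (const Rgb& p : n) {
                lo = std::min(lo, luma(p));
                hi = std::max(hi, luma(p));
                avg.r += p.r; avg.g += p.g; avg.b += p.b;
            }
            if (hi - lo > threshold) {           // edge found: blend 50/50 with neighbour average
                out[y * w + x] = { 0.5f * (c.r + avg.r / 4.0f),
                                   0.5f * (c.g + avg.g / 4.0f),
                                   0.5f * (c.b + avg.b / 4.0f) };
            }
        }
    }
    return out;
}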
Title: Re: when multi GPU ?
Post by: Remko on January 31, 2012, 01:53:14 pm
The key to good images is supersampling. In essence we calculate the average of many strategically chosen samples to create the final result. Because the final result is an average of several samples, the smoothness and accuracy are increased. Hardware AA uses the GPU for antialiasing, which essentially means we only have to render everything once and the GPU renders the subsamples for us. Unfortunately DX9 does not have really good support for this, so we have to render everything 16 times. A hardware AA solution might render everything 2 times slower, for example, while the AA we use renders everything 16 times slower. The jump to DX11 or higher is only a matter of time. We are closely checking how many people still use Windows XP, and if that number dips below 10% or so we will start considering DX11.
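
As a small illustration of the supersampling principle (just a sketch of the general idea, not Lumion's actual code), the following C++ snippet averages 16 subsamples taken on a regular 4x4 grid inside each pixel. shadeScene() is a hypothetical stand-in for rendering the scene at a sub-pixel position; here it just returns a gradient. Rendering this way is what makes full AA cost roughly 16 times a plain single-sample pass.

// Sketch of 16x supersampling: each output pixel is the average of 16
// subsamples rendered inside that pixel.
#include <vector>

struct Color { float r = 0.0f, g = 0.0f, b = 0.0f; };

// Hypothetical stand-in for rendering the scene at a sub-pixel position;
// here it just returns a smooth gradient.
Color shadeScene(float x, float y) {
    return { x * 0.001f, y * 0.001f, 0.5f };
}

std::vector<Color> renderSupersampled(int width, int height) {
    std::vector<Color> image(width * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Color sum;
            // 4x4 grid of subsample positions inside the pixel; production
            // renderers choose the positions more strategically.
            for (int sy = 0; sy < 4; ++sy) {
                for (int sx = 0; sx < 4; ++sx) {
                    Color c = shadeScene(x + (sx + 0.5f) / 4.0f,
                                         y + (sy + 0.5f) / 4.0f);
                    sum.r += c.r; sum.g += c.g; sum.b += c.b;
                }
            }
            image[y * width + x] = { sum.r / 16.0f, sum.g / 16.0f, sum.b / 16.0f };
        }
    }
    return image;
}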
Title: Re: when multi GPU ?
Post by: Morten on January 31, 2012, 02:59:21 pm
We are closely checking how many people still use Windows XP, and if that number dips below 10% or so we will start considering DX11.

Windows usage statistics for January 2012 on the Lumion website:
Windows 7 = 71.54%
Windows XP = 22.07%
Windows Vista = 4.89%
Server 2003 = 1.34%
Windows NT = 0.15%

Still some way to go...
Title: Re: when multi GPU ?
Post by: Remko on January 31, 2012, 03:37:20 pm
Well, this is not at all representative, because many people on XP also have access to Win 7, and there are many other reasons why the percentages are skewed. My guess is that quite a lot of users have ditched XP already. We'll probably wait for DirectX 12 and jump right on that, so people directly get access to the ultimate tech. It depends on when DX12 will be released. My earliest guess would be that Lumion 4 can be DX12. I don't think the SDK will be available, or available long enough, by the time we release Lumion 3 at the end of this year. So... by the end of 2013 I think probably all users will be on Win 7, Win 8 or Vista.
Title: Re: when multi GPU ?
Post by: Member21342 on January 31, 2012, 03:44:10 pm
Windows usage statistics for January 2012 on the Lumion website:
Windows 7 = 71.54%
Windows XP = 22.07%
Windows Vista = 4.89%
Server 2003 = 1.34%
Windows NT = 0.15%

Still some way to go...

It's close, very close. But this is for your website? Maybe it's even better for your software.
As I see it, to use Lumion and create models "for" Lumion you need quite a new computer, and I think that almost all new computers run Win 7.
Title: Re: when multi GPU ?
Post by: Remko on February 06, 2012, 02:56:51 pm
It's close, very close. But this is for your website? Maybe it's even better for your software.
As I see it, to use Lumion and create models "for" Lumion you need quite a new computer, and I think that almost all new computers run Win 7.

I did some benchmarks of multi-GPU performance and came to the conclusion that, the way multi-GPU currently works, it will probably not work for Lumion.

First of all, I noticed considerable stability issues with all 3D applications on a multi-GPU setup. This is not good, because you want your system to be stable.

Second, there are many ways to use multiple GPUs. In short there are two methods. The first method renders one frame on one card and the next frame on another card. This gets you good performance in most games, although it introduces a bit of extra lag. This method will not work in Lumion because Lumion renders multiple frames in one cycle. So, although it might improve results in build mode, you will not get better results for rendering.

The other method is to split the screen and have one GPU render the top half while the other renders the bottom half. With this method we get no performance gain because there is just too much data that has to go from one GPU to the other.

The only way I can imagine multi-GPU might work with Lumion is if you enable stereo and somehow have the driver render each eye on a different card. I did not try this.
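
To make the two schemes concrete, here is a rough conceptual sketch in C++. renderFrameOn() and renderRowsOn() are purely hypothetical placeholders for submitting work to a particular GPU; they are not a real driver API and not anything Lumion uses.

// Conceptual sketch of the two multi-GPU work-distribution schemes
// described above; the "GPUs" here are just printed labels.
#include <cstdio>

void renderFrameOn(int gpu, int frame) {
    std::printf("GPU %d renders frame %d\n", gpu, frame);
}

void renderRowsOn(int gpu, int frame, int rowBegin, int rowEnd) {
    std::printf("GPU %d renders rows %d-%d of frame %d\n",
                gpu, rowBegin, rowEnd, frame);
}

int main() {
    const int gpuCount = 2;
    const int height   = 1080;

    // 1) Alternate-frame rendering (AFR): whole frames alternate between
    //    GPUs. It adds a frame of latency, and it does not help a renderer
    //    that combines several internal passes within one output frame.
    for (int frame = 0; frame < 4; ++frame)
        renderFrameOn(frame % gpuCount, frame);

    // 2) Split-frame rendering (SFR): each GPU renders part of the same
    //    frame. The parts must then be copied and composited, which is
    //    where the inter-GPU data transfer cost comes from.
    for (int frame = 0; frame < 4; ++frame)
        for (int gpu = 0; gpu < gpuCount; ++gpu)
            renderRowsOn(gpu, frame,
                         gpu * height / gpuCount,
                         (gpu + 1) * height / gpuCount);
    return 0;
}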