Okay, so I think I solved the problem. After having the graphics card driver reinstalled, everything seemed to function properly (gzclient and gzserver are both listed as processes on the GPU when running nvidia-smi). That said, it looks like only 8-9% of the GPU is being utilized (and only about 550 MiB of 7982 MiB of memory). I'm wondering if people have achieved better utilization of their GPU, or if this looks about right?
Edit: wanted to add that the whole reason for this question is that the RTF (real-time factor) of the simulation started out at 0.9 and would deteriorate to 0.1. I know that I'm adding models continuously, so that definitely contributes to the problem, but I'm deleting them at the same time, so the RTF shouldn't deteriorate that much over time. Is there something else I'm missing or could be improving on? Appreciate any thoughts.
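For concreteness, here is a minimal sketch of the kind of spawn/delete cycle I mean. It assumes a ROS + gazebo_ros setup with the standard /gazebo/spawn_sdf_model and /gazebo/delete_model services; the box SDF, node name, and 1 Hz cycle rate are just placeholders, not my actual code:

```python
#!/usr/bin/env python
# Minimal sketch of a continuous spawn/delete cycle via the standard
# gazebo_ros services. The box SDF and the 1 Hz cycle rate are placeholders.
import rospy
from gazebo_msgs.srv import SpawnModel, DeleteModel
from geometry_msgs.msg import Pose

BOX_SDF = """
<sdf version="1.6">
  <model name="box">
    <link name="link">
      <collision name="collision">
        <geometry><box><size>0.1 0.1 0.1</size></box></geometry>
      </collision>
      <visual name="visual">
        <geometry><box><size>0.1 0.1 0.1</size></box></geometry>
      </visual>
    </link>
  </model>
</sdf>
"""

rospy.init_node('spawn_delete_cycle')  # hypothetical node name
rospy.wait_for_service('/gazebo/spawn_sdf_model')
rospy.wait_for_service('/gazebo/delete_model')
spawn = rospy.ServiceProxy('/gazebo/spawn_sdf_model', SpawnModel)
delete = rospy.ServiceProxy('/gazebo/delete_model', DeleteModel)

rate = rospy.Rate(1)  # one spawn/delete cycle per second
i = 0
while not rospy.is_shutdown():
    name = 'box_%d' % i
    # Spawn a model, let it exist for one cycle, then delete it,
    # so the number of live models stays roughly constant.
    spawn(model_name=name, model_xml=BOX_SDF, robot_namespace='',
          initial_pose=Pose(), reference_frame='world')
    rate.sleep()
    delete(model_name=name)
    i += 1
```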
Asked by luna on 2020-08-17 16:29:25 UTC
Another note: I was previously using NoMachine so that I could see the graphics generated by the Ubuntu server I was running the simulation on. For some reason, running nvidia-smi while NoMachine was running showed lower GPU utilization (8-9%). Without NoMachine, GPU utilization was about 25-35%, which seems much better, although I didn't notice a speed-up in the RTF of the simulation. My guess is that NoMachine also utilizes the GPU/CPU of whatever machine it runs on.
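To watch the deterioration rather than spot-checking it, a small script along these lines can log the RTF over time. This is just a rough sketch: it assumes gazebo_ros is publishing sim time on /clock, and the 5-second window and node name are arbitrary choices of mine:

```python
#!/usr/bin/env python
# Sketch of an RTF logger: compares sim-clock progress (from /clock)
# against wall-clock progress over a fixed window. Assumes a running
# gazebo_ros instance publishing /clock; the 5 s window is arbitrary.
import time
import rospy
from rosgraph_msgs.msg import Clock

latest_sim = [0.0]

def clock_cb(msg):
    latest_sim[0] = msg.clock.to_sec()

rospy.init_node('rtf_monitor')  # hypothetical node name
rospy.Subscriber('/clock', Clock, clock_cb)

prev_sim, prev_wall = latest_sim[0], time.time()
while not rospy.is_shutdown():
    time.sleep(5.0)  # measure over a 5 s wall-clock window
    now_sim, now_wall = latest_sim[0], time.time()
    print('RTF over last window: %.2f'
          % ((now_sim - prev_sim) / (now_wall - prev_wall)))
    prev_sim, prev_wall = now_sim, now_wall
```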
Asked by luna on 2020-08-19 14:11:12 UTC
I think that Gazebo uses the GPU only for rendering purposes, for example if you have a camera or laser in your setup, or simply to display the gzclient. The physics engine itself uses only the CPU.
Asked by Clément Rolinat on 2020-08-26 03:17:34 UTC