posted by Pierre Sarrailh at Apr 22, 2016 5:38 PM
I see on another thread that you get the following stack trace on a Java out-of-memory error:
spis.Util.Matrix.SparseMatrix.getValues(SparseMatrix.java:374)
Most of the time this comes from too many surface elements on the SC. If you have around 10 000 elements or more (dielectric surfaces only) on the SC surface mesh, it can start to be a problem.
To check whether this is the cause, you can try changing all the SC material properties to a metal. If that solves your problem, and you still want dielectric surfaces, you just have to reduce the number of surface elements on the dielectric surfaces.
In fact, because of the different numerical models involved, the size of the circuit solver scales as n², with n the number of dielectric mesh elements on the SC.
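
As a rough illustration of that scaling (my own back-of-the-envelope sketch, which simply assumes a dense n × n matrix of doubles; the real solver, spis.Util.Matrix.SparseMatrix, is sparse and its footprint also depends on the rest of the simulation data):

    // Order-of-magnitude heap estimate for an n x n circuit matrix of doubles.
    public class CircuitMemoryEstimate {
        public static void main(String[] args) {
            int[] dielectricElementCounts = {1_000, 5_000, 10_000, 20_000};
            for (int n : dielectricElementCounts) {
                double bytes = (double) n * n * Double.BYTES; // 8 bytes per entry
                System.out.printf("n = %,d dielectric elements -> ~%.2f GB (dense)%n",
                        n, bytes / (1024.0 * 1024.0 * 1024.0));
            }
        }
    }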

To answer your previous question, the memory cost of the GUI is quite small for this kind of mesh.
I have already run a LEO simulation with 500 000 tetrahedra in less than 50 GB, but the SC was fully conductive.

Regards, Pierre
posted by EW at Apr 6, 2016 5:01 PM
I have secondary electrons turned off at the moment. It is the "converting data for numerical kernel" step, before I start the simulation, that takes so much memory. Is it possible to run this step without the GUI?
posted by Pierre Sarrailh at Apr 6, 2016 12:57 PM
The 'macroparticles per cell' parameter controls only the particles coming from the environment. Sometimes the problem comes from the number of secondary-electron macroparticles that are created.
You can check that by watching the macroparticle number in the "console".

The parameter that controls the number of macroparticles for secondary electrons is "electronSecondaryDensification". It corresponds to the number of macroparticles emitted over the number of macroparticles collected. It may be decreased below 1; in that case, a statistical densification is used.
Depending on the situation, I have used "electronSecondaryDensification" values from 1000.0 (when I want low noise on the secondaries) to 0.001 (in order to have a fast calculation).
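
To illustrate what a densification factor below 1 can mean in practice, here is a generic sketch of statistical densification as done in many PIC codes (my own illustration, not the actual SPIS implementation):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // With densification f >= 1, each collected macroparticle is split into
    // several lighter emitted ones (more particles, less noise). With f < 1,
    // a secondary is emitted only with probability f, and its statistical
    // weight is scaled by 1/f so the mean emitted charge is conserved
    // (fewer particles, faster but noisier).
    public class SecondaryDensificationSketch {
        static List<Double> emitSecondaryWeights(List<Double> collectedWeights,
                                                 double densification,
                                                 double yield,
                                                 Random rng) {
            List<Double> emitted = new ArrayList<>();
            for (double w : collectedWeights) {
                if (densification >= 1.0) {
                    int n = (int) Math.round(densification);
                    for (int i = 0; i < n; i++) {
                        emitted.add(w * yield / n);
                    }
                } else if (rng.nextDouble() < densification) {
                    emitted.add(w * yield / densification);
                }
            }
            return emitted;
        }
    }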

Regards, Pierre
posted by EW at Apr 1, 2016 3:33 PM
I'm now using a Linux computer with 24 GB. I still seem to be limited to under 100 000 tetrahedra. When the simulation is being set up, the memory utilization goes up to around 22 GB, then drops back to around 14 GB once the setup is complete, and I can then run the simulation. But because of the amount of memory the setup takes, I can't go higher than 100 000 tetrahedra. Is this normal? It is set for 5 macroparticles per cell, full PIC.
posted by EW at Mar 3, 2016 3:44 PM
I am having this Java memory error. I am running on Windows and have around 389 000 tetrahedra in my mesh. Do you think that is too much mesh for 8 GB of RAM, or should I continue troubleshooting the time steps?
posted by Mehmet Balta at Jun 10, 2014 1:58 PM
You are right. Thank you. The problem is definitely about the time steps. I will try to deal with it.

Best Regards, Mehmet

posted by Benoit Thiebault at Jun 5, 2014 9:51 PM
The time steps to set depend on the physics you want to model (type of populations, temperature I guess, etc.); there are no values that work for every sort of plasma.

I can't help you more with the kernel settings; that's not my area of expertise :)

posted by Mehmet Balta at Jun 5, 2014 4:04 PM
Hi,

Thank you for your response. I assume that this problem is caused by the time steps. Could you please have a glance and tell me if anything looks incorrectly set?

duration = 0.004
fixedDt = 0
plasmadt = 1.6e-6
plasmaduration = 1.6e-6
plasmaSpeedUp = 1
simulationdt = 1.6e-5
simulationdtinit = 1.6e-6
simulationdtMaxFactor = 5
source duration = 1e-6
secondary duration = 1.6e-6
secondarydt = 1.6e-6
iondt = 1.6e-6
sceonarioparameter5 = 1e-6
sceonarioparameter4 = 1.6e-6

Regards, Mehmet

posted by Benoit Thiebault at Jun 5, 2014 1:18 PM
Hi,

30 000 or 12 000 tetrahedra don't normally require such an amount of RAM. We run simulations with 300 000 tetrahedra on 8 GB.

There are several possibilities:

  • you did not correctly increase the Xmx setting; see the user manual for this. You can also run VisualVM (http://visualvm.java.net/) to watch the memory footprint grow, and there is a small check after this list to verify the heap size.
  • there is a problem with your simulation settings: we have occasionally seen the software inject too many particles when the time steps are not correctly set. I don't remember whether the live monitoring has an instrument to visualize the number of macroparticles, but if it grows indefinitely, that could be the source of the problem.
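
If you want to verify that the larger heap was actually taken into account, a quick check (my own suggestion, not from the SPIS manual) is to print the JVM's maximum heap size, for example from a throwaway class run with the same Java options:

    // Prints the maximum heap the JVM may use; it should match the -Xmx you set.
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %.1f GB%n",
                    maxBytes / (1024.0 * 1024.0 * 1024.0));
        }
    }
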
Another side comment: if you don't want to be spammed, do not give your email address on a public forum. At least replace the @ sign with (at) to prevent bots from harvesting it.
posted by Mehmet Balta at Jun 5, 2014 1:09 PM
Hi again luo xiaoming,

You can reach me by email and we can share information, since we are doing similar work now.

mehmet.yigit.balta@gmail.com

Regards, Mehmet

posted by Mehmet Balta at Jun 4, 2014 1:05 PM
Hey, for now the only solution I see is to reduce the number of tetrahedral cells. There must be some other way, though…
posted by luo xiaoming at Jun 4, 2014 10:03 AM
I had the same problem when I tried to get a reliable current density. I used a cylindrical external boundary (r = 0.5 m, h = 2 m) and the SPT100 thruster. I also changed simulationDt to 1e-5, and the number of tetrahedral cells is 31285. But the calculation stopped for an unknown reason. Please let me know if you find some information. Thank you very much!
posted by Mehmet Balta at Jun 3, 2014 10:28 AM
Hello everybody,

I am trying to verify T5 and SPT100 thruster plumes. I have no problem with large time steps. I use a spherical external boundary (r = 1 m) and only the thruster. When I use small time steps (such as simulationdt = 1e-5 and plasmadt = 1e-6) and more than 12 000 tetrahedral cells, Java gives an out-of-memory error after some percentage of the run.

I have Debian Linux with 32 GB of RAM. I allocated 25 GB to the Java heap (Xmx 25000M). For this kind of computer, this error is quite unexpected. Does somebody have an idea why this memory error might occur?

Thank you,

Regards, Mehmet