0 votes
by (640 points)
In the slimfem implementation, what is the difference between the algebraic node numbering (stored, for example, in the vector returned by mesh.pl.algebraic_nodes()) and the PETSc numbering (stored in the vector returned by mesh.get_numbering(SF::NBR_PETSC))? I can only guess that the PETSc one also takes into account some sort of overlap between the nodes owned by the processors, but the outputs of the two numberings look very different when printed. Also, I was wondering which one is used when matrices are built.

Thank you in advance

1 Answer

+1 vote
by (8.1k points)
Best answer
mesh.pl.algebraic_nodes() returns a vector of local indices. It contains the indices of the d.o.f. that belong uniquely to the process. As such, this is the non-overlapping part of the overlapping domains. The name "algebraic nodes" is historical, as the linear algebra system definition in PETSc uses non-overlapping d.o.f.

The vector returned by mesh.get_numbering(SF::NBR_PETSC) contains the global indices of all d.o.f. in the overlapping domains. As such, each rank can look up the global index of its local d.o.f., for example when inserting entries into a matrix.
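To make the relation concrete, here is a rough, untested C++ sketch (the mesh type name sf_mesh is just a placeholder; the member calls are the ones discussed above): algebraic_nodes() selects the locally owned d.o.f., and the NBR_PETSC numbering translates their local indices into global ones, e.g. when assembling matrix rows.

    #include <cstdio>

    // sf_mesh is a placeholder for the slimfem mesh class discussed above.
    void print_owned_rows(const sf_mesh & mesh)
    {
      // local indices of the d.o.f. owned uniquely by this rank (non-overlapping)
      const auto & alg_nod   = mesh.pl.algebraic_nodes();
      // global PETSc index for every local d.o.f. of the overlapping domain
      const auto & petsc_nbr = mesh.get_numbering(SF::NBR_PETSC);

      for (size_t i = 0; i < alg_nod.size(); ++i) {
        long lidx = alg_nod[i];        // local index of an owned d.o.f.
        long gidx = petsc_nbr[lidx];   // its global row index in the parallel system
        printf("rank-local %ld -> global PETSc %ld\n", lidx, gidx);
      }
    }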
by (640 points)
Thank you, I have another related question: why does mesh.pl.get_numbering(SF::NBR_REF) contain different nodes from the ones stored in mesh.pl.algebraic_nodes()?
As far as I understand, each processor seems to hold in mesh.xyz() just the coordinates of the nodes in mesh.pl.get_numbering(SF::NBR_REF), while algebraic_nodes() stores the indices of the local elements. If I wanted to extract coordinates for algebraic_nodes(), would I have to perform some cumbersome communication, or is there a simpler way? I would find it cleaner to handle coherent DD nodes, but I am sure I am missing something about the parallel architecture.
by (8.1k points)
mesh.pl.get_numbering(SF::NBR_REF) again contains global indices. You are comparing global with local indices.

Let's say we are looking at the local vertex index 5. Then mesh.xyz[5*3+0] to mesh.xyz[5*3+2] are the vertex coordinates (of course we usually don't access them this way, but via an element_view), mesh.pl.get_numbering(SF::NBR_REF)[5] is the (global) index of this vertex in the mesh on the hard disk, and mesh.pl.get_numbering(SF::NBR_PETSC)[5] is the (global) index in the parallel, distributed linear equation system.
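In rough C++ terms (a sketch only; the mesh type name sf_mesh is a placeholder, the accesses are exactly the ones above):

    #include <cstdio>

    // For one local vertex index lidx, the three lookups described above.
    void inspect_local_vertex(const sf_mesh & mesh, size_t lidx)
    {
      // coordinates are stored interleaved per local vertex: x, y, z
      double x = mesh.xyz[3*lidx + 0];
      double y = mesh.xyz[3*lidx + 1];
      double z = mesh.xyz[3*lidx + 2];

      // (global) index of this vertex in the mesh on the hard disk
      long ref_idx   = mesh.pl.get_numbering(SF::NBR_REF)[lidx];
      // (global) index of this vertex in the distributed linear equation system
      long petsc_idx = mesh.pl.get_numbering(SF::NBR_PETSC)[lidx];

      printf("local %zu: coords (%g %g %g), ref %ld, petsc %ld\n",
             lidx, x, y, z, ref_idx, petsc_idx);
    }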

The whole point of storing the numberings is to not have to communicate.

Coming back to algebraic_nodes(): let's denote Np := mesh.l_numpts on rank p. Then the implicit local index range is [0, Np-1] (this local range is not stored since it is trivial). algebraic_nodes() is some subset of this interval.

For example, if Np = 5, then the local range is [0 1 2 3 4] and one possible algebraic_nodes() set would be [1 3]. Hope that makes sense now.

The reason we have algebraic_nodes() is that in many algorithms one needs to work on a non-overlapping parallel distribution of the nodes, while in other algorithms one needs a non-overlapping distribution of the *elements* (thus an overlapping node distribution). By storing algebraic_nodes() we can access both overlapping and non-overlapping local nodal index ranges.
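Regarding the coordinate question above: no communication is needed. Since algebraic_nodes() holds local indices, you can index mesh.xyz with them directly. A minimal sketch (again with a placeholder mesh type and assumed container types):

    #include <vector>

    // Collect the coordinates of the nodes uniquely owned by this rank.
    // algebraic_nodes() holds local indices, so they index directly into mesh.xyz.
    std::vector<double> owned_coords(const sf_mesh & mesh)
    {
      const auto & alg_nod = mesh.pl.algebraic_nodes();  // local indices of owned nodes
      std::vector<double> xyz;
      xyz.reserve(3 * alg_nod.size());

      for (auto lidx : alg_nod) {
        xyz.push_back(mesh.xyz[3*lidx + 0]);
        xyz.push_back(mesh.xyz[3*lidx + 1]);
        xyz.push_back(mesh.xyz[3*lidx + 2]);
      }
      return xyz;
    }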
by (640 points)
Yes, thank you very much. Paying more attention to the output of some tests I did and comparing it to your answers, it is clear what you mean.