
Why is ParallelDo slower than Do?





I am having trouble writing parallel code in Mathematica. Why is

candidates = {};
SetSharedVariable[candidates];

Do[
 ParallelDo[
  eq = RandomReal[] + RandomReal[];
  AppendTo[candidates, eq],
  {j, 1, 1000}],
 {i, 1, 10}]

slower than the non-parallel version

candidates = {};
Do[
 Do[
  eq = RandomReal[] + RandomReal[];
  AppendTo[candidates, eq],
  {j, 1, 1000}],
 {i, 1, 10}]

?










parallelization

asked 15 hours ago by Matthias Heller, edited 14 hours ago by corey979

  • I reverted your post to before the edit because it looks like a different question (like Henrik said in his comment). Note, however, that if you ask it in precisely that form it will likely be closed due to not enough info: you need to provide a minimal working example, not throw some undefined functions into a piece of code that no one will be able to run and test.
    – corey979, 14 hours ago

  • See here: mathematica.stackexchange.com/a/48296/12. I suggest you don't use SetSharedVariable until you get quite fluent in using the parallel tools. It effectively "unparallelizes" your code.
    – Szabolcs, 13 hours ago
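
(As a quick way to see the effect described in the last comment, here is a minimal timing sketch; it simply launches the parallel kernels first and wraps both variants from the question in AbsoluteTiming.)

LaunchKernels[];

candidates = {};
SetSharedVariable[candidates];
AbsoluteTiming[
 ParallelDo[AppendTo[candidates, RandomReal[] + RandomReal[]], {j, 1, 1000}]]

candidates = {};
AbsoluteTiming[
 Do[AppendTo[candidates, RandomReal[] + RandomReal[]], {j, 1, 1000}]]

The shared-variable ParallelDo comes out noticeably slower, which is the behaviour the question reports.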


















1 Answer



















Because managing write access to shared memory is expensive: subprocesses have to wait until they are granted write access (because another process is using that resource).

Moreover, it is in general more efficient to apply Parallel only to the outermost loop construct.

By the way: Append and AppendTo are the worst ways to build a list, because they copy the full list each time another element is appended. Instead of complexity $O(n)$ for a list of $n$ elements, you get an implementation of complexity $O(n^2)$. Better to use Table or, if you don't know in advance how long the list will get, Sow and Reap. Internal`Bag is a further option, and it is even compilable.

answered 15 hours ago by Henrik Schumacher, edited 14 hours ago
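
(Combining the second and third points above for the code in the question, a rough sketch, not the answerer's own code, is to let ParallelTable build and return the list, which removes both the shared variable and the repeated AppendTo:)

candidates = Flatten @ ParallelTable[
   RandomReal[] + RandomReal[],
   {i, 1, 10}, {j, 1, 1000}];

Each subkernel assembles its own part of the table locally and only the finished pieces are sent back to the main kernel, so no locking on a shared variable is needed.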



















  • Thanks, that actually helped a lot. I just don't understand how to use Sow and Reap to avoid AppendTo. To be more specific: instead of ParallelDo I now use ParallelTable: eq = ParallelTable[FNumeric[SetPrecision[N[monlistnumeric[[i]] + monlistnumeric[[j]], 20], 10]], {j, jj}]; FNumeric is a function that returns either 0 or a value I want to store. I then do eq = DeleteCases[eq, 0]; candidates = Join[candidates, eq]; Is there a more efficient way to do this?
    – Matthias Heller, 15 hours ago

  • @MatthiasHeller, you're welcome. How is this new code related to your post? You should consider a new post with your real problem and all relevant data. I may have a look. In general, depending on the details, there are various ways to perform the computation efficiently; those ways might not use Parallel at all, but rather Compiled code.
    – Henrik Schumacher, 14 hours ago
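
(Regarding the Sow/Reap part of the comment above, a minimal serial sketch of the pattern, using a hypothetical stand-in f since the real FNumeric is not shown, could look like this. Note that Sow inside parallel subkernels is generally not collected by a Reap running on the main kernel, so in parallel code one would typically just return the values from ParallelTable and filter them afterwards, much as the comment already does.)

f[x_] := If[x > 0.5, x, 0];  (* stand-in: returns either 0 or a value worth keeping *)

candidates = Flatten @ Last @ Reap[
    Do[
     With[{val = f[RandomReal[]]},
      If[val =!= 0, Sow[val]]],
     {j, 1, 1000}]
    ];

Because Sow only emits the values that pass the test, no intermediate list is copied and the DeleteCases step becomes unnecessary.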











