Efficient Algorithms for Destroyed Document Reconstruction

I am not certain this is the proper site for this question, but I am mainly looking for resources on this topic (perhaps code). I was watching TV, and one of the characters had a lawyer who destroyed his documents using a paper shredder; a lab tech said that the shredder was special.

I am not familiar with this area of computer science/mathematics, but I am looking for information on efficient algorithms for reconstructing destroyed documents. I can fairly easily imagine a naive brute-force approach: go through all the pieces and look for edges that match. But that does not sound feasible, because the number of combinations explodes.

Note: by "destroyed documents" I mean taking a (printed) document, shredding it into small pieces, and then reassembling it by determining which pieces fit together.
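
For concreteness, here is a minimal sketch of that naive greedy edge-matching idea, assuming each strip has already been scanned into a grayscale NumPy array of the same height; all names here are illustrative, not from any real reconstruction library.

import numpy as np

def edge_cost(a, b):
    # Mean squared difference between a's rightmost pixel column
    # and b's leftmost pixel column.
    return float(np.mean((a[:, -1].astype(float) - b[:, 0].astype(float)) ** 2))

def greedy_reassemble(strips):
    # Start from strip 0 and repeatedly append the unused strip whose left
    # edge best matches the current right edge. Only O(n^2) comparisons,
    # but one bad match can derail the whole chain.
    order, remaining = [0], set(range(1, len(strips)))
    while remaining:
        last = strips[order[-1]]
        nxt = min(remaining, key=lambda i: edge_cost(last, strips[i]))
        order.append(nxt)
        remaining.remove(nxt)
    return order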










Tags: image-processing

asked 4 hours ago by Shogun (edited 3 hours ago)
Comments:

  • lox (3 hours ago): Can you edit your question to define "destroyed documents"?

  • David Richerby (3 hours ago): You should look at the methods used to recover the Stasi (East German secret police) archives, which were shredded, or torn up by hand once all the shredders had broken from overuse, after the fall of the Berlin Wall. The BBC has a very high-level summary.
1 Answer

answered 3 hours ago by Evil (score 2)

Your problem is NP-complete even for strips (n strips yield (2n)! arrangements), so in practice people use heuristics: transforms such as the Hough transform and morphological filters (to match the continuity of text across strips, though this considerably increases the cost of matching), or search techniques such as genetic algorithms, neural-network-guided search, and Ant Colony Optimization.
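
As one concrete instance of such heuristic search, here is a minimal hill-climbing sketch: score an ordering of strips by the summed edge mismatch between neighbours, then keep random swaps that improve the score. This is purely illustrative (the cost function and all names are my assumptions), not the method of the paper cited below.

import random
import numpy as np

def total_cost(strips, order):
    # Sum of mean squared edge differences between consecutive strips.
    return sum(
        float(np.mean((strips[i][:, -1].astype(float)
                       - strips[j][:, 0].astype(float)) ** 2))
        for i, j in zip(order, order[1:]))

def hill_climb(strips, iterations=10000, seed=0):
    rng = random.Random(seed)
    order = list(range(len(strips)))
    rng.shuffle(order)
    best = total_cost(strips, order)
    for _ in range(iterations):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # propose a swap
        cost = total_cost(strips, order)
        if cost < best:
            best = cost                              # keep the improvement
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best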



For a summary of the consecutive steps and the various algorithms, I recommend "An Investigation into Automated Shredded Document Reconstruction using Heuristic Search Algorithms".



The problem itself can run into nasty cases. When the document is not fully sharp (blurred, or printed at low resolution), the strips are narrow, and the cut was made by a physical cutter with dulled blades, standard merging methods such as panorama photo stitchers get lost and yield improper results, because information is destroyed along with the thin slivers lost to the cut. By contrast, if you have a full digital image cut into pieces, the problem is exactly as hard as a jigsaw puzzle; a scanned, physically shredded document instead requires approximate matching.



To make the algorithm automatic there is another problem: feeding the pieces. You can rarely supply axis-aligned strips, so to start the process it is convenient to scan all the pieces as a single picture, laid out by hand. That introduces one more subproblem, though an easy one: detect the blobs and rotate each piece upright, as in the sketch below.
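
A minimal sketch of that blob-detection-and-rotation step with OpenCV, assuming light paper pieces scanned against a dark background (the filename and the noise threshold are illustrative assumptions):

import cv2
import numpy as np

scan = cv2.imread("pieces_on_scanner.png", cv2.IMREAD_GRAYSCALE)

# Light paper on a dark scanner lid: Otsu threshold, then find blobs.
_, mask = cv2.threshold(scan, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

pieces = []
for cnt in contours:
    if cv2.contourArea(cnt) < 100:          # skip dust and scanner noise
        continue
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    # Rotate the whole scan so this piece is axis-aligned, then crop it.
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    upright = cv2.warpAffine(scan, rot, (scan.shape[1], scan.shape[0]))
    x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)
    pieces.append(upright[y0:y0 + int(h), x0:x0 + int(w)])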



A special shredder yields very small rectangles instead of strips. For comparison, a class P-1 shredder gives strips 6-12 mm wide of any length (pieces of about 1800 mm^2), while a class P-7 shredder gives rectangles with an area of less than 5 mm^2. When you get rectangles instead of strips, the problem yields (4n)! permutations even assuming a single one-sided document; and if one bag holds shreds of many unrelated text-only documents (no pictures), the problem is not really tractable.
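
To get a feel for those numbers, a back-of-the-envelope calculation (the A4 sheet size is my assumption; the piece areas are the ones quoted above):

from math import lgamma, log

sheet_mm2 = 210 * 297                        # one A4 page, about 62,370 mm^2
for label, piece_mm2 in [("P-1 strip (~1800 mm^2)", 1800),
                         ("P-7 particle (<5 mm^2)", 5)]:
    n = sheet_mm2 // piece_mm2               # rough piece count per page
    # (4n)! overflows any direct computation, so report its decimal digit
    # count instead, using log10(k!) = lgamma(k + 1) / ln(10).
    digits = lgamma(4 * n + 1) / log(10)
    print(f"{label}: ~{n} pieces per page, (4n)! has ~{digits:,.0f} digits")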






Comments:

  • Alexander (1 hour ago): There may be (2n)! arrangements of the shredded strips, but does that still determine the time complexity? Whenever you find matches, you can group the pieces together into a "thick strip" whose first and last edges are the only ones that matter for comparison against other strips. This clumping should reduce the search space hugely, though I don't know whether it is still O(n!).

  • Evil (1 hour ago): @Alexander That count is not the complexity per se. The true hardness comes from the fact that you are never fully sure whether a match is really good. Look at figure 6.1 on page 69 of the PDF (the tiger picture) and the pictures that follow: there are errors. You still have to check the fitness of all edges pairwise. Grouping pieces seems nice, but committing to a group rules out other matches that score worse locally yet give a lower overall MSE. If exact matching of the edges were a viable option it would be blazingly fast; in my answer I assume it is not.

  • Alexander (40 mins ago): Makes sense! Thanks.