Gobbledback Paper

ultra-secure paper
 
(+2, -2)

Gobbledback Paper is a range of ultra-secure stationery, consisting of sheets of paper that are blank on one side but covered with lines of totally random gobbledegook on the other.

The gobbledeback comes in many forms, but each one is clearly described so that it can be matched exactly, character for character, by whatever is typed on the message side of the paper, e.g. 12-point Times New Roman with 10-point leading.

The purpose of all this is to ensure that when the documents are eventually shredded, there is absolutely no way they can be reconstituted by assembling the fragments.

Those in the public eye can also make sure that all private documents are only ever seen with the gobbledeback side face up.

xenzag, Apr 24 2011
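
A minimal sketch of the generation step, assuming a monospaced face so that each random character lands in the same grid cell as a real one (the annotations below turn on exactly this alignment point). The function name, the alphabet, and the choice of Python are illustrative, not part of the idea:

    import secrets
    import string

    # Alphabet for the gobbledeback, drawn from the same character set
    # as the front so that neither side stands out statistically.
    ALPHABET = string.ascii_letters + string.digits + " .,;:'-"

    def gobbledeback(front_lines):
        """For each line of real text, emit a random line of exactly
        the same length, so every character cell on the back mirrors
        one on the front (assumes matching font, size and leading)."""
        return ["".join(secrets.choice(ALPHABET) for _ in line)
                for line in front_lines]

    # Example: the back of a two-line memo.
    for line in gobbledeback(["Meet at the usual place at noon.",
                              "Bring the documents."]):
        print(line)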

Making briquettes from waste paper http://www.youtube...._8U&feature=related
[Klaatu, Apr 27 2011]


Annotation:

       //absolutely no way they can be reconstituted by assembling the fragments// Really? I don't quite see why. Once a page is partly reassembled, it should be obvious which is the front and which the gobbledeback. In fact, the back might even help to match strips, in some cases.
spidermother, Apr 24 2011
  

       Try it. There is no way to tell the front of each piece of paper from the back, and most shredding now produces fragments, not strips. That's why the idea uses the word "fragments" and not "strips": strips are totally insecure.
xenzag, Apr 24 2011
  

       That makes a difference. Most of the shredders I've come across are the strip kind, probably because they were fairly old. Even so, you are providing redundant information, which can make code-cracking easier.
spidermother, Apr 24 2011
  

       I don't understand how any form of printed pattern would make it so that there's "no way to tell the front of each piece of paper from the back". If you know what the pattern is, can't you recognize it? Maybe a human has trouble, but couldn't I, e.g., program a computer to do optical recognition on the fronts and the backs, and sort out the fragments that way?
jutta, Apr 24 2011
  

       Well I imagine you don't know what the pattern is because it's random strings of letters.   

       I can see this working if the shredder cuts the paper into letter-sized squares.   

       This would be better implemented at the user's end rather than the stationery supplier's. Simply add a routine to the word-processing software to run the printer's duplex mode, adding in the randomly generated text on the fly, using the font settings from the typed text.
pocmloc, Apr 24 2011
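
[pocmloc]'s routine might look like a print-time filter. A sketch, assuming the word processor hands over a list of laid-out pages and a duplex-capable driver; both names here are hypothetical:

    def duplex_job(pages, make_gibberish):
        """Interleave each real page with a freshly generated gibberish
        page, so that a duplex printer puts the gobbledeback on every
        verso. make_gibberish(page) must reuse that page's font
        settings."""
        job = []
        for page in pages:
            job.append(page)                   # recto: the real text
            job.append(make_gibberish(page))   # verso: matching gibberish
        return job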
  

       You know something about the pattern - if you have part of a character, then you can potentially find the piece that has the rest of the character, and thus make progress even in the absence of usable clues on the other side.
spidermother, Apr 24 2011
  

       An elaboration on [pocmloc]'s elaboration:   

       The special military-grade wordprocessor, equipped with a high-order Markov Unwinizer, generates, for every page of real text, a page of almost-coherent text (think: Postmodernism Generator), similar except that the facts have been changed. To be used with double-sided printers, of course. The document assigning targets to MIRVed ICBMs in North Dakota substitutes different Russian cities selected from an atlas, for example, or the next season's prime-time lineup describes nonexistent reality-based sitcom ideas culled from the slush-pile of rejected ideas. Not flagrantly impossible -- start with a very high-order Markov model primed with a topic-specific corpus, then add a database of proper nouns, which recognizes them in the text and substitutes others randomly.

       Essentially the Markov model, here, is the inverse of the one the bad-guys are assumed to be using to help piece together the bits of paper.
mouseposture, Apr 24 2011
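
A toy version of the Unwinizer, assuming a word-level Markov model and a crude capitalisation test for proper nouns; a real one would need the high-order, topic-specific corpus described above, and "decoys" stands in for the atlas or slush-pile:

    import random
    from collections import defaultdict

    def train(words, order=2):
        """Order-n word model: map each n-word context to the words
        observed to follow it in the corpus."""
        model = defaultdict(list)
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def unwinize(length, model, decoys, order=2):
        """Walk the model to produce almost-coherent text, then swap
        anything capitalised (crude proper-noun detection) for a decoy."""
        out = list(random.choice(list(model)))
        while len(out) < length:
            successors = model.get(tuple(out[-order:]))
            if not successors:                       # dead end: re-seed
                successors = list(random.choice(list(model)))
            out.append(random.choice(successors))
        return " ".join(random.choice(decoys) if w[:1].isupper() else w
                        for w in out)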
  

       Well, [spidermother], obviously the shredder is registered so that it cuts between the characters. And the gobbledytext is justified so that each of its characters is exactly lined up with one from the real text on the other side of the paper.
pocmloc, Apr 24 2011
  

       Look at it this way. The algorithm that pieces together the scraps of paper is trying many different combinations, screening them to see if any "make sense." Brute force doesn't work, because the combinatorial explosion makes the problem effectively insoluble. So it's trying to exploit every bit of non-randomness it can.   

       Suppose it's identified 100 scraps which, by their shape, potentially fit one next to the other: which combination is right? Suppose in 70000 possible pairs, the cut, for both scraps, goes right through the middle of a letter, and the partial letters don't match. That, as [pocmloc] pointed out, narrows things down. [xenzag]'s idea, in its original form, defeats that.   

       But for the remainder, the cut falls in the white space between letters. For some possible pairs, the first letter is, say, "Q." A first-order Markov model primed with an English-language corpus is going to assign a high probability to the next letter being "U." The brute-force trial & error algorithm will try that possibility first, improving computational efficiency. [xenzag]'s idea, in its original form, doesn't help, here.   

       A 4th-order Markov model might notice that the preceding characters are "Al<space>" and assign a higher probability to "A" ('cause it's probably "Al Qaeda"). A very high-order Markov model might have noticed "terrorist threat" elsewhere on the page, and reassigned the priorities between "U" and "A" on that basis. A Markov model primed with a topic-specific corpus might choose "U" or "A" depending on whether the document originated with the Middle-East desk at the Ministry of Foreign Affairs, vs. the Medical Records department at East Overshoe Municipal Hospital.

       If the gibberish is, itself, generated from such a topic-specific high-order Markov model, then it's invulnerable to this approach: the pieced-together document will contain approximately 50% gibberish.

       (And, unfortunately, 50% real message. We really need to print more than one page of gibberish for every page of real document. Perhaps this could be done with a printer mounted directly on the shredder.)
mouseposture, Apr 24 2011
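
The "exploit every bit of non-randomness" step, reduced to a first-order sketch; "corpus.txt" stands in for whatever English-language corpus the attacker primes the model with:

    from collections import Counter, defaultdict

    def letter_model(corpus):
        """First-order character model: P(next letter | previous letter)."""
        counts = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            counts[a][b] += 1
        return {a: {b: n / sum(c.values()) for b, n in c.items()}
                for a, c in counts.items()}

    def pair_score(model, left, right):
        """Plausibility that fragment 'right' follows fragment 'left',
        judged only by the two characters meeting at the cut."""
        return model.get(left[-1], {}).get(right[0], 0.0)

    # Rank candidate right-hand neighbours for a scrap ending in "q":
    # the search tries "u..." before "a...", unless the corpus says
    # otherwise.
    model = letter_model(open("corpus.txt").read().lower())
    candidates = ["uantum leap", "aeda cell", "zebra"]
    ranked = sorted(candidates, reverse=True,
                    key=lambda frag: pair_score(model, "al q", frag))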
  

       [-] won't work:   

       - the back would have to be printed from the same printer or it'd be easy to tell the sides from each other (you don't have to know which side is which, just that they're different sides). At this point, granted, you'll be running reconstructions on both back and front, but only one will resolve.   

       - the fonts have to be fixed-width and on the same "grid" as the back, or the shredder will cut more than whitespace, thus easing reconstruction.

       - using a Markov model on the back to muddle reconstruction means that you'd have to analyse a database of past correspondence to get the right words and distribution-frequencies, at which point, should you ever decide to encrypt, you've already given up your word distribution-frequency list.
FlyingToaster, Apr 24 2011
  

       //database of past correspondence// corpus, yes.   

       //should you ever decide to encrypt// Oh, wait -- I see. Damn that's clever. But hardly a deal-breaker.
mouseposture, Apr 24 2011
  

       Assuming a shredder with a set cut width, you don't need to analyze the background word frequency, but instead the background syllable frequency (well, letter-grouping frequency). Using an additional convolution layer to generate dummy text that uses approximately the same letter-grouping frequency in unrelated words would aid somewhat in protecting encrypted text.

       One other thing: I believe (without any evidence to back it up) that paper manufacture results in a different microstructure on each side, minimizing the utility.
MechE, Apr 27 2011
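
[MechE]'s letter-grouping analysis, sketched with the grouping width standing in for the shredder's set cut width; concatenating sampled groupings only approximates the target frequencies, which matches the "approximately" above:

    import random
    from collections import Counter

    def grouping_frequencies(text, width=3):
        """Frequency table of letter groupings of the given width,
        normalised to probabilities."""
        grams = Counter(text[i:i + width]
                        for i in range(len(text) - width + 1))
        total = sum(grams.values())
        return {g: n / total for g, n in grams.items()}

    def dummy_text(freqs, length):
        """Dummy text built by concatenating groupings sampled
        according to their frequency in the real corpus."""
        grams = list(freqs)
        weights = list(freqs.values())
        out = ""
        while len(out) < length:
            out += random.choices(grams, weights=weights)[0]
        return out[:length]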
  

       so, nothing like gabble duck paper? That would be more useful. Some kind of post-it notes on the back of the gabble duck, and any unauthorised readers would be eaten.
not_morrison_rm, Apr 27 2011
  

       The efficacy of this strategy would be improved by including sheets with gobbledygook on both sides, to be shredded with the message. The more such pages are included, the higher the security value. The serious cryptographer will incorporate several reams of mixed gobbledy and a few more of just gook to ensure maximal cryptographiness.
bungston, Apr 27 2011
  

       I thought this was going to be edible (gobbled) paper - the best way to stop someone else reading your Confidential Top Secret stuff is to eat the pages.
On that note, why shred? Dissolving the pages back into cellulose mush, which can then be recycled into paper, would be far more effective. All you need is a vat of whatever paper recyclers use to mushify scrap paper (probably just water and a few additives).
neutrinos_shadow, Apr 27 2011
  

       Or, heat your house with the waste papers. <link>
Klaatu, Apr 27 2011
  

       //sheets with gobbledygook on both sides, to be shredded with the message// That's what I said.
mouseposture, Apr 27 2011
  

       [neutrinos_shadow] Futurama did that.
spidermother, Apr 28 2011
  

       //gabble duck paper// and a hooder paper-shredder.
FlyingToaster, Apr 28 2011
  


 
