Did I miss something?

Conferring with cryptographic professionals.  A copy of an email to Professor Dan Boneh, a professor of cryptography and computer security at Stanford.  To date, this is the most simplified and straightforward explanation of my cryptographic theories behind T.E.C.  What is his opinion of my encryption theories that are the underlying properties of the T.E.C. encryption codecs?

Dear Professor Boneh,

Hope this finds you well. I took your online course Crypto I on Coursera. I hope it is OK that I contact you about some crypto ideas that I had.  Several years ago, while studying the Fibonacci sequence, I came up with several crypto ideas. These ideas work alone and in conjunction with some linguistic and compression facts (from other, prior research). Separately, I was successful in publishing the linguistics and compression work.  While I am a published author (in comp sci with the ACM, and in other fields), I was not successful in passing peer review with my crypto ideas. (At the time, my knowledge of conventional crypto lingo was not what it is now. The articles are on Arxiv.org at Cornell.) Although, there were those who accepted my ideas. (Some math professors who were my former comp sci professors; they did have some experience in the field.) While others may not accept my conjectures, I have yet to receive a logical or mathematical refutation of my ideas. May I run my ideas by you?

I thank you in advance for your consideration.


Givon Zirkind

If I am missing something, I would really like to know.

I. Premise: Frequency analysis is the basis of decryption. If the natural letter-frequency distribution is normalized, then decryption is not possible.

There is a corollary to this. (An analogy from my work in compression.) This can be done by changing the symbol set (and in other ways). For example, instead of “book” — “bo@k”. So, every occurrence of “the” can be different, or at least take many possible combinations: “the” “th¢” “th£” “¥he” “€h§”. With enough alternate symbols for “e” & “t”, one can normalize the frequency of the letters. Then, frequency analysis is no longer possible.
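A minimal Python sketch of this homophone idea (the symbol table and message are illustrative stand-ins, not the actual T.E.C. tables):

```python
import random
from collections import Counter

# Illustrative homophone table: high-frequency letters get several
# substitute symbols, so each output symbol occurs less often.
HOMOPHONES = {
    "e": ["e", "¢", "£", "§"],
    "t": ["t", "¥", "€"],
    "o": ["o", "@"],
}

def normalize(text):
    # Replace each letter with a randomly chosen homophone;
    # letters not in the table pass through unchanged.
    return "".join(random.choice(HOMOPHONES.get(ch, [ch])) for ch in text)

msg = "the theme of the theory"
enc = normalize(msg)
# The count of plain "e" drops, because occurrences of "e" are now
# spread over four different symbols.
print(Counter(msg)["e"], Counter(enc)["e"])
```

(In a real codec the choice among homophones would be weighted so that every symbol lands at the same target frequency, rather than chosen uniformly.)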

(One professor who commented told me that frequency normalization is commonly done nowadays in encryption, and that decryption is still possible. But he did not give me any references as to how the decryption is done despite the unavailability of frequency analysis.)

I have read in William Friedman’s works (Military Cryptanalysis) that there is always a trace of the frequency. However, I have produced prototypes for which I cannot recover a frequency analysis. I have used spreadsheet models and conventional cracking software. (There is a method that I read about that I was not able to program. Something in Applied Cryptography by Bruce Schneier mentions using alignment and XORing a message with a shifted copy of itself to reveal redundancy.  But I was unsuccessful at that.)
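As I understand the technique I could not reproduce, it can be sketched like this: compare (equivalently, XOR) the ciphertext against a shifted copy of itself and count coincidences; for a repeating key, shifts that are multiples of the key length show a spike. A toy demonstration with made-up data (the key and text are illustrative):

```python
def coincidences(data: bytes, shift: int) -> int:
    # Count positions where the stream agrees with a shifted copy of
    # itself; XORing equal bytes gives zero, so this equals counting
    # zero bytes in (data XOR shifted data).
    return sum(1 for a, b in zip(data, data[shift:]) if a == b)

# Made-up plaintext, XOR-encrypted with a repeating 4-byte key.
plaintext = b"the quick brown fox jumps over the lazy dog " * 10
key = b"K3y!"
cipher = bytes(c ^ key[i % len(key)] for i, c in enumerate(plaintext))

# Shifts that are multiples of the key length (4) line up identical
# key bytes, letting the plaintext's redundancy show through.
for shift in range(1, 9):
    print(shift, coincidences(cipher, shift))
```

At shifts 4 and 8 the coincidence count jumps, revealing the key period even though single-symbol frequency analysis fails.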

So, I posit that while humans are very good at recognizing patterns (a person could see “b*@k” and assume it means “book”), computers are not.  If the frequency were normalized, a dictionary attack would fail and deciphering by computer would not be possible.  [While a dictionary attack would fail, it would be possible, by using a known-plaintext attack, to figure out the extended symbol set.]

However, I further posit that if the frequency were normalized AND then the text were encrypted, decryption would not be possible at all, neither by a human nor by a computer.

(I built codecs based upon this theory.  Each codec is language specific.  I was not able to decrypt texts of 500 words encrypted with this method.  I tried decryption by hand and with some common software.  I used the Gettysburg Address as plaintext.)

Am I correct?

[Of course, in the real world of implementation, I can imagine weaknesses & ways to decrypt. For one, this requires a symbol table (like the ASCII table, with the extended alphabet to normalize the frequency) which must be kept secret. Memory could be dumped. Object code decompiled.  The table should not be static. Rather, the table should be dynamic. If the table were found out, then decryption is made much easier, possibly with brute force.]

II. Binary Encryption

There is more than one binary representation; base 2 is just one possibility. There is also base prime. Any number can be expressed as a combination of primes, with each position representing the next prime and a 0/1 indicating whether that prime is selected.

Example: For the position sequence 13-11-7-5 3-2-1-0, the bit pattern 0000 1010 in base prime would express 3 + 1 = 4.
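A tiny Python sketch of this base-prime idea, using the toy position sequence above (note that the sequence, as given, includes 1 and 0 so that small values are reachable):

```python
from itertools import product

# The toy position sequence from the example above.
POSITIONS = [13, 11, 7, 5, 3, 2, 1, 0]

def encode(value):
    # Brute-force search for a bit pattern whose selected positions
    # sum to the value.  Purely illustrative; 2^8 patterns is tiny.
    for bits in product("01", repeat=len(POSITIONS)):
        total = sum(p for p, b in zip(POSITIONS, bits) if b == "1")
        if total == value:
            return "".join(bits)
    return None  # not every value is reachable with this sequence

def decode(bits):
    return sum(p for p, b in zip(POSITIONS, bits) if b == "1")

print(encode(4))           # one valid pattern is 0000 1010 -> 3 + 1 = 4
print(decode("00001010"))  # 4
```

Note that not every value has a representation under this particular sequence (the maximum expressible sum is 42), which is one practical limitation of the toy example.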

There are also base phinary and base Fibonacci. And, again, the positions can be permuted. Also, one could use 16 bits to express 8 bits with a mask.  (Base Fibonacci presents a lot of nice properties and is used in networking for its self-recovery (self-synchronizing) properties.)

Ex. For the position sequence 0-1-1-2 3-5-8-13, the bit pattern 0100 1000 in base Fibonacci expresses 1 + 3 = 4.
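The base-Fibonacci encoding can be sketched with the usual greedy (Zeckendorf) rule. Here I use distinct Fibonacci values as positions, largest bit first (an illustrative variant of the sequence above, without the duplicate 1 and the 0):

```python
# Distinct Fibonacci values used as bit positions, smallest first.
FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]

def to_fib(n):
    # Greedy (Zeckendorf) encoding: repeatedly take the largest
    # Fibonacci number that still fits.  Largest position first.
    bits = []
    for f in reversed(FIBS):
        if f <= n:
            bits.append("1")
            n -= f
        else:
            bits.append("0")
    return "".join(bits)

def from_fib(bits):
    return sum(f for f, b in zip(reversed(FIBS), bits) if b == "1")

print(to_fib(4))  # the two 1-bits select 3 and 1
```

The greedy rule gives every value a unique representation with no two adjacent Fibonacci numbers used, which is the property behind the self-synchronization mentioned above.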

Another example of alternate binary representations:
Conventional base 2 numbers are represented as
2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0.
But, I could just as well represent the number with
2^6 + 2^2 + 2^5 + 2^3 + 2^1 + 2^0 + 2^4 + 2^7.

This gives 8! possible combinations.  [Permutations eventually outgrow exponentials.  If a crypto system’s possibilities are calculated exponentially, it would require the most computational power to decrypt. Correct?  Something you said in the course.  Then, if a system is based upon permutations, it would require even more computational power and be computationally secure.  Correct?]
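A sketch of the permuted-positions idea (the particular permutation is an illustrative stand-in for a secret key; there are 8! = 40,320 such keys per byte):

```python
# An illustrative secret permutation: PERM[i] is the source bit
# position that lands at output position i.
PERM = [6, 2, 5, 0, 3, 1, 7, 4]

def permute_byte(b, perm):
    # Scatter the 8 bits of b according to perm.
    out = 0
    for i, src in enumerate(perm):
        out |= ((b >> src) & 1) << i
    return out

def invert(perm):
    # Build the inverse permutation for decoding.
    inv = [0] * len(perm)
    for i, src in enumerate(perm):
        inv[src] = i
    return inv

b = 0b10110001
scrambled = permute_byte(b, PERM)
restored = permute_byte(scrambled, invert(PERM))
print(bin(scrambled), restored == b)
```

An attacker who does not know the permutation must try the position orderings, which is the brute-force cost the bracketed question asks about.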

If using 16 bits, with 8 bits for data and a mask of 8 dummy bits, the combinations grow to be computationally secure.
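The 8-data-bits-in-16 scheme might look like this (the mask positions are an illustrative secret; the other 8 bits are random dummies):

```python
import random

# Illustrative secret mask: the 16-bit positions that carry real data.
MASK_POSITIONS = [0, 2, 5, 6, 9, 11, 12, 15]

def widen(byte):
    # Place the 8 data bits at the masked positions; fill the
    # remaining 8 positions with random dummy bits.
    word = random.getrandbits(16)
    for i, pos in enumerate(MASK_POSITIONS):
        bit = (byte >> i) & 1
        word = (word & ~(1 << pos)) | (bit << pos)
    return word

def narrow(word):
    # Recover the data byte by reading only the masked positions.
    byte = 0
    for i, pos in enumerate(MASK_POSITIONS):
        byte |= ((word >> pos) & 1) << i
    return byte

b = 0xA7
print(narrow(widen(b)) == b)
```

Because the dummy bits are random, the same plaintext byte widens to many different 16-bit words, which also helps flatten the ciphertext's statistics.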

If using base Fibonacci, the number grows considerably. (At least 11 positions, the Fibonacci numbers 1 through 144, are required to express 0-255.)

The number of possible combinations makes the encryption computationally secure. (Certainly for base Fibonacci.)

[I understand that using these bases increases the length of the ciphertext.  However, I maintain that this is negligible and acceptable for security.  Phinary, on the other hand, is too unwieldy for practical use.]

I know sequential encryption does not make for more security. Meaning, that 2 sequential Caesar substitutions resolve to just one Caesar substitution. However, this encryption of the binary is not the same as the encryption of the plaintext. This encryption of the binary is totally unrelated to the plaintext, as far as I can calculate or perceive.

Ex. If presented with an encrypted bit stream, not knowing whether it is big-endian or little-endian, can it be decrypted without guessing? Meaning, must I use brute force?  First try to decrypt the bit stream as big-endian, and then try to decrypt it as little-endian?  Or, is there a way (a method, an algorithm) to know whether it is big-endian or little-endian?

And, as I enumerated, big-endian and little-endian are not the only possible binary representations available.

This is security through obscurity. But, I posit the possibilities are so great that it is computationally secure.

I posit that, without any encryption of the plaintext at all, choosing an alternate binary representation creates a different representation that has to be guessed. Brute force is the only way of knowing.

Am I correct?

[Of course, in the real world of implementation, I can imagine weaknesses & ways to decrypt. For one, this requires both parties knowing which binary representation was used.  That requires secrecy and can be compromised.  Selection parameters keylogged.  Object code decompiled.  Also, the representation chosen should not be static. Rather, the representation should be dynamic. If the representation were found out, then decryption is just ordinary decryption.]

III. Breaking Transitivity A != B != C


If A — plaintext; B — the ASCII table; C — binary representation;
N() — language frequency normalization;
then E1(A) != B != E2(C).

If the binary representation is unknown, then for each and every separate possible binary representation, a new frequency analysis (or other decryption method) must be tried.

Substituting E1(N(p)) for E1(A) ==> E1(N(p)) != B != E2(C) would only make the encryption more secure. This encryption would provide computationally secure and information-theoretic encryption, without an algorithmic or mathematical method of solution. (Cryptanalytically unbreakable.)

Am I correct?

[Of course, in the real world of implementation, I can imagine weaknesses & ways to decrypt. For one, this requires a secret symbol table (like the ASCII table, with the extended alphabet to normalize the frequency) and a secret binary representation. Memory could be dumped. Object code decompiled.  Also, neither the table nor the representation should be static. Rather, they should be dynamic. If they were found out, then decryption is possible.]
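To make the layering in this section concrete, here is a toy end-to-end sketch of E2(E1(N(p))), with illustrative stand-ins throughout: N flattens letter frequencies with homophones, E1 is a toy byte substitution, and E2 re-encodes each byte's bits under a secret position permutation. (None of these are the actual T.E.C. codecs.)

```python
import random

HOMOPHONES = {"e": "e¢£§", "t": "t¥€"}  # N: flatten letter frequencies
SHIFT = 7                                # E1: toy substitution key
PERM = [6, 2, 5, 0, 3, 1, 7, 4]          # E2: secret bit permutation

def N(text):
    # Spread high-frequency letters over several symbols.
    return "".join(random.choice(HOMOPHONES.get(c, c)) for c in text)

def E1(text):
    # Toy byte-level substitution over the UTF-8 encoding.
    return bytes((b + SHIFT) % 256 for b in text.encode("utf-8"))

def E2(data):
    # Re-encode each byte's bits under the secret permutation.
    def perm_byte(b):
        return sum(((b >> src) & 1) << i for i, src in enumerate(PERM))
    return bytes(perm_byte(b) for b in data)

cipher = E2(E1(N("the secret message")))
print(cipher.hex())
```

Each layer must be undone with its own secret (the homophone table, the substitution key, the bit permutation), which is the breaking-of-transitivity argument in symbols.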