
The RSA Cryptosystem - New General Megathread for the 20th of September 2023

On this day in 1983, a patent was granted to MIT for a new cryptographic algorithm: RSA. "RSA" stands for the surnames of its creators: Rivest, Shamir, and Adleman. RSA is a "public-key" cryptosystem. Prior to the creation of RSA, public-key cryptography was not in wide use.

Public-key cryptography

Cryptography is the study and practice of secure communication. Throughout most of its history, cryptographic techniques were entirely dependent on the involved parties already sharing a secret that could be used to reverse the encryption process. In early cryptography, the secret was the encryption process itself (for example, a Caesar cipher, which substitutes each letter in a secret message with the letter a fixed number of steps down the alphabet). As cryptography became more systematic and widespread, it became necessary to separate cryptographic secrets from the cryptographic techniques themselves, both because the techniques could become known to the enemy and because static cryptographic schemes are more vulnerable to cryptanalysis.

Even so, there remains the problem of sharing secrets between the communicating parties securely. This has taken many forms over the years, from word of mouth to systems for the secure distribution of codebooks. But this kind of cryptography always requires an initial secure channel of communication to exchange secrets before an insecure channel can be made secure by the use of cryptography. And there is always the risk of an enemy capturing keys and rendering the entire system worthless.
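To make that concrete, here's a minimal Caesar cipher sketch in Python (the shift of 3 and the sample message are just illustrative):

```python
# Caesar cipher: shift each letter a fixed number of steps down the alphabet.
# Here the "secret" is the technique itself: the direction and size of the shift.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

print(caesar("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # shifting back decrypts: ATTACK AT DAWN
```

With only 26 possible shifts, anyone who knows the technique can simply try them all, which is exactly why the secret eventually had to move out of the technique and into a separate key.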

Only relatively recently has this fundamental problem been addressed, in the form of public-key cryptography. In the late 20th century, it was proposed that two parties seeking to communicate securely could each exchange some non-secret information (a "public" key) derived from privately held secret information (a "private" key), relying on a mathematical "trap-door" function that is easy to compute in one direction but hard to reverse without special information. In other words, it should be easy for anyone to encipher a message using your public key, but hard for anyone to decipher it without your corresponding private key. At the time this idea was proposed, there was no known computationally hard trap-door function that could make it work in practice. Shortly after, several candidates, and cryptosystems based upon them, were described publicly, including one that is still with us today...

RSA

Ron Rivest, Adi Shamir, and Leonard Adleman at MIT made many attempts to find a suitably secure trap-door function for a public-key cryptosystem in the year leading up to the publication of their famous paper in 1978. Rivest and Shamir, the computer scientists of the group, would create a candidate trap-door function, while Adleman, the mathematician, would try to find a way to easily reverse it without the secret trap-door information. Supposedly, it took them 42 attempts before they created a promising new trap-door function.

As described in their 1978 paper "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", RSA is based upon the principle that factoring very large numbers is computationally difficult (for now!). The paper is a great read if you're interested in these topics. The impact of RSA can't be overstated. The security of communications on the internet has depended on RSA and other public-key cryptosystems since the very beginning. If you check your browser's connection info right now, you'll see that the cryptographic signature attached to Hexbear's certificate is based on RSA! In the past, even the exchange of symmetric cipher keys between your web browser and the web server would have been conducted with RSA, but there has been a move away from that (toward key exchanges with forward secrecy) to ensure that a compromise of either side's RSA private key would not compromise all communications that ever happened.
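To make the scheme concrete, here's a toy sketch of RSA key generation, encryption, and decryption in Python. The primes are absurdly small and chosen only for illustration; real RSA uses primes hundreds of digits long plus padding schemes, so treat this as a demonstration of the math and nothing more:

```python
# Toy RSA with tiny primes -- illustrative only, never use for real security.
p, q = 61, 53
n = p * q                 # public modulus (factoring n recovers p and q)
phi = (p - 1) * (q - 1)   # Euler's totient of n, easy only if you know p and q
e = 17                    # public exponent, chosen coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message
```

The trap door is visible here: computing d requires phi, which is trivial when you know p and q but, as far as anyone publicly knows, requires factoring n when you hold only the public key.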

The future of RSA?

In 1994, the mathematician Peter Shor developed an algorithm for quantum computers capable of factoring the large integers used in the RSA scheme. In spite of this, RSA has seen widespread and increasing use in securing communications on the internet, because until recently the creation of a quantum computer large enough to run Shor's algorithm at sufficient scale was seen as very far off. With advances in practical quantum computers, though, RSA is on its way out. Current quantum computers are still a very long way from being able to break RSA, but it's looking more and more plausible that someone could eventually build one that can. A competition held by the US National Institute of Standards and Technology, similar to the one that selected the Advanced Encryption Standard, is already underway to select standard cryptographic algorithms that can survive attacks from quantum computers.


reminders:

  • 💚 You nerds can join specific comms to see posts about all sorts of topics
  • 💙 Hexbear’s algorithm prioritizes comments over upbears
  • 💜 Sorting by new you nerd
  • 🌈 If you ever want to make your own megathread, you can reserve a spot here nerd
  • 🐶 Join the unofficial Hexbear-adjacent Mastodon instance toots.matapacos.dog




  • I don't like Prolog but I can see its potential. The contradiction between the dominant, static, sequential mode of computer programming and the highly dynamic, reflective, and creative nature of human thought must be resolved somehow. We can't keep going on like this. Programmer-reactionaries (Suckless, CAT-V, some GNU people, etc.) want a return to the time before the complexity scaled out of control, but it's not possible to return to that time. The capitalists in control of the direction of technological development don't care as long as the money keeps flowing. Actually, it's better for them, because the amount of labor needed to maintain this infrastructure is massive and can be exploited. Not to mention that they can take advantage of the endless security problems present in the computer infrastructure of enemies and competitors.

    • Why do you think a logical paradigm would work so much better than what already exists? IDK if maybe it's my neurodivergence but I find it way, way easier to think in an object oriented or functional fashion than a mathematical, logical paradigm. And I think OOP or FP are generally just as capable of being dynamic and reflective as any other high level paradigm.

      If anything, the issue might be that software development is a productive process where workers put something that is very close to their pure, unadulterated thoughts into the commodities being produced. But the division of labor requires that those thoughts be beaten into a box that interfaces well with the rest of the pipeline, because allowing each individual's idiosyncrasies to bleed into each other's code would be a problem. So it's the division of labor that forces software developers to use limiting design patterns instead of following a more freeform process.

      • Why do you think a logical paradigm would work so much better than what already exists? IDK if maybe it's my neurodivergence but I find it way, way easier to think in an object oriented or functional fashion than a mathematical, logical paradigm.

        As it is right now, the way computer touchers get computers to do things is by explicitly defining what operations need to be carried out and how to integrate them into the larger state of the pre-defined program. What the logical approach could offer is operations being automatically and dynamically carried out and integrated as a natural result of programmer-defined relations between things instead of a static, predefined procedure that would be made up of long and buggy explicit instructions for the entire process.
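        To give a flavor of what I mean, here's a tiny sketch in Python (the parent/grandparent rule and the names are made up for illustration): you state facts and a relation, and a little engine derives everything they imply, instead of you scripting each step:

        ```python
        # Facts are plain tuples; the one rule below is the only "logic" we state.
        facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

        def apply_rules(known):
            derived = set(known)
            # Rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)
            for (r1, x, y) in known:
                for (r2, y2, z) in known:
                    if r1 == "parent" and r2 == "parent" and y == y2:
                        derived.add(("grandparent", x, z))
            return derived

        # The engine, not the programmer, decides what gets computed and when:
        # iterate to a fixpoint, deriving every fact the relations imply.
        while True:
            new_facts = apply_rules(facts)
            if new_facts == facts:
                break
            facts = new_facts

        print(sorted(facts))  # includes ("grandparent", "alice", "carol")
        ```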

        That being said, current logical programming languages have limited applications beyond simple deduction of facts based upon the rules of static, non-contradictory formal logic. It's so bad now that most logical programming languages have facilities for the programmer to drop into writing imperative code at any point.

        And I think OOP or FP are generally just as capable of being dynamic and reflective as any other high level paradigm.

        The kind of language I'm trying to describe (and have actually been trying to create recently) doesn't exclude OOP. It actually elevates the relationships between objects to being the primary way of describing the development of program state. Also, when I said "dynamic and reflective" I meant it more in the materialist-dialectical sense: human thought being made up of dynamic relationships between ideas and reflections of the material world.

        If anything, the issue might be that software development is a productive process where workers put something that is very close to their pure, unadulterated thoughts into the commodities being produced. But the division of labor requires that those thoughts be beaten into a box that interfaces well with the rest of the pipeline, because allowing each individual's idiosyncrasies to bleed into each other's code would be a problem. So it's the division of labor that forces software developers to use limiting design patterns instead of following a more freeform process.

        I definitely agree that the conditions under which software is being made, in particular the techniques to remove creativity and force a standardized style of programming in order to make each developer a replaceable part in a vast machine, are making things worse. But I think the imperative style of programming is also flawed in general. We should be seeking ways to make the programming process more automated and less verbose.

        I hope that made some sense. These ideas are just some weird things I've come to think after spending many years in this sphere of work.

        • when I said "dynamic and reflective" I meant it more in the materialist-dialectical sense.

          Did not catch that that was what you were going for, I get it now. I think you're right, but abstraction in any shape already allows us to have the lower level operations shift around fluidly anyway. But having that be handled by the language itself would open up some interesting possibilities.
