What was the rationale behind 36 bit computer architectures?


Was there some particular design theory or constraint that made a 36 bit word size attractive for early computers? As opposed to the various power-of-2 word sizes which seem to have won out?










architecture

7 votes. Asked 8 hours ago by Mark Harrison (new contributor); edited 1 hour ago.
  • Related question: retrocomputing.stackexchange.com/questions/1621/…

    – snips-n-snails
    6 hours ago











  • Back when people were starting to expect 32-bit integers, your Lisp interpreter could store 32 bits' worth of immediate data and a 4-bit type code in a single machine word. (Don't ask me how I know!)

    – Solomon Slow
    4 hours ago


















4 Answers
Answer (5 votes), answered 5 hours ago by Raffzahn
Was there some particular design theory or constraint that made a 36 bit word size attractive for early computers?




Besides integer arithmetic, 36-bit words work quite well with two different byte sizes: six and nine. Six bits was what was needed to store characters of the standard code for data transmission at the time: Baudot code, or more precisely ITA2.
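As a rough illustration (a minimal Python sketch of my own; the word value and field order are made up, not any particular machine's format), a 36-bit word divides evenly into either six 6-bit or four 9-bit bytes:

    # Split a 36-bit word into equal-sized fields; 36 is divisible by both 6 and 9,
    # so neither layout wastes any bits.
    def split_word(word, field_bits, word_bits=36):
        """Return the fields of `word`, most significant first."""
        assert word_bits % field_bits == 0
        n = word_bits // field_bits
        mask = (1 << field_bits) - 1
        return [(word >> (field_bits * (n - 1 - i))) & mask for i in range(n)]

    word = 0o123456701234                         # an arbitrary 36-bit value, in octal
    print([oct(f) for f in split_word(word, 6)])  # ['0o12', '0o34', '0o56', '0o70', '0o12', '0o34']
    print([oct(f) for f in split_word(word, 9)])  # ['0o123', '0o456', '0o701', '0o234']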




As opposed to the various power-of-2 word sizes?




There is no inherent benefit of power of two word sizes. Any number can do.



Even more, there were no 'various power-of-two sizes' in the early and not-so-early days. Before the IBM /360 settled on a 32-bit word made of four 8-bit bytes, each holding two nibbles, power-of-two word sizes were an extreme exception (I can't come up with any besides SAGE and IBM Stretch). The vast majority used word sizes divisible by 3, not least to allow the use of octal representation. Before the /360 with its 8-bit bytes, octal was as common to computer scientists as hex is today - heck, Unix carries this legacy to the present, making everyone learn octal at a time when hex is the generally accepted way to display binary data.
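To make the octal point concrete (a small sketch of my own, not taken from the answer): a word size divisible by 3 prints as a whole number of octal digits, which a 32-bit word does not.

    # 36 bits map onto exactly 12 octal digits (3 bits each); 32 bits do not divide
    # evenly by 3, so the leading octal digit of a 32-bit word carries only 2 bits.
    # Hexadecimal (4 bits per digit) is the natural fit for 32-bit words instead.
    w36 = (1 << 36) - 1
    w32 = (1 << 32) - 1
    print(f"{w36:o}")   # 777777777777  (12 full octal digits)
    print(f"{w32:o}")   # 37777777777   (11 digits, top digit only reaches 3)
    print(f"{w32:x}")   # ffffffff      (8 full hexadecimal digits)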



Now, the reason why Amdahl chose 8-bit bytes is rather simple: it was the most efficient way to store two BCD digits within a byte, and thus within a word as well. Operating in BCD was one main requirement for the /360 design, as it was meant not only to be compatible with, but also to replace, all prior decimal machinery.
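A hedged sketch of what "two BCD digits within a byte" means in practice (my own example, not a description of /360 instructions):

    # Packed BCD: each decimal digit takes 4 bits, so an 8-bit byte holds exactly two.
    def to_packed_bcd(n):
        """Encode a non-negative integer as packed-BCD bytes, two digits per byte."""
        digits = [int(d) for d in str(n)]
        if len(digits) % 2:                 # pad to an even number of digits
            digits.insert(0, 0)
        return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

    print(to_packed_bcd(1234).hex())   # '1234'  -> bytes 0x12 0x34
    print(to_packed_bcd(987).hex())    # '0987'  -> bytes 0x09 0x87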



What seems today like the 'natural' use of powers of two is just a side effect of being able to handle decimal on a binary computer.



Conclusion: as so often in computing, the answer is IBM /360 and the rest is history :)






  • "There is no inherent benefit of power of two word sizes. " This is the most important part of this answer. Before microprocessors, computers were literally assembled by hand. If you didn't need more bits, you didn't wire them up.

    – DrSheldon
    15 mins ago



















Answer (4 votes), answered 7 hours ago by Michel Keijzers (new contributor)
The Wikipedia page 36-bit lists some reasons (all copied from the page):




  • "This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code. "



  • And for characters:




    • six 5.32-bit DEC Radix-50 characters, plus four spare bits

    • six 6-bit Fieldata or IBM BCD characters (ubiquitous in early usage)

    • six 6-bit ASCII characters, supporting the upper-case unaccented letters, digits, space, and most ASCII punctuation characters. It was used on the PDP-6 and PDP-10 under the name sixbit.

    • five 7-bit characters and 1 unused bit (the usual PDP-6/10 convention, called five-seven ASCII)

    • four 8-bit characters (7-bit ASCII plus 1 spare bit, or 8-bit EBCDIC), plus four spare bits

    • four 9-bit characters (the Multics convention).
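As an illustration of one of the layouts listed above, here is a minimal Python sketch (my own; it assumes plain 7-bit ASCII and puts the spare bit in the low-order position) of packing five 7-bit characters into a 36-bit word:

    # Pack five 7-bit ASCII characters into a 36-bit word, leaving one spare bit
    # (roughly the "five 7-bit characters and 1 unused bit" layout from the list).
    def pack_5x7(text):
        assert len(text) == 5
        word = 0
        for ch in text:
            word = (word << 7) | (ord(ch) & 0x7F)
        return word << 1                        # one unused low-order bit

    def unpack_5x7(word):
        word >>= 1
        return "".join(chr((word >> (7 * i)) & 0x7F) for i in range(4, -1, -1))

    w = pack_5x7("HELLO")
    print(f"{w:012o}")      # the word shown as 12 octal digits
    print(unpack_5x7(w))    # HELLO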








Answer (3 votes), answered 7 hours ago by Erik Eidt

36 bit word size attractive




Many sizes have been tried, but fundamentally, this comes down to a certain precision; from Wikipedia on 36-bit:





Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code.

As opposed to the various power-of-2 word sizes?




It is the lack of any requirement to conform to pre-existing specifications: there was no internet, and even simple disc files were not easily shared between computers back in those days.






Answer (1 vote), answered 2 hours ago by Curt J. Sampson, edited 2 hours ago
The key point made by Wikipedia seems to be:




Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator....Computers, as the new competitor, had to match that accuracy....




Many early computers did this by storing decimal digits. But when switching to binary:




Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum).




35 bits is obviously a slightly more awkward size than 36 bits anyway, but there are other reasons to choose 36 if your minimum size is 35 bits.
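A quick arithmetic check of that minimum (my own, assuming one sign bit plus magnitude bits): ten signed decimal digits require reaching ±9,999,999,999, which 34 magnitude bits cover but 33 do not.

    # Ten signed decimal digits need the range +/- 9,999,999,999.
    ten_digits = 10**10 - 1
    print(2**33 - 1 >= ten_digits)   # False: 34 bits total (sign + 33 magnitude) fall short
    print(2**34 - 1 >= ten_digits)   # True:  35 bits total (sign + 34 magnitude) are enough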





1. 36 bits was on average a bit more efficient when packing characters into a word, especially for the 6-bit character encodings common at the time:

       Char size | 35-bit word       | 36-bit word
       ----------+-------------------+------------------
       6-bit     | 5 + 5 bits unused | 6 + 0 bits unused
       7-bit     | 5 + 0 bits unused | 5 + 1 bit unused
       8-bit     | 4 + 3 bits unused | 4 + 4 bits unused


2. If you intend to make smaller computers later, having a word length that is exactly divisible by two makes some level of data interoperability easier, if not perfect; see the sketch below. (Numerical data can easily be split into high and low words, and 6-char x 6-bit words can be split into two 3-char words, but packed 7- and 8-bit character data would split parts of characters between words.)
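A minimal sketch of that halving point (my own example; the word value is arbitrary): a 36-bit word splits into two 18-bit halves on a character boundary for 6-bit text, which a 7- or 8-bit packing would not do.

    # Split a 36-bit word into two 18-bit halves, e.g. for a smaller 18-bit machine.
    def halves(word, bits=36):
        half_bits = bits // 2
        mask = (1 << half_bits) - 1
        return (word >> half_bits) & mask, word & mask

    hi, lo = halves(0o656667707172)   # six 6-bit characters, two octal digits each
    print(f"{hi:06o} {lo:06o}")       # 656667 707172 -- three whole characters per half
    # With five 7-bit characters per word, the third character would straddle the
    # 18-bit boundary, so the halves could not be used as independent words of text.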






