
Why did the VIC-II and SID use 6 µm technology in the era of 3 µm and 1.5 µm?






























In short, 3 µm looks like it was the "standard" process size at the time, and it was available to Commodore before the chips were designed. Therefore it looks like using the larger 5 to 7 µm process nodes was a deliberate choice in designing these chips. But I can't seem to find a sound reason why.



What I know so far



As this case study from IEEE Spectrum states, the VIC-II was designed for a 5 µm process, and the SID used a 7 µm process that occasionally dipped to 6 µm.



This doesn't make a lot of sense to me, because by the time design of these chips began in 1981, 5 µm and especially 7 µm processes were outdated: 3 µm was available and used in actual products (like the Intel 8085) no later than 1976. Some products then in development were even using 1.5 µm, such as the Intel 80286 and NEC's 64k RAM, both released in 1982, the same year as the Commodore 64.



The obvious culprit would be if MOS/Commodore specifically didn't have 3 µm yet, but according to this question, they probably had 3.5 µm by 1980, over a year before design of the VIC-II and SID began in 1981.



In interviews, the chip designers have cited chip area as a notable limitation. In other words, there were solid engineering reasons to prefer a smaller process, and working with a larger process had a clear negative impact on the design. So the larger process nodes must have offered something fairly significant to justify such restrictions.



So why did they intentionally design these chips using a larger process node?



Possible (entirely speculative) reasons



These are some things I've thought about, but I have absolutely no basis for believing any of them are true. I'm hoping someone else will know more and can offer some actual insight.




  • Was the older 5 to 7 µm process somehow cheaper than the newer 3.5 µm one?

  • Was 3.5 µm manufacturing not mature enough to risk for the new design? (seems unlikely, but maybe)

  • The newer 3.5 µm was HMOS-I instead of NMOS. Is that somehow significant?

  • The VIC-II & SID were originally designed with the idea of being sold for use in other systems. This explains some design restrictions, like why the VIC-II couldn't assume DRAM is necessarily larger than 16 kB or faster than 2 MHz, even though the specific DRAM in the Commodore 64 was 64 kB and could in theory handle clocks faster than 2 MHz. I can't think of any sensible reason why larger nodes would help broader compatibility, but compatibility clearly influenced other quirky design decisions, so maybe there's a link somehow?


EDIT: Rephrased first summary for clarity, and also rephrased second and fourth paragraphs for clarity










Tags: history, hardware, commodore-64, chip

asked 9 hours ago, last edited 7 hours ago — supernoob5000 (new contributor)

  • Commodore probably had serious trouble with its chip manufacturing, despite having acquired a chip manufacturer.

    – peterh
    9 hours ago


















3 Answers


































In short, 3 µm looks like it was available at the time,




The questions are rather:





  • to whom it was available, and


  • whether it was worth the investment.


Processes aren't something you buy from a supplier; you develop them in house. The fact that Intel had a 3 µm process does not translate into any other manufacturer being able to do the same, and more importantly, actually doing it.



Developing one in house requires a considerable investment of money and time. This is true even when buying an existing process from a competitor.




so it looks like using the larger 5 to 7 µm process nodes was a deliberate choice in designing these chips. But I can't seem to find a sound reason why.




It's what they had the machinery and the experience for. So it rather needs a reason why they should have invested a considerable amount of money into updating/extending their <5 µm production lines to meet increased demand - eventually even building a new fab - when they could make these new chips on available and smoothly running >5 µm production line(s).



In fact, putting new designs on the older lines even solves the task of keeping them occupied, increasing the return on investment in them. The machines were already written off, so everything produced was pure profit. Throwing them away while they were still able to make the chips needed would have been exceptionally stupid.



Adding some more manpower to make the new chips fit the older lines would pay off quite fast.



It might be helpful that Commodore wasn't in the chip business the way Intel was. Their goal wasn't to produce as many chips as possible to make a return, but as many computers as possible. With computers, chips are only a small part of the bill of materials. In addition, only a part of the chips used were produced in house, so every investment there would have had an even smaller return compared with pure chip manufacturers.



When comparing this with today's race for smaller structures, it is also noteworthy that producing SIDs was not done in any competitive setup. They were specialty chips, not commodities (*1), with fixed performance and no race for performance records. They were produced by exactly one chip maker, CSD, for exactly one customer: Commodore. In such a closed relationship, the race is for the lowest production cost at the time and in the existing setting, not for some theoretical lower cost that could be achieved by making rather notable investments (*2).




So the larger process nodes must offer something fairly significant to justify such restriction.




They had existing and mature 5 µm production lines. Using them not only needed the least upfront investment and offered the fastest ramp-up, but also kept those lines productive - all resulting in a better overall profit.



Bottom line: Business decisions are almost never driven by technology.





*1 - In fact, if they had been commodity chips, it is a sure bet that Tramiel would have turned to the cheapest source and shut CSD down the very next day.



*2 - There was no huge projected sales case for any designated computer at the time the chips were designed. Again a good reason to use the less expensive process in terms of setup cost.






answered 8 hours ago by Raffzahn, edited 2 hours ago by Renan




























  • Well, today you can actually buy a chip factory off the shelf, but back then it was still unknown territory.

    – Janka
    7 hours ago






  • I used to tell people that there is a much bigger difference between no chip and a 7 µm chip than there is between a 7 µm chip and a 3 µm chip.

    – A. I. Breveleri
    7 hours ago











  • @A.I.Breveleri Quite a good point that is.

    – Raffzahn
    7 hours ago











  • I like that this provides a plausible rationale, but this needs to explain a few more things before I can accept this answer. Particularly, they had their 3.5 µm line for over a year before chip design began, and over 2 years before mass production. Surely that's enough time for the new production line to mature and mitigate the extra costs? And for business rationale, if your old process is already 3-6 years behind the competition, surely that's NOT the time to try to squeeze additional ROI when that puts your chip at a very real risk of being obsolete upon release?

    – supernoob5000
    6 hours ago











  • @supernoob5000 Erm, it seems as if you're trying to judge what happened back then through today's focus on cutting-edge technology. That race is dictated by the need to squeeze ever more transistors (to increase performance) into a single CPU. Smaller structures are a must, as it's almost impossible to build today's top CPUs in a process from a few years ago. This was not the case back then. As shown, the SID could be produced using the old process. And even more, there was no competition for that product forcing further integration. Who else was doing SIDs, fighting Commodore for market share?

    – Raffzahn
    5 hours ago


































I think that this question can be answered in a more general "why use a particular semiconductor process for a particular chip" way. Even today not all chips are made using the latest process. Choosing a process is all about tradeoffs.



Rough outline:




  1. The more advanced (smaller) the process, the higher the mask costs (so you don't use a more advanced process unless you have to).

  2. A more advanced process also has higher manufacturing costs (a toy cost model after this list makes the tradeoff concrete).

  3. EDA tools and engineering also get more expensive.


  4. You cannot simply take an existing chip, shrink the layout, and manufacture it with a different process, so you need a really good business case to make a new version of a chip.

  5. When the process shrinks, the logic parts of the chip shrink (as does the power consumption of the digital parts), but the analog parts don't necessarily scale that way. Example: if you make a motor driver or power supply chip, you simply need a big enough transistor to handle the power, so you gain almost nothing from a smaller process. If a chip has significant analog parts, the die size will not get much smaller with a process change.

  6. Not all semiconductor devices are available for every process ("semiconductor device" in the sense of flash memory, high-voltage transistors, etc.). Even today, the high-performance CPUs, RAMs, and flash memories in smartphones are separate chips. It also takes time to develop such devices for new processes, so not all types of devices are available when a new process is initially introduced.


  7. Obligatory analogy: each process is like a building material, for example concrete or carbon fiber. You don't build bunkers out of carbon fiber or spacecraft out of concrete. Of course you could; it would just be inefficient.
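
To make points 1 and 2 concrete, here is a minimal sketch of the cost-per-good-die arithmetic, using the classic first-order Poisson yield model Y = exp(-A·D). Everything in it - wafer cost, defect densities, die areas - is a made-up assumption for illustration, not a historical figure for MOS Technology's lines:

    # Toy cost-per-good-die comparison between an old, mature process and a
    # newer, smaller one. All numbers are illustrative assumptions.
    import math

    def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
        """Rough die count: usable wafer area divided by die area (~15% edge loss)."""
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        return int(wafer_area * 0.85 / die_area_mm2)

    def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
        """First-order Poisson yield model: Y = exp(-A * D)."""
        return math.exp(-die_area_mm2 * defects_per_mm2)

    def cost_per_good_die(wafer_cost: float, wafer_diameter_mm: float,
                          die_area_mm2: float, defect_density: float) -> float:
        good = (dies_per_wafer(wafer_diameter_mm, die_area_mm2)
                * poisson_yield(die_area_mm2, defect_density))
        return wafer_cost / good

    # Assumed: mature old line - cheap wafers, low defect density, larger die.
    old = cost_per_good_die(wafer_cost=100.0, wafer_diameter_mm=75.0,
                            die_area_mm2=50.0, defect_density=0.01)
    # Assumed: newer line - pricier wafers, higher early defect density,
    # but the same design shrinks to roughly half the die area.
    new = cost_per_good_die(wafer_cost=200.0, wafer_diameter_mm=75.0,
                            die_area_mm2=25.0, defect_density=0.05)
    print(f"old line: ${old:.2f}/good die, new line: ${new:.2f}/good die")

With these made-up inputs, the mature line wins at roughly $2.20 versus $4.65 per good die, despite its die being twice as large - exactly the kind of outcome the list above describes.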
































  • I appreciate this answer for providing a nice general overview of the decision factors. But I can't accept this answer because the general reasons do not explain the scenario. Specifically, Commodore had the smaller process for over a year before design began, they had clear reasons to prefer a smaller process (chip area + more common), and from die shots, analog parts were not a large part of chip. From your list, that only leaves cost reasons or point 7, neither of which factors in obviously. I would like clarification on which of those factors justified the decision in this case and why.

    – supernoob5000
    6 hours ago



















2















After some more research, I believe I've stumbled across the real answer: the VIC-II and SID used a larger process node because Commodore's fabrication lines circa 1981 were uniquely positioned to produce chips at that size at effectively no production cost whatsoever.



Based on what I've read, here's my best guess at what Commodore's fabrication situation looked like in the months when the VIC-II and SID were being designed:



In 1981, Commodore had two fabrication lines: the older NMOS line, which could handle nothing smaller than 5 µm, and the newer HMOS line, then capable of 3.5 µm. For the new line, Commodore's priority was updating their CPUs, because those stood to benefit the most from smaller process nodes. This is shown by the first products they released from the new line, the MOS 7501 and MOS 8501, both die-shrunk upgrades of the 6510. Additionally, they recognized their process was lagging the rest of the world, so they wanted to shrink to 2 µm in a bid to catch up. This is shown by the MOS 8501, released around the same time as the MOS 7501 but successfully using 2 µm technology. The end result was that the HMOS line was occupied with these two high-priority challenges at least until 1984, when the MOS 7501/8501 were ready.



While this was happening, the older NMOS line was no longer operating at full capacity because the HMOS line was taking over some of the responsibility for CPU fabrication, especially the new R&D fabrication. But chip fabs don't simply turn off if you aren't using them: it costs money and manpower to keep the facilities running even during the slower periods.



This is where the new chips come in. From the very source I cited when asking the question,




Because MOS Technology's fabrication facility was not running at full capacity, the equipment used for C64 test chips and multiple passes of silicon would otherwise have been idle. "We were using people who were there anyway," said Ziembicki. "You waste a little bit of silicon, but silicon's pretty cheap. It's only sand."




In other words, there was a huge design and debugging benefit from using the older line because they could build and debug test chips almost whenever they wanted and there was no cost to doing so.




With this, Winterble explained, a circuit buried deep inside the chips could be lifted out and run as a test chip, allowing thorough debugging without concern for other parts of the circuitry.




Of course, once mass production began, the chips would cost something, because production would no longer be absorbed into overhead. But the mature process still offered a significant cost advantage through production yield.




Not only were the development costs absorbed into company overhead, but there was no mark-up to pay, as there would have been if the chips had been built by another company. And yields were high because the chips were designed for a mature semiconductor-manufacturing process.




Yield seems to have been a significant cost concern from the very beginning of the chips' design.




"We defined in advance the silicon size that would give a yield we were willing to live with..." - Charles Winterble




This is especially true when considering the overall design goals of the machine.




"When the design of the Commodore 64 began, the overriding goals were simplicity and low cost. The initial production cost of the Commodore 64 was targeted at $130; it turned out to be $135." - Al Charpentier




This is enough to construct a comparison of what the two process sizes would offer the VIC-II and the SID:



Pros of the newer, 3.5 µm process:




  • More logic per unit of die area, meaning more features for the chips and fewer design compromises.


Pros of the older, 5 µm process:




  • Higher production yield, meaning lower production cost overall.

  • Zero-cost test chips.

  • Does not impact the high-priority projects of updating the CPU line and improving the new fab process.

  • Adequate chip area: video and sound chips are expected to benefit less (in terms of business value) from smaller process nodes than a CPU would.

  • Better match for one of the key overriding design goals: low cost.


Given all this, of course they went with the older 5 µm process. It's almost hard to see why the newer fab would even have been considered in the first place.






answered by supernoob5000 (the question's author, a new contributor)
























    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "648"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });






    supernoob5000 is a new contributor. Be nice, and check out our Code of Conduct.










    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f12208%2fwhy-did-the-vic-ii-and-sid-use-6-%25c2%25b5m-technology-in-the-era-of-3-%25c2%25b5m-and-1-5-%25c2%25b5m%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    5
















    In short, 3 µm looks like it was available at the time,




    The questions are rather:





    • to whom it was available and


    • is it worth the investment.


    Processes aren't anything you'd buy from some supplier but develop in house. The fact that Intel got a 3 µm process does not translate to any other manufacturer being able to do so and more important doing so.



    Developing one in house requires considerable investment of money and time. This is true even when buying an existing process from a competitor.




    so it looks like using the larger 5 to 7 µm process nodes was a deliberate choice in designing these chips. But I can't seem to find a sound reason why.




    It's what they had machinery for and experience about. So it rather needs a reason why they should have invested considerable amount of money into updating/extending their <5µm production lines to meet increased use - eventually even building a new fab - when they could do these new chips with available and smooth running >5µm production line(s).



    In fact, putting new designs on the older line even solves the task to keep them occupied, increasing return of investment in them. The machines were written off, everything produced was pure profit. Throwing them away when still able to make the chips needed would be exceptional stupid.



    Adding some more manpower to make the new chips fit the older lines would pay off quite fast.



    It might be helpful that Commodore wasn't in the chip business like Intel. Their goal wasn't to produce as many chips as possible a to make a return, but as many computers as possible. With computers chips are only a small part of the bill. In addition only a part of the chips used were produced in house, so every investment there would have an even smaller return than compared with pure chip manufacturers.



    When comparing this with today's race for smaller structures it is also noteworthy that producing SIDs was not done in any competitive setup. They were special to type chips, not commodities (*1), with a fixed performance and no race for performance records. They were produced by exactly one chip maker, CSD, for exactly one customer: Commodore. In such a closed relation the race is for lowest production cost at the time and in the existing setting, not for some theoretical lower cost that can be achieved by making rather notable investments (*2).




    So the larger process nodes must offer something fairly significant to justify such restriction.




    They had existing and mature 5 µm production lines. Using them not only needed the least upfront investment, while having the fastest ramp up as well, but also kept them productive. All resulting in a better over all profit.



    Bottom line: Business decisions are almost never driven by technology.





    *1 - In fact, if they would have been commodity chips, it is a sure bet that Tramiel would have turned for the cheapest source and shut CSD down the very next day.



    *2 - There was no huge projected sales case for the any designated computer at the time the chips were designed. Again a good reason to use the less expensive process regarding setup cost.






    share|improve this answer




























    • Well, today you can actually buy a chip factory from the shelf but back then it was still unknown territory.

      – Janka
      7 hours ago






    • 1





      I used to tell people that there is a much bigger difference between no chip and a 7 µm chip than there is between a 7 µm chip and a 3 µm chip.

      – A. I. Breveleri
      7 hours ago











    • @A.I.Breveleri Quite a good point that is.

      – Raffzahn
      7 hours ago











    • I like that this provides a plausible rationale, but this needs to explain a few more things before I can accept this answer. Particularly, they had their 3.5 µm line for over a year before chip design began, and over 2 years before mass production. Surely that's enough time for the new production line to mature and mitigate the extra costs? And for business rationale, if your old process is already 3-6 years behind the competition, surely that's NOT the time to try to squeeze additional ROI when that puts your chip at a very real risk of being obsolete upon release?

      – supernoob5000
      6 hours ago











    • @supernoob5000 Erm, it seams as if you try to judge what happened back then thru todays focus on cutting edge technology. This race is dictated by the need to squeeze ever more transistors (to increase erformance) into a single CPU. Smaller structures are a must, as it's almost impossible to build today's top CPUs in a process from a few years ago. This was not the case back then. As shown the SID could be produced using the old process. And even more, there was no competition for that product forcing further integration. Who else was doing SIDs, fighting Commodore for market share?

      – Raffzahn
      5 hours ago
















    5
















    In short, 3 µm looks like it was available at the time,




    The questions are rather:





    • to whom it was available and


    • is it worth the investment.


    Processes aren't anything you'd buy from some supplier but develop in house. The fact that Intel got a 3 µm process does not translate to any other manufacturer being able to do so and more important doing so.



    Developing one in house requires considerable investment of money and time. This is true even when buying an existing process from a competitor.




    so it looks like using the larger 5 to 7 µm process nodes was a deliberate choice in designing these chips. But I can't seem to find a sound reason why.




    It's what they had machinery for and experience about. So it rather needs a reason why they should have invested considerable amount of money into updating/extending their <5µm production lines to meet increased use - eventually even building a new fab - when they could do these new chips with available and smooth running >5µm production line(s).



    In fact, putting new designs on the older line even solves the task to keep them occupied, increasing return of investment in them. The machines were written off, everything produced was pure profit. Throwing them away when still able to make the chips needed would be exceptional stupid.



    Adding some more manpower to make the new chips fit the older lines would pay off quite fast.



    It might be helpful that Commodore wasn't in the chip business like Intel. Their goal wasn't to produce as many chips as possible a to make a return, but as many computers as possible. With computers chips are only a small part of the bill. In addition only a part of the chips used were produced in house, so every investment there would have an even smaller return than compared with pure chip manufacturers.



    When comparing this with today's race for smaller structures it is also noteworthy that producing SIDs was not done in any competitive setup. They were special to type chips, not commodities (*1), with a fixed performance and no race for performance records. They were produced by exactly one chip maker, CSD, for exactly one customer: Commodore. In such a closed relation the race is for lowest production cost at the time and in the existing setting, not for some theoretical lower cost that can be achieved by making rather notable investments (*2).




    So the larger process nodes must offer something fairly significant to justify such restriction.




    They had existing and mature 5 µm production lines. Using them not only needed the least upfront investment, while having the fastest ramp up as well, but also kept them productive. All resulting in a better over all profit.



    Bottom line: Business decisions are almost never driven by technology.





    *1 - In fact, if they would have been commodity chips, it is a sure bet that Tramiel would have turned for the cheapest source and shut CSD down the very next day.



    *2 - There was no huge projected sales case for the any designated computer at the time the chips were designed. Again a good reason to use the less expensive process regarding setup cost.






    share|improve this answer




























    • Well, today you can actually buy a chip factory from the shelf but back then it was still unknown territory.

      – Janka
      7 hours ago






    • 1





      I used to tell people that there is a much bigger difference between no chip and a 7 µm chip than there is between a 7 µm chip and a 3 µm chip.

      – A. I. Breveleri
      7 hours ago











    • @A.I.Breveleri Quite a good point that is.

      – Raffzahn
      7 hours ago











    • I like that this provides a plausible rationale, but this needs to explain a few more things before I can accept this answer. Particularly, they had their 3.5 µm line for over a year before chip design began, and over 2 years before mass production. Surely that's enough time for the new production line to mature and mitigate the extra costs? And for business rationale, if your old process is already 3-6 years behind the competition, surely that's NOT the time to try to squeeze additional ROI when that puts your chip at a very real risk of being obsolete upon release?

      – supernoob5000
      6 hours ago











    • @supernoob5000 Erm, it seams as if you try to judge what happened back then thru todays focus on cutting edge technology. This race is dictated by the need to squeeze ever more transistors (to increase erformance) into a single CPU. Smaller structures are a must, as it's almost impossible to build today's top CPUs in a process from a few years ago. This was not the case back then. As shown the SID could be produced using the old process. And even more, there was no competition for that product forcing further integration. Who else was doing SIDs, fighting Commodore for market share?

      – Raffzahn
      5 hours ago














    5














    5










    5










    In short, 3 µm looks like it was available at the time,




    The questions are rather:





    • to whom it was available and


    • is it worth the investment.


    Processes aren't anything you'd buy from some supplier but develop in house. The fact that Intel got a 3 µm process does not translate to any other manufacturer being able to do so and more important doing so.



    Developing one in house requires considerable investment of money and time. This is true even when buying an existing process from a competitor.




    so it looks like using the larger 5 to 7 µm process nodes was a deliberate choice in designing these chips. But I can't seem to find a sound reason why.




    It's what they had machinery for and experience about. So it rather needs a reason why they should have invested considerable amount of money into updating/extending their <5µm production lines to meet increased use - eventually even building a new fab - when they could do these new chips with available and smooth running >5µm production line(s).



    In fact, putting new designs on the older line even solves the task to keep them occupied, increasing return of investment in them. The machines were written off, everything produced was pure profit. Throwing them away when still able to make the chips needed would be exceptional stupid.



    Adding some more manpower to make the new chips fit the older lines would pay off quite fast.



    It might be helpful that Commodore wasn't in the chip business like Intel. Their goal wasn't to produce as many chips as possible a to make a return, but as many computers as possible. With computers chips are only a small part of the bill. In addition only a part of the chips used were produced in house, so every investment there would have an even smaller return than compared with pure chip manufacturers.



    When comparing this with today's race for smaller structures it is also noteworthy that producing SIDs was not done in any competitive setup. They were special to type chips, not commodities (*1), with a fixed performance and no race for performance records. They were produced by exactly one chip maker, CSD, for exactly one customer: Commodore. In such a closed relation the race is for lowest production cost at the time and in the existing setting, not for some theoretical lower cost that can be achieved by making rather notable investments (*2).




    So the larger process nodes must offer something fairly significant to justify such restriction.




    They had existing and mature 5 µm production lines. Using them not only needed the least upfront investment, while having the fastest ramp up as well, but also kept them productive. All resulting in a better over all profit.



    Bottom line: Business decisions are almost never driven by technology.





    *1 - In fact, if they would have been commodity chips, it is a sure bet that Tramiel would have turned for the cheapest source and shut CSD down the very next day.



    *2 - There was no huge projected sales case for the any designated computer at the time the chips were designed. Again a good reason to use the less expensive process regarding setup cost.






    share|improve this answer
















    In short, 3 µm looks like it was available at the time,




    The questions are rather:





    • to whom it was available and


    • is it worth the investment.


    Processes aren't anything you'd buy from some supplier but develop in house. The fact that Intel got a 3 µm process does not translate to any other manufacturer being able to do so and more important doing so.



    Developing one in house requires considerable investment of money and time. This is true even when buying an existing process from a competitor.




    so it looks like using the larger 5 to 7 µm process nodes was a deliberate choice in designing these chips. But I can't seem to find a sound reason why.




    It's what they had machinery for and experience about. So it rather needs a reason why they should have invested considerable amount of money into updating/extending their <5µm production lines to meet increased use - eventually even building a new fab - when they could do these new chips with available and smooth running >5µm production line(s).



    In fact, putting new designs on the older line even solves the task to keep them occupied, increasing return of investment in them. The machines were written off, everything produced was pure profit. Throwing them away when still able to make the chips needed would be exceptional stupid.



    Adding some more manpower to make the new chips fit the older lines would pay off quite fast.



    It might be helpful that Commodore wasn't in the chip business like Intel. Their goal wasn't to produce as many chips as possible a to make a return, but as many computers as possible. With computers chips are only a small part of the bill. In addition only a part of the chips used were produced in house, so every investment there would have an even smaller return than compared with pure chip manufacturers.



    When comparing this with today's race for smaller structures it is also noteworthy that producing SIDs was not done in any competitive setup. They were special to type chips, not commodities (*1), with a fixed performance and no race for performance records. They were produced by exactly one chip maker, CSD, for exactly one customer: Commodore. In such a closed relation the race is for lowest production cost at the time and in the existing setting, not for some theoretical lower cost that can be achieved by making rather notable investments (*2).




    So the larger process nodes must offer something fairly significant to justify such restriction.




    They had existing and mature 5 µm production lines. Using them not only needed the least upfront investment, while having the fastest ramp up as well, but also kept them productive. All resulting in a better over all profit.



    Bottom line: Business decisions are almost never driven by technology.





    *1 - In fact, if they would have been commodity chips, it is a sure bet that Tramiel would have turned for the cheapest source and shut CSD down the very next day.



    *2 - There was no huge projected sales case for the any designated computer at the time the chips were designed. Again a good reason to use the less expensive process regarding setup cost.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited 2 hours ago









    Renan

    1275 bronze badges




    1275 bronze badges










    answered 8 hours ago









    RaffzahnRaffzahn

    68.2k6 gold badges168 silver badges284 bronze badges




    68.2k6 gold badges168 silver badges284 bronze badges
















    • Well, today you can actually buy a chip factory from the shelf but back then it was still unknown territory.

      – Janka
      7 hours ago






    • 1





      I used to tell people that there is a much bigger difference between no chip and a 7 µm chip than there is between a 7 µm chip and a 3 µm chip.

      – A. I. Breveleri
      7 hours ago











    • @A.I.Breveleri Quite a good point that is.

      – Raffzahn
      7 hours ago











    • I like that this provides a plausible rationale, but this needs to explain a few more things before I can accept this answer. Particularly, they had their 3.5 µm line for over a year before chip design began, and over 2 years before mass production. Surely that's enough time for the new production line to mature and mitigate the extra costs? And for business rationale, if your old process is already 3-6 years behind the competition, surely that's NOT the time to try to squeeze additional ROI when that puts your chip at a very real risk of being obsolete upon release?

      – supernoob5000
      6 hours ago











    • @supernoob5000 Erm, it seams as if you try to judge what happened back then thru todays focus on cutting edge technology. This race is dictated by the need to squeeze ever more transistors (to increase erformance) into a single CPU. Smaller structures are a must, as it's almost impossible to build today's top CPUs in a process from a few years ago. This was not the case back then. As shown the SID could be produced using the old process. And even more, there was no competition for that product forcing further integration. Who else was doing SIDs, fighting Commodore for market share?

      – Raffzahn
      5 hours ago



















    • Well, today you can actually buy a chip factory from the shelf but back then it was still unknown territory.

      – Janka
      7 hours ago






    • 1





      I used to tell people that there is a much bigger difference between no chip and a 7 µm chip than there is between a 7 µm chip and a 3 µm chip.

      – A. I. Breveleri
      7 hours ago











    • @A.I.Breveleri Quite a good point that is.

      – Raffzahn
      7 hours ago











    • I like that this provides a plausible rationale, but this needs to explain a few more things before I can accept this answer. Particularly, they had their 3.5 µm line for over a year before chip design began, and over 2 years before mass production. Surely that's enough time for the new production line to mature and mitigate the extra costs? And for business rationale, if your old process is already 3-6 years behind the competition, surely that's NOT the time to try to squeeze additional ROI when that puts your chip at a very real risk of being obsolete upon release?

      – supernoob5000
      6 hours ago











    • @supernoob5000 Erm, it seams as if you try to judge what happened back then thru todays focus on cutting edge technology. This race is dictated by the need to squeeze ever more transistors (to increase erformance) into a single CPU. Smaller structures are a must, as it's almost impossible to build today's top CPUs in a process from a few years ago. This was not the case back then. As shown the SID could be produced using the old process. And even more, there was no competition for that product forcing further integration. Who else was doing SIDs, fighting Commodore for market share?

      – Raffzahn
      5 hours ago

















    Well, today you can actually buy a chip factory from the shelf but back then it was still unknown territory.

    – Janka
    7 hours ago





    Well, today you can actually buy a chip factory from the shelf but back then it was still unknown territory.

    – Janka
    7 hours ago




    1




    1





    I used to tell people that there is a much bigger difference between no chip and a 7 µm chip than there is between a 7 µm chip and a 3 µm chip.

    – A. I. Breveleri
    7 hours ago





    I used to tell people that there is a much bigger difference between no chip and a 7 µm chip than there is between a 7 µm chip and a 3 µm chip.

    – A. I. Breveleri
    7 hours ago













    @A.I.Breveleri Quite a good point that is.

    – Raffzahn
    7 hours ago





    @A.I.Breveleri Quite a good point that is.

    – Raffzahn
    7 hours ago













    I like that this provides a plausible rationale, but this needs to explain a few more things before I can accept this answer. Particularly, they had their 3.5 µm line for over a year before chip design began, and over 2 years before mass production. Surely that's enough time for the new production line to mature and mitigate the extra costs? And for business rationale, if your old process is already 3-6 years behind the competition, surely that's NOT the time to try to squeeze additional ROI when that puts your chip at a very real risk of being obsolete upon release?

    – supernoob5000
    6 hours ago





    I like that this provides a plausible rationale, but this needs to explain a few more things before I can accept this answer. Particularly, they had their 3.5 µm line for over a year before chip design began, and over 2 years before mass production. Surely that's enough time for the new production line to mature and mitigate the extra costs? And for business rationale, if your old process is already 3-6 years behind the competition, surely that's NOT the time to try to squeeze additional ROI when that puts your chip at a very real risk of being obsolete upon release?

    – supernoob5000
    6 hours ago













    @supernoob5000 Erm, it seams as if you try to judge what happened back then thru todays focus on cutting edge technology. This race is dictated by the need to squeeze ever more transistors (to increase erformance) into a single CPU. Smaller structures are a must, as it's almost impossible to build today's top CPUs in a process from a few years ago. This was not the case back then. As shown the SID could be produced using the old process. And even more, there was no competition for that product forcing further integration. Who else was doing SIDs, fighting Commodore for market share?

    – Raffzahn
    5 hours ago





    @supernoob5000 Erm, it seams as if you try to judge what happened back then thru todays focus on cutting edge technology. This race is dictated by the need to squeeze ever more transistors (to increase erformance) into a single CPU. Smaller structures are a must, as it's almost impossible to build today's top CPUs in a process from a few years ago. This was not the case back then. As shown the SID could be produced using the old process. And even more, there was no competition for that product forcing further integration. Who else was doing SIDs, fighting Commodore for market share?

    – Raffzahn
    5 hours ago













    2















    I think that this question can be answered in a more general "why use a particular semiconductor process for a particular chip" way. Even today not all chips are made using the latest process. Choosing a process is all about tradeoffs.



    Rough outline:




    1. The more advanced (smaller) process the higher the mask costs (so you don't use a more advanced process unless you have to)

    2. More advanced process has a higher manufacturing cost

    3. EDA tools and engineering cost also get more expensive


    4. You can not simply take an existing chip, shrink the layout and manufacture it with a different process, so you need a really good business case to make a new version of a chip

    5. When the process shrinks the logic parts of the chip shrink (and the power consumption of the digital parts), but the analog parts don't necessairly scale that way. Example: if you make a motor driver or power supply chip you simply need a big enough transistor to handle the power, so you get almost nothing for using a smaller process. If a chip has significant analog parts the die size will not get much smaller with a process change.

    6. Not all semiconductor devices are available for every process ("semiconductor device" in the sense of flash memory, high-voltage transistor etc.). Even today high-performance CPUs, RAMs and flash memories in smartphones are separate chips. It also takes time to develop such devices for new processes, so not all types of devices are available when a new process is initially developed.


    7. Obligatory automotive analogy: each process is like a building material, for example concrete and carbon fiber, you don't build bunkers with carbon fiber and spacecraft out of concrete. But of course you could. It would be just inefficient.

answered 8 hours ago

filofilo
1512 bronze badges
I appreciate this answer for providing a nice general overview of the decision factors. But I can't accept it, because the general reasons do not explain this scenario. Specifically, Commodore had the smaller process for over a year before design began, they had clear reasons to prefer a smaller process (chip area, plus it was more common), and from die shots, analog parts were not a large part of the chip. From your list, that leaves only the cost reasons or point 7, neither of which obviously applies. I would like clarification on which of those factors justified the decision in this case, and why.

– supernoob5000
6 hours ago

After some more research, I believe I've stumbled across the real answer: the VIC-II and SID used a larger process node because Commodore's older fabrication line circa 1981 was uniquely positioned to produce chips at that size at effectively no additional cost during development.



    Based on what I've read, here's my best guess at what Commodore's fabrication situation looked like in the months when the VIC-II and SID were being designed:



In 1981, Commodore had two fabrication lines: the older NMOS line, which could handle nothing smaller than 5 µm, and the newer HMOS line, then capable of 3.5 µm. For the new line, Commodore's priority was updating their CPUs, because those stood to benefit the most from smaller process nodes. This is shown by the first products released from the new line, the MOS 7501 and MOS 8501, both die-shrunk upgrades of the 6510. Additionally, Commodore recognized their process was lagging the rest of the world, so they wanted to shrink to 2 µm in a bid to catch up. This is shown by the MOS 8501, released around the same time as the MOS 7501 but successfully using 2 µm technology. The end result was that the HMOS line was occupied with these two high-priority challenges at least until 1984, when the MOS 7501/8501 were ready.



While this was happening, the older NMOS line was no longer operating at full capacity, because the HMOS line was taking over some of the responsibility for CPU fabrication, especially the new R&D work. But chip fabs don't simply turn off when you aren't using them: it costs money and manpower to keep the facilities running even during slower periods.



This is where the new chips come in. From the very source I cited when asking the question:




    Because MOS Technology's fabrication facility was not running at full capacity, the equipment used for C64 test chips and multiple passes of silicon would otherwise have been idle. "We were using people who were there anyway," said Ziembicki. "You waste a little bit of silicon, but silicon's pretty cheap. It's only sand."




    In other words, there was a huge design and debugging benefit from using the older line because they could build and debug test chips almost whenever they wanted and there was no cost to doing so.




With this, Winterble explained, a circuit buried deep inside the chips could be lifted out and run as a test chip, allowing thorough debugging without concern for other parts of the circuitry.




Of course, once mass production began, the chips would cost something, because their production could no longer be absorbed into overhead. But the mature process still offered a significant cost advantage through higher production yield.




Not only were the development costs absorbed into company overhead, but there was no mark-up to pay, as there would have been if the chips had been built by another company. And yields were high because the chips were designed for a mature semiconductor-manufacturing process.




Yield seems to have been a significant cost concern from the very beginning of the chips' design.




    "We defined in advance the silicon size that would give a yield we were willing to live with..." - Charles Winterble




    This is especially true when considering the overall design goals of the machine.




    "When the design of the Commodore 64 began, the overriding goals were simplicity and low cost. The initial production cost of the Commodore 64 was targeted at $130; it turned out to be $135." - Al Charpentier




This is enough to construct a comparison of what the two process sizes would offer the VIC-II and the SID:

Pros of the newer, 3.5 µm process:

• More logic in the same chip area (roughly (5/3.5)² ≈ 2× the density), meaning more features for the chips and fewer design compromises.

Pros of the older, 5 µm process:

• Higher production yield, meaning lower production cost overall.
• Zero-cost test chips.
• Does not impact the high-priority projects of updating the CPU line and improving the new fab process.
• Adequate chip area: video and sound chips are expected to benefit less (in terms of business value) from smaller process nodes than a CPU.
• Better match for one of the key overriding design goals: low cost.


Given all this, of course they went with the older 5 µm process. It's almost hard to see why the newer fab would even have been considered in the first place.
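
As a final illustrative sketch, the comparison above can be tied together in one line of arithmetic: cost per good die ≈ wafer cost / (dies per wafer × yield). The numbers below are assumptions for illustration only, not Commodore's actual figures.

    import math

    # Rough cost per good die, reusing the Poisson yield model above.
    def cost_per_good_die(wafer_cost, wafer_area_cm2, die_area_cm2, defect_density):
        dies_per_wafer = int(wafer_area_cm2 / die_area_cm2)
        good_fraction = math.exp(-defect_density * die_area_cm2)
        return wafer_cost / (dies_per_wafer * good_fraction)

    # Mature 5 µm line: cheaper wafers, bigger die, low defect density.
    print(cost_per_good_die(100, 70, 0.25, 2.0))    # ~$0.59 per good die
    # Newer 3.5 µm line: pricier wafers, half-size die, more defects.
    print(cost_per_good_die(150, 70, 0.1225, 8.0))  # ~$0.70 per good die

Even before counting the higher mask and engineering costs of the newer process, the older line delivers good dies slightly more cheaply under these assumptions, despite each die being twice the size.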

answered 3 hours ago

supernoob5000
514 bronze badges